I know it's early days on this, but compilation speed is the downside to Rust IMO. Having worked in a Rust monorepo, my number one complaint was compilation speed. It made CI/CD more expensive and it could really slow down dev time if we needed to remove the cache (happened sometimes - not cargo's fault, actually it's a docker bug, but still).
It’s unlikely that it’ll ever get dramatically better. It’s already been heavily optimized, and the Rust compiler now has more parallelism than pretty much any other mainstream compiler. Language design choices make Rust more challenging to compile than a language (like Go) that is specifically designed for fast compilation.
I don't agree. There are a lot of things on the table, performance-wise.
1. The compiler could ship binary artifacts, which would avoid all compilation of build scripts/ proc macros, and allow those to be compiled with performance optimizations enabled. This would be huge on its own.
2. Cranelift could potentially improve backend codegen compile times significantly as well.
3. Link times are still suboptimal, mold is promising here.
We can definitely still get significant wins out of the compiler.
Pretty sure compile times can get cut in half (or better) with those changes.
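On the linking point: mold can already be dropped in today on Linux without waiting for official toolchain integration. A sketch of the usual setup (the target triple and clang-as-linker-driver are assumptions; adjust for your platform):

```toml
# .cargo/config.toml — use clang as the linker driver and ask it to use mold.
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```

Alternatively, `mold -run cargo build` wraps a single build without any config changes.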
Maybe, but even twice as fast would still make it a “slow compiling language” in comparison to a “fast compiling language” like Go or Pascal.
This is not a knock on Rust—I doubt it’s possible to do what Rust does—including zero overhead abstractions—in a fast compiling language. Go certainly pays a performance penalty with things like boxed generics.
Twice as fast (or more) is just what I'm aware of in terms of "things that are possible to do today but aren't the default/ would take work to hack in". I don't even know what other options there are beyond that.
But sure, twice as fast isn't fast, it's just faster. My point is that we're not at the point of serious diminishing returns, there's tons of stuff left to do.
If there was a magic pot of gold, it would be technically possible to precompile every crate version with every rustc version on every supported platform and distribute those prebuilt rlibs to users through cargo. That would help with first compile times when using the standard tooling, and not just for proc macros.
Different people with different use cases have different complaints. I haven't quantified it, but I've certainly seen complaints about both cases from different people.
Go’s generics aren’t boxed. At least, not in the sense that Java’s are. For example, you can write generic functions that operate over slices of unboxed values.
Still worse than true monomorphized generics in C#, which also has fast compilation times (by nature of being JIT-compiled, though its AOT target is still faster than Rust once you've downloaded dependencies).
Go’s implementation strategy for generics is essentially monomorphization plus obvious code size optimizations (e.g. don’t generate different code for different pointer types, given that they all have the same underlying representation). Do you have a specific scenario in mind where Go’s implementation strategy carries a significant performance penalty? I think there are possibly some misconceptions in this thread about how Go’s implementation actually works.
It is a knock on Rust. The circumstances of Rust's state of existence in 2023, as a language created in this millennium but not in the last decade, are absurd.
> I doubt it’s possible to do what Rust does—including zero overhead abstractions—in a fast compiling language
People packaging releases for software written in Rust and others who are passive consumers and finding themselves downloading some project repo to compile from source for whatever reason (e.g. because the creators don't do binary releases themselves) don't need the Rustlang toolchain to do the things that active contributors to a given project (who want type system diagnostics, etc.) need from it.
I'd call this oversight a massive lack of imagination on the part of TPTB, but that would be wrong, because there is no need to imagine the differences between these use cases. They exist. An adequate toolchain for dealing with projects written in Rust—despite the deliberate decisions made during language design that led to these problems—does not.
> 2. Cranelift could potentially improve backend codegen compile times significantly as well.
I've been told that the Cranelift team (at least for the time being) doesn't intend to focus on the optimizer to a degree where it would be competitive with LLVM's optimizers (which would also be a huge effort). So if you want faster compile times, you have to take significant performance hits (which, for a lot of code compiled in CI, is not a trade-off people are willing to make).
Yes, to be clear, Cranelift would be suitable for dev and test builds, you'd likely use llvm for release builds. So in your CI builds you'll almost certainly stick to llvm.
Beyond specific optimization and implementation details of a compiler, the three variables of "compilation speed", "generated code optimization" and "language expressiveness" are fundamentally in tension. In order to move one axis you have to affect one or both of the other two.
It would be great if people would pay Rui to make mold versions for Windows and Mac, which ideally would be required before making it part of the official Rust toolchain.
He got monetization the wrong way around, IMO. Most CI runs on Linux, but most developers are on Windows or macOS, so he should have made the Linux builds paid while keeping the local developer builds on Windows and macOS free.
I doubt that anyone cares all that much about linking times in the CI. And even if someone does, it's probably an individual developer or team, ie, someone without decision power to pay for something as niche as a linker.
Also, mold was designed as an alternative to gold/lld, so it would need to be open-source and free on its main platform: Linux.
I care deeply about linking times on CI. It's very frustrating having your code all build and run tests locally just to wait a long time for it to pass all of the CI barriers. Plus, CI builds often go stale much faster, so you're looking at much longer build times without caches.
Sure, but you're not really contradicting me unless you're able to get your company to pay for faster tooling. And if you can, why haven't you already?
Well, yes, that is the crux of this argument, that one can convince their employer to use mold. Otherwise, what is the point of using it? Desktop users by and large will not notice a small 3-5% improvement in compile times while those that pay for CI will.
Well, CI is where the costs are, and if the application is big enough, even a few percent reduction via faster linking times would equate to lower costs, while in contrast, developers won't really care or notice a few percent reduction on their local machine.
It's AGPL on Linux now, and they sell commercial licenses for companies that won't touch that license, and they were contemplating earlier making mold only available under a non-free source available license like BSL, so there's no "requirement" as such that it be free and open source, even on Linux.
> Most CI is on Linux, but most developers are on Windows or macOS
Do you have any data on this? Maybe that's industry-dependent, but I hardly know any Windows developers (not even talking about macOS; that's almost nil) outside of video games and web dev. 100% of the Rust devs I know use Linux, to keep on the subject.
Data that most people don't use Linux as their day-to-day desktop OS for development? I suppose you can just look at desktop Linux statistics, which shows <5% usage. In my experience, most use macOS, or Windows via WSL2, which does use Linux but I am not sure if that is actually reflected in any desktop OS statistics.
I agree with this assessment, despite the optimism of some others. C++ has had slow compile times since forever, and so will Rust. Rust does a lot more work at compile time than most other popular languages. And it's largely stuff that's fundamental to the language. For example, besides borrow checking, the de facto default way to do polymorphism/generic programming in Rust is at compile time via what is essentially code-gen. In Java if you write `void useFoo(Foo foo)`, it'll compile quickly and will use runtime polymorphism to make sure that the argument is a subtype of `Foo`; in Rust if you write `fn use_foo(foo: impl Foo)`, the compiler is going to spit out a `use_foo` definition for each concrete type that is passed to `use_foo`. That takes time.
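The monomorphization cost described above can be seen in a tiny sketch (function and trait bound are illustrative, not from any particular codebase):

```rust
use std::fmt::Display;

// A generic function: rustc emits a separate machine-code copy for
// each concrete type it is instantiated with (monomorphization).
fn describe<T: Display>(x: T) -> String {
    format!("value: {x}")
}

fn main() {
    // Two call sites with two different types mean two instantiations
    // for the compiler (and LLVM) to process.
    println!("{}", describe(42));      // describe::<i32>
    println!("{}", describe("hello")); // describe::<&str>
}
```

Each additional concrete type used at a call site is more code the backend has to generate and optimize, which is part of why heavily generic crates are slow to compile.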
That being said, I definitely find the trade-off worth it. Though, I've never been the kind of programmer that desires the constant iteration and feedback of something like "REPL driven development".
> C++ has had slow compile times since forever, and so will Rust.
Rust has a massive advantage, which is having a 'sanctioned' package manager and build-time capabilities. A huge part of Rust's slowdown is due to:
a) Having to compile build scripts
b) Those build scripts being built without optimizations (100s of times slower at runtime)
If cargo + crates.io supports pre-built dependencies that is a massive optimization.
This isn't theoretical or optimistic, it's just a fact: we can already see this by compiling build and proc-macro crates with optimizations; it's just not the default, and they still have to be compiled once. If you remove that compilation time, again, it's not theoretical, it's turning N time spent on those deps into 0 time spent.
There is easily a 200% performance win available, just from the known optimizations that are on the table.
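The opt-in version of this already exists today. A sketch of the Cargo.toml profile override that builds build scripts and proc macros (and their dependencies) with optimizations, even in dev builds:

```toml
# Cargo.toml: optimize build scripts and proc macros in dev builds.
# They still compile once, but run orders of magnitude faster.
[profile.dev.build-override]
opt-level = 3
```

This trades a slower first build of those crates for much faster code generation on every subsequent build that invokes them.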
Rust has another advantage in the language itself- generic code can be type-checked and (partially) optimized before being instantiated.
When you export a generic function in C++, every file that pulls it in has to re-parse it, and every instantiation has to re-type-check it. C++20 modules should help with the first part, but they can't help with the second (and neither can concepts). Further, separate translation units can wind up duplicating the same instantiations, which the linker has to deduplicate.
When you export a generic function in Rust, by the time it gets pulled in somewhere else, it takes the form of pre-parsed, pre-type-checked MIR. It can also be pre-optimized, so type-independent optimization work is shared between instantiations. The compiler can also tell, before instantiation, which type parameters a function does not actually depend on, and essentially erase them ("polymorphization"). Further, Rust's compilation model reduces the redundant duplicate instantiations C++ does, both by using larger translation units and by automatically sharing any instantiations in dependencies with their dependents (though you can do this by hand in C++).
(Incidentally, these differences also apply to inline functions- in C++ you wind up putting their definitions in headers and recompiling them from scratch over and over; in Rust they are shared MIR form.)
> we can already see this by compiling build and proc macro crates with optimizations, it's just not the default and they still have to be compiled once.
I'm hopeful something like watt (https://github.com/dtolnay/watt) will land in Cargo that'll allow us to ship pre-compiled wasm blobs for proc-macros so we can just have sandboxed binaries.
I think the whole point is to prevent build scripts from doing arbitrary things. The sandbox should give access to the source code being built, record changes to these files (and/or new files generated in the same directories), and that's about it.
C++ compile times are awful inasmuch as you have to compile multiple times, because the "template barf" makes finding root causes very challenging, especially with multiple problems.
Rust makes the problems easier to fix, IMHO. So, maybe even with same (or slightly longer) compile times, you'll hopefully have faster time to delivery.
In fact, in my experience, Rust has faster time to delivery than any other language I've used. It takes forever to compile, but I have so many fewer runtime bugs that have to be caught (hopefully) by testing, that it still comes out ahead, overall (again, for me and my various projects).
I also find write-time to not be as slow as others complain about, except when it comes to async/futures where it is, indeed, pretty rough. But, if I sit and think about how many times I have to flip back and forth between my code and some library code to try and guess what exceptions it may or may not throw in other languages or whether something could be null or not, I find that the dev times aren't so much better in these other languages as people sometimes claim.
Sure, if you're a fulltime JavaScript dev with 10 years of experience, you might remember things like that calling the Array constructor with 0 or >1 arguments creates an array with those values, but if you call it with exactly 1 number, it will create an empty array with that capacity. But, since I have to switch between many languages regularly, my time to delivery is significantly reduced by nonsense like that. Likewise, it's reduced by NPEs in Java, double-frees in C++, Kotlin's inane idea to use exceptions for errors and coroutine control-flow, etc, etc.
I just want to note that I fully agree that Rust is, ultimately, an extremely productive language. In my considerable experience with Rust it is the most productive language I have ever written code with professionally.
The fact that my only complaint is that compile times are slower than I'd like should be seen as high praise.
It’s not really that Go is better designed for fast compilation; it is just a plain language where the compiler can spit out vaguely optimized code and call it a day.
Rust’s unique feature itself fundamentally depends on extensive static analysis. It’s not a design choice, it is pretty much what Rust is - a low-level language without a GC that is still memory safe. The price for that is hefty compile times.
> It’s not really that go is better designed for fast compilation
One of the explicit goals, by Go's creators, was fast build times. I still remember Rob Pike introducing Go during an all-hands at Google, where he talked about the very long build times for C++ and Java in Google's monorepo, and then showed some promising demos. (Most of us rolled our eyes at it then, because it was just a "hello world", but it's quite impressive how the language has evolved and remained true to its goals.)
> - it is just a plain language where the compiler can just spit out vaguely optimized code, and call it a day.
It's a simple language, but I wouldn't call it plain, nor characterize the optimizers that way.
It is not faster at compilation than Java, which was not particularly designed for such.
Also, as can be seen, Go is not a well-designed language, having language warts we've known about for 50 years. I would take the creators’ claims with a huge grain of salt.
But Java is inspired by Smalltalk, which is a late-binding language that defers most things to runtime. I believe in Java you can generate bytecode directly as you’re parsing the source file.
Java is compiled to bytecode, for later compilation to machine code at runtime (JIT). Go is compiled AOT, straight to machine code. It makes no sense to compare them.
Unless you meant that Java's AOT compilation is faster than Go's?
The parent comment explicitly mentioned that Java is slow at compilation, which is just false.
Also, there are single-pass compilers that produce machine code; they are not fundamentally slower than a bytecode generator. Of course, extensive optimizations will be more expensive.
I do think highly of Rob Pike and Ken Thompson for their IT work, but they are simply not good at language design, which just shows that PL design is quite unlike working on an OS.
Both statements, because unless otherwise qualified you're comparing apples to oranges when you say Java compiles as fast as Go. There's always going to be more overhead on running the Java bytecode on the JVM than there will be when running the native instructions generated by a compiler (even as "unoptimized" as Go is).
And someone that makes that assertion with a straight face without this caveat is not someone that should be dissing Rob Pike about language design.
Profiling the compilation process suggests that this isn't the case. Rust's higher level passes are rarely the dominant part of execution time.
Check out https://github.com/lqd/rustc-benchmarking-data/tree/main/res... and the other benchmarks in that repository for some data on how real world crates compilation times are spent. You'll find that backend code generation and optimization dominate most crates compile times. There are a few exceptions: particularly macro heavy crates, a couple crates with deeply nested types that hit some quadratic behavior in the compiler. But overall, the backend is still the largest piece.
The front end is time-consuming enough that replacing the backend with something lightweight like Go’s wouldn’t get you a 5-10x improvement, which is what I think you’d need to really move the needle on user perception. Moreover, a lot of the backend slowdown is due to front-end choices like monomorphization, which generates large amounts of intermediate code that must then be optimized away.
I doubt that a hypothetical version of Rust that avoided monomorphization would compile any faster. I remember doing experiments to that effect in the early days and found that monomorphization wasn't really slower. That's because all the runtime bookkeeping necessary to operate on value types generically adds up to a ton of code that has to be optimized away, and it ends up a wash in the end. As a point of comparison, Swift does all this bookkeeping, and it's not appreciably faster to compile than Rust; Swift goes this route for ABI stability reasons, not for compiler performance.
What you would need to go faster would be not only a non-monomorphizing compiler but also boxed types. That would be a very different language, one higher-level than even Go (which monomorphizes generics).
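The two compilation strategies under discussion can be contrasted in a small sketch (function names are illustrative):

```rust
use std::fmt::Display;

// Monomorphized generic: one compiled copy per concrete type T,
// with static dispatch and full optimization potential.
fn joined_static<T: Display>(x: T) -> String {
    format!("[{x}]")
}

// Type-erased "boxed" version: a single compiled copy; the call to
// Display::fmt goes through a vtable at runtime. Less code to
// generate, but a dispatch cost and fewer optimization opportunities.
fn joined_dyn(x: &dyn Display) -> String {
    format!("[{x}]")
}

fn main() {
    println!("{}", joined_static(7));
    println!("{}", joined_dyn(&7));
}
```

A non-monomorphizing language effectively forces everything through the second path, which is the design trade-off being described here.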
Just wanted to note that Go does only partial monomorphization: it monomorphizes per gcshape, not per type. This severely limits the optimization potential and adds a runtime dispatch cost, at least in its initial implementation.
Then there is an open niche for a “development mode”, that outputs barely optimized binaries with proper error handling, fast. (I do know about debug, etc).
It already exists: it's called “debug” mode and it's what you get when you don't compile in release mode. The biggest problem with debug mode is how slow the unoptimized code is: for back-end stuff it doesn't matter, but for things like gamedev you want your dependencies compiled in release mode (fortunately, Cargo lets you specify that some deps should be compiled with optimizations even when your project is compiled in debug mode).
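The per-dependency trick mentioned here looks roughly like this in Cargo.toml (opt levels are illustrative):

```toml
# Your own crate stays unoptimized, so incremental rebuilds stay fast...
[profile.dev]
opt-level = 0

# ...while all dependencies are built with optimizations. They compile
# once, then sit in the cache, so dev rebuilds barely notice.
[profile.dev.package."*"]
opt-level = 3
```

This is a common setup for gamedev and other hot-loop-heavy projects where unoptimized dependency code is unusably slow.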
This is a HN thread about a blog post about how compile times have become dramatically better thanks to newly introduced parallelism in an area that was completely single threaded.
> However, at this point the compiler has been heavily optimized and new improvements are hard to find. There is no low-hanging fruit remaining. But there is one piece of large but high-hanging fruit: parallelism.
From discussions I've seen, there's not much high-hanging fruit left either, short of rewriting the entire compiler for better incremental compilation.
I think if you're talking about the compiler getting faster at what it does today, how it does it today, that's true. But that's a heavy constraint. If we got support for binary dependencies, that wouldn't be a compiler optimization in the same sense as parallelism is, but it would radically improve compile times for the average project.
Yeah, but binary dependencies or watt-style precompiled macros aren't going to improve the build times people really care about: incremental build times. The parallel frontend is plausibly the last major improvement we'll see on that front for years.
Incremental matters more than clean build times because (A) you're likely to do a lot more of them (B) they break developer flow more than waiting on CI does (C) at least in theory, you can always add more cores to your CI and get reasonable speedups, less so for incremental.
> Yeah, but binary dependencies or watt-style precompiled macros aren't going to get improve the build times people really care about, incremental build times.
Why not? If I add a new struct with `#[derive(serde::Serialize)]` I'll benefit from serde being compiled with optimizations.
> they break developer flow more than waiting on CI does
It might not get 10x better, but 3x isn't outside the realm of possibility. Just swapping the LLVM backend for cranelift can cut compile times in half.
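For reference, Cranelift can already be selected per profile on nightly. A sketch, assuming a nightly toolchain with the preview component installed (`rustup component add rustc-codegen-cranelift-preview --toolchain nightly`); the feature is unstable and may change:

```toml
# Cargo.toml (nightly only) — this line must be at the very top:
cargo-features = ["codegen-backend"]

[profile.dev]
codegen-backend = "cranelift"
```

Dev and test builds then go through Cranelift, while `--release` keeps using LLVM.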
The low-hanging fruit is gone but there are lots of hard but likely-significant improvements left on the table.
Rust-analyzer is even more of a resource hog than rustc itself. Not sure how directly applicable this work might be, but hopefully we'll see big improvements there as well. It's something that's clearly needed for state-of-the-art IDE support.
That’s interesting/surprising. I remember this in the Eclipse days, and it was often attributed to Java’s allocation-heavy style, garbage collection, and lack of value types
Also Java style of tiny classes and tiny files.
It’s an issue on both the implementation side and the thing-being-implemented
I would have thought Rust would be better on both fronts.
How many lines are the Rust codebases and their dependencies?
To be fair, I don't care. RA is extremely valuable, and I am _not_ one of the people who think 16GB of RAM and 250GB of SSD is good for a programmer machine.
I really disagree here: I'm a maintainer of a medium-sized open source Rust project [1] and I'm always surprised by how fast Rust compiles locally. On a MacBook Pro, it's a matter of seconds, in debug. Release compilation and CI/CD are slower, but since the beginning of my Rust journey (2 years ago), Rust compilation has just seemed very fast.
To balance / explain my point:
- my day work is Java / Kotlin with Gradle. Now, we can talk about glacial compilation times
- on my open source Rust project, we try to minimise dependencies, don't use macros (apart from `#[derive(Debug, Clone)]` etc.), and have very moderate generics usage
If you take the time to `cargo build` my project, I'll be happy to have feedback on compilation times
- with `cargo tree`, I see that the project depends on ~600 crates
In my toy project:
- cloc shows that there are ~40,000 lines of Rust
- with `cargo tree`, I see ~40 crates
I don't know the scope of Grapl, but 600 (transitive) crates seems like a lot to me. Maybe that explains why this particular build is so long. I haven't managed to build it (it seems to have prerequisites on protobuf stuff).
Yes, it'll require a protoc installation to actually compile, as well as some native dependencies.
Naturally more crates means more time on compilation. Grapl is a pretty large project, lots of services that do different things, so it isn't too surprising that it has a lot of dependencies relative to what I assume is a more tightly scoped project.
For example, Grapl talks to multiple different databases, AWS services, speaks HTTP + JSON and gRPC (with protobuf), has a cli, etc etc.
As someone who has only done small projects in Rust, I'm curious how many LoC we're talking about? And were you splitting your project into crates where it made sense?
I wouldn't be surprised if Rust/Cargo does more disk IO than other build tools, though. Rust does a lot of compile time code gen and caches a lot of stuff on disk.
You're right that some slowdown is expected, but for me personally I hadn't realized how bad this particular FS was, nor had I expected how much it impacted build times
We split each service into crates, plus some libraries. There were some native dependencies as well, which could really impact compile times, as well as some codegen for things like protobuf.
Depends on the code and whether you are doing a release or debug build. I work on a very large Rust project (~1m LoC) with a lot of dependencies. We've split it into multiple crates and the compile times don't really frustrate my dev workflow (incremental compilation works well and debug builds are pretty fast anyway). But building in our CI pipeline, where we do a fully optimized build (single codegen unit, LTO enabled), takes a while (~30m), which is annoying when you are waiting for a hotfix to be ready. It's also incredibly resource intensive (mainly linking with LTO enabled), so we've been the bane of our platform team's existence, since we need something like 50GB of memory in our build container to do a full release build :)
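For reference, the kind of fully optimized CI profile described here is configured like this in Cargo.toml (the settings mirror the comment; exact values are a judgment call per project):

```toml
[profile.release]
lto = "fat"        # whole-program LTO: better code, much slower and
                   # memory-hungrier linking
codegen-units = 1  # one codegen unit: better optimization, but no
                   # backend parallelism within a crate
```

Dropping to `lto = "thin"` or raising `codegen-units` is the usual first lever when release builds get this expensive.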
It was over a year ago so it's a bit hard to recall... Maybe 20 minutes clean? 1 or 2 minutes for cached. Things have probably improved since then but idk. We did stuff like protobuf, we had a few proc-macros of our own, plenty of serde, and I think ~3 native dependencies (zstd, librdkafka, something else I don't recall). The native dependencies could be brutal, it caused long serial stalls, if I recall correctly. Linking took up a lot of time as well, but I forget why we didn't use mold, there was a reason at the time... but, again, over a year ago.
We did our builds in docker, for various reasons. So we relied on the docker buildx cache and some other tricks that I don't recall because I didn't work a ton on the build system.
I’m continuously amazed that this opinion is so prevalent. I maintain both a C++ and a Rust project, and incremental compile times on Rust are vastly better. It is, I believe, one of the fastest statically typed compiled languages. Go is faster, but I think that’s it.
async-stripe takes over two minutes to build due to codegen. We're considering switching to dolladollabills.
Our core API server takes a minute to build, and we have about a dozen services and command line apps, a bunch of little shared library crates, two desktop apps, and a Bevy app.
Our Github actions docker build takes ~10 minutes if you don't include the tests, but we're starting to shave off more time. (Our monorepo is 105589 Rust LOC total)
> async-stripe takes over two minutes to build due to codegen. We're considering switching to dolladollabills.
Ooohh interesting. We also use async-stripe, definitely going to have to check out dolladollabills though. Also in the Rust monorepo camp: our prod release takes ~5 mins from clean, tests are about 6 mins. We’ve invested a bit of effort in getting our build time down: we don’t build in a Docker container, we just copy the final artefact in, which wiped the most time off our builds. More parallel codegen units too.
I never understood the "gotta clean the cache every time" approach to CI/CD. I'm sure it makes sense sometimes, but you can make a compromise. I worked at two places where we only cleaned up our C++ caches on the build systems on the weekend. We did that "just in case" caching was hiding a problem, but we would only have a small one-week setback at most. We were not on heavy release cycles, though, so we could afford that. We never had a single problem traced back to the cache or the build hiding something. This was for internal company software. I'm not sure why people are willing to pay the cost of a full rebuild every time if that full rebuild takes a long time (Rust or C++). I'm sure there are cases for it, just that it doesn't have to be done everywhere.
It's not that you have to, it's that you have many different builds that are going to stomp on each other's caches, plus your build services are often ephemeral - especially since I was at a small startup where we wanted to shut systems down overnight to keep the money.
Stupid question: does the back-end have to wait for the front-end to do borrow checking? If so, why?
(I'm not suggesting that it's doing anything wrong. I'm just wondering if borrow checking establishes invariants that the back-end depends on for more than correctness, such that you couldn't do speculative back-end work that you would discard on a borrow checking error.)
Well, there is mrustc[0], a Rust compiler that doesn't include a borrow-checker, so it's possible to compile (at least some versions of) Rust without a borrow checker, though it might not result in the most optimized code.
AFAIK there are some optimization like the infamous `noalias` optimization (which took several tries to get turned on[1]) that uses information established during borrow checking.
I'm also not sure what the relation with NLL (non-lexical lifetimes) is, where I would assume you would need at least a primitive borrow-checker to establish some information that the backend might be interested in. Then again, mrustc compiles Rust versions that have NLL features without a borrow-checker, so it's again probably more on the optimization side than being essential.
The borrow checker determines the set of valid programs.
mrustc gets past not having a borrow checker by compiling invalid programs that would have otherwise had compiler errors. This, at best, results in runtime segfaults etc.
Just like you don't need to validate syntax if you just assume you're only ever fed valid syntax.
Nothing wrong with that, just want to clarify for people.
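To make the distinction concrete, a small sketch (compiles and runs as written; the variant described in the comment is what rustc rejects but mrustc would accept):

```rust
fn main() {
    let s = String::from("hi");
    let r = &s;      // shared borrow of `s`
    println!("{r}"); // last use of `r`: with NLL, the borrow ends here
    drop(s);         // fine, because `s` is no longer borrowed

    // If `drop(s)` were moved above the `println!`, rustc would reject
    // the program (cannot move out of `s` while it is borrowed);
    // mrustc, which has no borrow checker, would compile it anyway,
    // producing a use-after-free at runtime.
}
```

So the borrow checker only rejects programs; skipping it doesn't change what valid programs mean, it just stops catching invalid ones.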
Keep in mind, this is still an experiment and is restricted to nightly, so its not configured for general use.
I'd expect the stabilized default to be the number of cores. No idea where this effort is at, but at one point they were going to use the jobserver to coordinate across Cargo's rustc invocations, at which point Cargo's job count would be used, which defaults to the number of cores (and supports counting down from that with negative numbers).
Because it uses the jobserver protocol which Cargo initializes to the number of cores by default, I'd imagine you could set the new flag to some unreasonably high number (e.g. 10000) and it should limit usage to free cores.
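For reference, the nightly flag in question can be set via RUSTFLAGS or a config file; a sketch (nightly-only, unstable, and subject to change):

```toml
# .cargo/config.toml, on a nightly toolchain only: enable the
# parallel front end with 8 threads.
[build]
rustflags = ["-Z", "threads=8"]
```

The equivalent one-off invocation is `RUSTFLAGS="-Z threads=8" cargo +nightly build`.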
Hooray! I used Rust eons ago, when even toy examples were fairly slow to compile. After coming back recently, I've started to really love Rust and have been using it whenever possible without even thinking about compile times. But I do have one project that has grown a bit, and I've started getting déjà vu: when it takes 5+ seconds to compile a simple change, I start holding off on saving (which triggers my analyzer) until I've worked out a few other things, so my laptop doesn't turn into an aircraft engine.
Very exciting as this is the one pain point for me personally so any and all progress is much appreciated
Nice! One thing I have noticed is that, unlike the library crate ecosystem, my binary crates by default would be large and monolithic (I now divide them into multiple library crates). This means that toward the tail end of compilation, not only can compilation not be parallelized, but also the largest crates tend to be serialized, so this is a very welcome change!
I've been away from doing Rust semi-actively for a few years, and have been working in other environments like python and Typescript. Now I tried it for a project for a while again and the compilation speed is pretty much instant. It's always great when things get better, but things are pretty damned good already.
Also, these days it's possible to use cheat codes aka ChatGPT to flounder through almost all the difficult Rust problems that might have been show stoppers a few years ago. It's looking pretty great on that side of the fence.
My compilation times are otherwise great except when I make cross-arch Docker images in GitHub Actions. Then I'm seeing Docker image builds take 60 to 90 minutes. Has made me quite aware of just how many dependencies my projects have in total.
If you are using buildx+QEMU for compiling, you're much better off cross-compiling on the native architecture of GitHub Actions and then exporting the result to a build stage that emulates the target architecture:
I went from cross-compilation that took 2 hours to 10 minutes total.
You can follow my project's Dockerfile as an example of how to do it.
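The Dockerfile itself isn't reproduced in the thread; as a hedged sketch of the technique (the target triple, binary name `myapp`, and base images are placeholder assumptions), a buildx multi-stage build along these lines compiles on the host's native architecture and only emulates the final, non-compiling stage:

```dockerfile
# Build stage runs natively on the build host, cross-compiling for the target.
FROM --platform=$BUILDPLATFORM rust:1 AS builder
RUN rustup target add aarch64-unknown-linux-musl   # example target triple
WORKDIR /app
COPY . .
# (a real setup also needs a cross linker configured, e.g. in .cargo/config.toml)
RUN cargo build --release --target aarch64-unknown-linux-musl

# Only this stage runs under QEMU emulation, and it does no compiling.
FROM --platform=$TARGETPLATFORM alpine
COPY --from=builder /app/target/aarch64-unknown-linux-musl/release/myapp /usr/local/bin/myapp
CMD ["myapp"]
```

The speedup comes from the fact that rustc and LLVM never run under emulation; QEMU only executes the cheap final `COPY`.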
My project isn't large or even mid-sized, but it has over a hundred dependencies. Building the dependencies certainly takes some time on my raspberry pi 4, but after that initial hit, every change to the project builds a release in about 15 seconds, and a debug build in about 10.
And on my MacBook Air M2, where I actually develop, these things happen fast enough to call them instant. Perhaps I'm a bit spoiled there by the excellent hardware. As a comparison, a TypeScript project I'm working on using a more powerful MacBook always takes about 5-10 seconds to build.
I don't doubt that actually large Rust projects take a long time to build, though, but even these small and mid-sized were rather slow to build a few years ago.
To clarify, dependencies significantly affect incremental builds too. It seems that loading information about compiled dependencies into the compiler, and/or resolving things about them, can take significant time.
I know you said 4s is good but have you tried changing the linker?
The number of dependencies likely won't affect incremental build times except for linking, and replacing the linker might offer some good gains for incremental builds.
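As a sketch of what "changing the linker" can look like (the target triple here is an example; mold and lld are the usual candidates on Linux, and mold must be installed separately):

```toml
# .cargo/config.toml — route linking through mold via clang (Linux example)
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```

Since an incremental rebuild is often dominated by the final link, swapping in a faster linker can shrink exactly the step that the number of dependencies makes slow.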
Is there any way I can disable the parallel compiler option without rebuilding the compiler? I don't need to use it (I have codegen units set to 1 anyway), and it's possible it's causing an ICE that I don't want to debug. To be clear, I know it's set to 1 thread by default, but I want it all the way off.
Come on, could we get a minimal bootstrapping Rust compiler instead of 350MB on x86_64 Linux? Namely, a Rust-written compiler executable that generates simple ELF .o objects, and that's it. And bootstrapping: namely, a static PIE ELF executable.
Rust has the opportunity to be serious, not lost like gcc.
proc macros are loaded as dylibs, so dynamic linking is necessary.
if you want a bootstrap procedure, take a look at what Guix does; they go through mrustc.
I don't want all that kludge; just a simple machine-code generator, a Rust-written Rust compiler, on x86_64 Linux, as a static PIE ELF executable. How hard can it be?
I give it a Rust compilation unit and it outputs an ELF relocatable object.
What I find most impressive about Rust is the marketing.
It's not the language itself. It's not the safety and other attributes. And it certainly can't be the adoption (currently low % according to StackOverflow [0]).
It's how hyped Rust is. It's how effusive every blurb and sound bite seems to be. Glowing articles frequently make the front page of HN. Famous for penetration into Linux kernel development. What is this mechanism? Who is behind it? Is this veritable storm of hitting me on the head with Rust-this-Rust-that coordinated behind the scenes by some powerful entity? A hyperactive grassroots cheerleader squad? Does it infect C/C++ programmers who've dared to sample it once, turning them into noisy advocates, a la addictive drugs or parasitic fungi? Is Rust merely the It-Thing at the moment that people are mimetically/socially driven to latch onto?
We didn't see this with Lua, Ruby (that was mainly RoR anyways), Python, Swift, C#, certainly not newer-spec C and C++, or any of the others, even Java back in the day.
I don't know Rust, maybe it's deserving of the adulation. But I gotta say, the Rust marketing machine is one of the most superlative campaigns in IT I've ever seen.
Footnote: Some folk seem to be taking offense at my question where none is intended. I'm not for or against Rust, merely ambivalent & curious. I've seen many of these waves come through; Rust is definitely the current wave, and the biggest so far! That is something I wish to learn from.
It's simple - the praise is predicated not upon Rust being good, but how comparatively shit everything else in the same category is: C, C++, higher level languages for system programming (Go is extremely inadequate, and slow).
Rust gets a lot, a lot of small things right. The things you usually use as an excuse as to why one language or another is better - I found Rust does much more of them in a good way. In most other languages you can have let's say good package management but not fast iterator expressions, or you have compile-time iterator expressions but they are ass to write and package management does not exist, or you have both but all other features are missing, and etc.
Arguably, because Rust is also verbose and sometimes a bit ceremony-heavy, it's not a perfect language, which is why I use C# daily (which is similar and familiar enough with the tooling, package management and critical features like generics and async). But when I need lean and mean applications, there is simply no reason to pick anything but Rust except maybe out of curiosity.
Thanks neonsunset -- so there is this relativistic argument to be made in favor of Rust.
Previously, I'd been involved in writing up samples of a CLI tool in each of Golang, Ruby, Haskell, C++, Python, Java, Clojure. From there we would select one language ecosystem to marry ourselves to and move forward. Every single one left us wanting in terms of either team capability/emotional levels, language expressiveness, distribution, speed, tooling, package management, etc. And I learned here that Rust seems to have each of these down pat to satisfactory degrees.
Next time I'm faced with birthing yet another CLI tool, I thinks I'm gonna try Rust first.
With rust I can sleep at night. It's sooooo much easier to trust a compiler that is THAT pedantic and makes it its business to not let you do stupid things than something that can't even show if you will hit a nullptr.
> We didn't see this with Lua, Ruby (that was mainly RoR anyways), Python, Swift, C#, certainly not newer-spec C and C++, or any of the others, even Java back in the day.
I think this is an error of perspective. The hype for Python and Ruby in the mid-2000s was off the charts. And the corporate marketing blitz for Java in the 90s was so beyond extreme that it will never be replicated in the space of programming languages.
Rust has zero marketing budget. The vocal adulation is a result of a confluence of coincidental factors that will never be replicated: an open-source, volunteer-run project from perhaps the only company with the proper combination of funding, technical chops, and open-source cachet to pull it off; an industry landscape that has so long rejected some of the best ideas from academic languages (e.g. tagged unions and pattern matching) that any language who can successfully express them to a non-academic audience will be seen as visionary; a specific niche (safe systems programming) whose exemplar never took off in the realm of FOSS, and who has failed to appeal to most segments of industry for whatever reason, with a ripe potential audience eager for a modern champion; slow-moving competitors in the systems space who had become complacent from lack of competition, and who are prevented from effectively competing at the safety niche without breaking backwards compatibility; a relatively friendly production-ready compiler backend in LLVM that suddenly makes competing in the high-performance cross-platform systems space at all feasible; an audience of newly-minted web devs looking to dip their toes into the systems space without needing to offer the traditional pound of flesh; a focus on standardized tooling that makes onboarding easy and going back to other languages painful; and a totally fortuitous, somewhat accidental, fairly brilliant realization that safe systems programming could be possible in the first place, thanks to a novel combination of affine types and region-based memory management, that worked so well that it took even the creators by surprise. Rust is lightning in a bottle.
That wasn't just corporate marketing; Java was the first memory safe language to be widely used in enterprise code, and it led to a lot of C/C++ code being rewritten to address the issue of memory safety bugs. The same may happen with Rust, or perhaps C++ will add facilities for a memory safe subset of the language.
I want to divorce the marketing from the implementation for a second here; Java was widely used in the enterprise as a C++ replacement because of the marketing blitz, not really because of its technical prowess (with the benefit of hindsight, it's safe to say that Java's marketing vastly overpromised what Java could actually deliver for the first decade or so of the language's life). Note that PHP was the beneficiary of a similar (though smaller) enterprise marketing blitz, and PHP took far longer to get its act together than Java did; these companies were adopting based on what they read in magazines and saw on TV (yes, there were TV commercials for Java!).
> Does it infect C/C++ programmers who've dared to sample it once
Yes, that is a big part of it.
From my very subjective impression, having attended many Rust meetups with a constant influx of newcomers, I would say the two biggest groups that are really longing for Rust are:
- C (and sometimes C++) programmers that are looking for a breath of fresh air with modern tooling (e.g. package management) that isn't the result of decades of patchwork upon patchwork
- People who would like to work "close to the metal", but in the past were too tormented by C/C++/Go segfaults (/other memory issues) to approach the subject. (That's also the group I fall into.)
> We didn't see this with Lua, Ruby (that was mainly RoR anyways), Python, Swift, C#, certainly not newer-spec C and C++, or any of the others, even Java back in the day.
I'm pretty sure we saw a similar hype with Ruby. If you go back ~10 years in the HN archives, you will see about as many "... in Ruby" posts as you see today with Rust. All the other languages listed are too old (I would guess "too old" means predating widespread social media), or have something obvious that alienates a big chunk of developers (e.g. Swift and .NET languages being essentially single-OS languages).
> A hyperactive grassroots cheerleader squad?
If anything, the opposite. In the early days of Rust there existed the self-aware inside joke of the "Rust Evangelism Strike Force". Once people actually tried to meme too much with that (e.g. brigading subreddits), it was strongly rejected from inside the "community".
I'll echo your point about loving programming in Rust. I've programmed and continue to program in other languages (Java, PHP, Go), but nothing gives me the same joy as programming in Rust. I know it'll run the first time and likely work correctly as well. And what's more, it'll be faster than any code I could have written in another language.
Not everyone will feel this way, certainly. They, like GP, might come to the bizarre conclusion that no one could like Rust this much, and that therefore there is a shadowy cabal promoting Rust for unknown reasons. I don't think there's much we can say to convince them otherwise.
But one thing I do when I see folks complaining that they've never seen such promotion on HN: I click on their profile to check how old their account is. Invariably it's 2016 or later. Which means they never saw the cycles of Ruby, JS and Go promotion. This will come and go. In a few years we'll be complaining about, I don't know, the relentless promotion of Mojo.
> I click on their profile to check how old their account is. Invariably it's 2016 or later. Which means they never saw the cycles of Ruby, JS and Go promotion.
Provided that was the account they've been using since they started on HN ;)
Not that I really paid attention during those other waves, but weren't they justified? Sure, today you might have a bunch of alternatives you'd prefer, but it wouldn't surprise me if somebody made a new language 20 years from now based on Rust but without XYZ mistakes.
e.g. JS allowed webpages to do things without needing to reload the page every time. It basically lets you make mods for the browser! jQuery was really useful at the time and isn't now, because every browser implemented its methods.
e.g. Ruby (Rails): IIUC it made it super easy to do CRUD sites without spending as much time on the tedious plumbing/infrastructure stuff.
I do use rust professionally, and I also love it! I’m 2.5 years in, and I would be sad if I had to switch jobs and mostly use a different language.
Nothing is perfect, but Rust is definitely more pleasant to work with (for me at least) than C++, JS, TS, Python, Go, or anything else I’m likely to get a job using. I do think it’d be fun to work in a Clojure or Elixir shop, but I’m still hobby-level on those, so probably it’s just the perception of green grass.
People really want Rust to be popular because people really like writing Rust. There is no campaign, no background force. There's some intelligence to it - people know to post something like this on a Friday morning, and not a Thursday night, but that's it.
You're just seeing a project with genuine excitement behind it.
> Does it infect C/C++ programmers who've dared to sample it once, turning them into noisy advocates, a la addictive drugs or parasitic fungi?
It definitely did that to me. I remember trying out Rust and was amazed at how much abuse I'd put up with from C++ for all these years. Now I just want to try out Rust in a large enterprise project to see if it will just be replaced with a different kind of abuse...
I guess I'm not sure what the "marketing machine" is here. The Rust team published an article _on their own blog_ and someone posted here on HN, then people discuss it if they're interested.
I mean every language has some amount of "marketing" - people speaking at conferences, etc., but it's not like you're getting product advertising here in the form of commercials on TV or advertisements on the side of web pages. I think what you're seeing here is genuine interest.
Unless you're implying that HN itself has some sort of algorithm bias to push Rust posts to the top?
It seems you are not the type to run out and adopt the shiny new thing on day one. I commend you for this. You haven't bought a hydrogen car yet? That's ok.
You haven't tried a cordless drill? Well, alright. You don't have to. You can survive without one. But rather than ask others what the big deal is at this point, why don't you just try one the next time you have to screw a picture to the wall. Borrow one if you don't want to commit to owning. Figure out for yourself whether having a charger and having to swap batteries is worth it. Maybe you'll decide it isn't. Maybe you'll be disgusted that you have to buy a new drill every 10 years while your 40 year old corded model still works fine.
But don't be too shocked if everybody else makes the leap. Sometimes the shiny new fad really is the future.
And you can still keep your corded drill around for those times when it really is the better tool for the job.
People don't know what they want well enough to specify it. However, if you built something they like, they will recognise that even if they struggle to articulate why it's good.
I don't think that's part of marketing, unless you'd see for example a furniture company deciding to use a higher quality wood for their new tables as "marketing" because the tables will be nicer and customers will like that.
new systems languages are much more rare than new scripting languages, and the focus on usability of the surrounding toolchain makes it a particular darling for anyone who touches it.
The other arguments (memory safety et al.) are sort of on the side imo; I really enjoy writing, reading and running rust because the developer experience is just so solid.
When I say "developer experience" I mean the crates system and cargo, not necessarily the language itself (which I find a bit ugly to be frank).
> new systems languages are much more rare than new scripting languages
No, and they never have been. C and later C++ are (were) just _that_ prominent that most people never heard of the alternatives (except for Pascal and/or Ada, Objective-C and maybe D).
Nowadays there is Zig (most people have heard about that, I guess), Carbon, Cppfront, Odin, Jai, Vale, Austral (and some more I've forgotten about).
For your 7 systems languages I can name 150 new scripting languages.
Objective-C requires a runtime, so it's not a systems language in that sense. I think Pascal also requires a runtime, but that's not important right now.
D is a great example, I tried very hard to make D work and it just didn't.
I was even trying to use D's own mailing list frontend (written in D) and it was nearly impossible for an outsider.
Contrast that with Rust: aside from picking the right toolchain (nightly or stable), "cargo build" after following the quickstart on the internet will build practically all Rust projects, with the minor caveat that if there are any bindings to system libraries then you need those too (libssl-dev on Debian being a common requirement for openssl-sys, for example).
> For your 7 systems languages I can name 150 new scripting languages.
Please don't take that as an offence, but please name at least 5 or 10 (which aren't "just" compiled to JS, I've actually forgotten about them ;). I've only heard about Elixir and Verse (yes, because of SPJ) which actually are somewhat used "in the wild".
But my point is exactly that Rust _is_ really an outsider by being significantly better than the (many or not so many does not matter that much ;) alternatives to C++.
I can't name 150, but since C++ was released in the 80's there has been: JavaScript itself, Java, C#, PHP, Python, Ruby, Go, Swift, Kotlin, and many other very popular languages in the "higher level programming languages" category. And only really Rust has reached a comparable level of popularity in the "systems level language" category (with perhaps D and Zig in the next "tier").
A lot of those are even older than C++. And of your list, only Objective-C has seen adoption on the scale of Rust, and it both predates C++ and requires a much heavier runtime than C/C++/Rust.
For a very long time there has been no viable alternative in the strictly-no-GC space. Every new systems-adjacent language concluded "GC is fine for 99% of programs" (which is true), and then excluded themselves from the no-GC niche. Rust almost did the same thing — early design assumed per-thread GC.
I think a lot of people are stuck with C++ due to the fact that there are a lot of legacy C++ codebases (many decades old), moreso than legacy Python or Javascript codebases.
Rust does a ton of things better than C++ as other people here are mentioning. For example, at my 20-man C++ shop, we have around 2 people's worth of full-time cmake work, that is, just maintaining the build system. This work would largely go away if it was a Rust codebase.
Something I think helps it, is it has the allure of re-writing a project from scratch, thinking about your mistakes ahead of time, and lacking most of the technical debt.
This isn't to say it lacks any technical debt, but the language feels very transparent and thoughtful. Contrast that with, say, a language built by a big company (.NET, Go) or built on strong opinions (Python). Rust in comparison feels modest in its delivery but ambitious in its design.
It's terrible reasoning for picking a language to write a project in, but Rust feels trustworthy, I think.
> Is Rust merely the It-Thing at the moment that people are mimetically/socially driven to latch onto?
I think so. Sociological phenomenon initially bootstrapped by a small number of "influencers". Same with golang 10 years back. Same with frameworks (angular, react). Do you remember the hype around "XML revolution" around 2000? It was arguably bigger than rust's.
Somebody has to write a book on the history of software cults - it will make a fun read. :-)
I'm not sure this is an interesting direction. Isn't Rust compilation already highly parallel at the file level? Sure, if a single file compiles faster, that's nice. But won't this steal resources from the top-level file parallelism? I find it quite concerning that no numbers are given for that.
Today, Rust compilation is parallel in the front-end at the crate level, not file or even module level. Large crates will therefore not benefit from parallelism, and splitting a large crate into smaller ones comes with its own costs too.
This, however, doesn't apply to the code generation phase with LLVM which is already parallelizable at the codegen unit level (the number of codegen units is configurable), but that's the “back-end”, whereas this new parallelization applies to the “front-end” of the compiler.
> Today, Rust compilation is parallel in the front-end at the crate level, not file or even module level. Large crates will therefore not benefit from parallelism,
Sort of, it's managed at the crate level and not file or module indeed, but the compiler then splits crates into smaller chunks called “codegen units”.
> This flag controls the maximum number of code generation units the crate is split into. […] When a crate is split into multiple codegen units, LLVM is able to process them in parallel. […]
> The default value is 16 for non-incremental builds. For incremental builds the default is 256, which allows caching to be more granular.
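Since the codegen-unit count is configurable per profile, here is a hedged sketch of what tuning it looks like in a Cargo manifest (the values are illustrative; a single unit trades parallel codegen for better optimization):

```toml
# Cargo.toml — overriding codegen-units per profile (illustrative values)
[profile.release]
codegen-units = 1    # one unit: best cross-function optimization, no parallel LLVM work

[profile.dev]
codegen-units = 256  # many units: parallel LLVM work and finer-grained incremental caching
```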
> This, however, doesn't apply to the code generation phase with LLVM which is already parallelizable at the codegen unit level (the number of codegen units is configurable), but that's the “back-end”, whereas this new parallelization applies to the “front-end” of the compiler.
It’s not. Rust compilation is currently parallel at the _crate_ level (i.e. one crate is one translation unit). Speeding up compilation of large crates could lead to nice speedups.
> But won't this steal resources from the top-level file parallelism?
It won’t. rustc uses the jobserver protocol to coordinate parallelism with Cargo, so the total number of threads compiling the whole project doesn’t exceed the CPU count.
Anyone who has tried to compile Rust projects on a system with many cores while running htop could tell you that it spends a whole lot of time using only a few cores. Remember, Rust is not like C where the interface definitions (header files) exist on disk so all files can be compiled completely independently.
Glad to see this progress.