pron 9 hours ago

Yes!

To me, the uniqueness of Zig's comptime is a combination of two things:

1. comptime replaces many other features that would be specialised in other languages, whether or not those languages have rich compile-time (or runtime) metaprogramming (a small sketch follows this list), and

2. comptime is referentially transparent [1], which makes it strictly "weaker" than AST macros but simpler to understand; what's surprising is just how much you can do with a comptime mechanism that has access to introspection yet lacks the referentially opaque power of macros.
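
To make point 1 concrete, here is a minimal sketch of how generics fall out of comptime: a type is just a value you can pass to (and return from) a function evaluated at compile time.

    const std = @import("std");

    // A "generic" pair is just a function from a type to a type.
    fn Pair(comptime T: type) type {
        return struct { first: T, second: T };
    }

    pub fn main() void {
        const p: Pair(u32) = .{ .first = 1, .second = 2 };
        std.debug.print("{d} {d}\n", .{ p.first, p.second });
    }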

These two give Zig a unique combination of simplicity and power. We're used to seeing things like that in Scheme and other Lisps, but the approach in Zig is very different. The outcome isn't as general as in Lisp, but it's powerful enough while keeping code easier to understand.

You can like it or not, but it is very interesting and very novel (the novelty isn't in the feature itself, but in the place it has in the language). Languages with a novel design and approach that you can learn in a couple of days are quite rare.

[1]: In short, this means that you get no access to names or expressions, only the values they yield.

  • paldepind2 8 hours ago

    I was a bit confused by the remark that comptime is referentially transparent. I'm familiar with the term as it's used in functional programming to mean that an expression can be replaced by its value (stemming from it having no side-effects). However, from a quick search I found an old related comment by you [1] that clarified this for me.

    If I understand correctly, you're using the term in a different (perhaps more correct/original?) sense where it roughly means that two expressions with the same meaning/denotation can be substituted for each other without changing the meaning/denotation of the surrounding program. This property is broken by macros. A macro in Rust, for instance, can distinguish between `1 + 1` and `2`. The comptime system in Zig, in contrast, does not break this property, as it only allows one to inspect values and not un-evaluated ASTs.
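
    A tiny Zig sketch of that (illustrative, not from the article): a comptime function only ever receives the value of its argument, so it cannot tell `1 + 1` apart from `2`.

        const std = @import("std");

        // The function sees only the value of n, never the expression that produced it.
        fn Buf(comptime n: usize) type {
            return [n]u8;
        }

        test "1 + 1 and 2 are indistinguishable" {
            // Both calls yield exactly the same type.
            try std.testing.expect(Buf(2) == Buf(1 + 1));
        }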

    [1]: https://news.ycombinator.com/item?id=36154447

    • pron 5 hours ago

      Yes, I am using the term more correctly (or at least more generally), although the way it's used in functional programming is a special case. A referentially transparent term is one whose sub-terms can be replaced by their references without changing the reference of the term as a whole. A functional programming language is simply one where all references are values or "objects" in the programming language itself.

      The expression `i++` in C is not a value in C (although it is a "value" in some semantic descriptions of C), yet a C expression that contains `i++`, and that cannot distinguish between `i++` and any other C operation that increments i by 1, is referentially transparent; that describes pretty much all C expressions except for those involving C macros.

      Macros are not referentially transparent because they can distinguish between, say, a variable whose name is `foo` and is equal to 3 and a variable whose name is `bar` and is equal to 3. In other words, their outcome may differ not just by what is being referenced (3) but also by how it's referenced (`foo` or `bar`), hence they're referentially opaque.

    • deredede 6 hours ago

      Those are equivalent, I think. If you can replace an expression by its value, any two expressions with the same value are indistinguishable (and conversely a value is an expression which is its own value).

  • WalterBright 4 hours ago

    It's not novel. D pioneered compile time function execution (CTFE) back around 2007. The idea has since been adopted in many other languages, like C++.

    One thing it is used for is generating string literals, which then can be fed to the compiler. This takes the place of macros.

    CTFE is one of D's most popular and loved features.

    • msteffen an hour ago

      If I understand TFA correctly, the author claims that D’s approach is actually different: https://matklad.github.io/2025/04/19/things-zig-comptime-won...

      “In contrast, there’s absolutely no facility for dynamic source code generation in Zig. You just can’t do that, the feature isn’t! [sic]

      Zig has a completely different feature, partial evaluation/specialization, which, none the less, is enough to cover most of use-cases for dynamic code generation.”

    • az09mugen an hour ago

      A little bit out of context, I just want to thank you and all the contributors for the D programming language.

  • cannabis_sam 5 hours ago

    Regarding 2. How are comptime values restricted to total computations? Is it just by the fact that the compiler actually finished, or are there any restrictions on comptime evaluations?

    • pron 5 hours ago

      They don't need to be restricted to total computation to be referentially transparent. Non-termination is also a reference.

  • keybored 4 hours ago

    I’ve never managed to understand your year-long[1] manic praise over this feature. Given that you’re a language implementer.

    It’s very cool to be able to just say “Y is just X”. You know in a museum. Or at a distance. Not necessarily as something you have to work with daily. Because I would rather take something ranging from Java’s interface to Haskell’s typeclasses since once implemented, they’ll just work. With comptime types, according to what I’ve read, you’ll have to bring your T to the comptime and find out right then and there if it will work. Without enough foresight it might not.

    That’s not something I want. I just want generics or parametric polymorphism or whatever it is to work once it compiles. If there’s a <T> I want to slot in T without any surprises. And whether Y is just X is a very distant priority at that point. Another distant priority is if generics and whatever else is all just X undernea... I mean just let me use the language declaratively.

    I felt like I was on the idealistic end of the spectrum when I saw you criticizing other languages that are not installed on 3 billion devices as too academic.[2] Now I’m not so sure?

    [1] https://news.ycombinator.com/item?id=24292760

    [2] But does Scala technically count since it’s on the JVM though?

    • ww520 2 hours ago

      I'm sorry but I don't understand what you're complaining about regarding comptime. All the stuff you said you wanted to work (generics, parametric polymorphism, slotting in <T>, etc.) just works with comptime. People praise comptime because it's a simple mechanism that replaces what would require many separate features in other languages. Comptime is very simple and natural to use. It just fits into your day-to-day programming without much fuss.

  • User23 8 hours ago

    Has anyone grafted Zig style macros into Common Lisp?

    • Conscat 7 hours ago

      The Scopes language might be similar to what you're asking about. Its notion of "spices", which complement the "sugars" feature, is a similar kind of constant evaluation. It's not a Common Lisp dialect, though, but it is sexp-based.

    • toxik 8 hours ago

      Isn’t this kind of thing sort of the default thing in Lisp? Code is data so you can transform it.

      • TinkersW 2 hours ago

        Lisp is so powerful, but without static types you can't even do basic stuff like overloading, and you have to invent a way to even check the type (for custom types) so you can branch on type.

        • dokyun an hour ago

          > Lisp is so powerful, but <tired old shit from someone who's never used Lisp>.

          You use defmethod for overloading. Types check themselves.

      • fn-mote 6 hours ago

        There are no limitations on the transformations in lisp. That can make macros very hard to understand. And hard for later program transformers to deal with.

        The innovation in Zig is the restrictions that limit the power of macros.

    • Zambyte 8 hours ago

      There isn't really as clear of a distinction between "runtime" and "compile time" in Lisp. The comptime keyword is essentially just the opposite of quote in Lisp. Instead of using comptime to say what should be evaluated early, you use quote to say what should be evaluated later. Adding comptime to Lisp would be weird (though obviously not impossible, because it's Lisp), because that is essentially the default for expressions.
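
      For contrast, here is roughly what "say what should be evaluated early" looks like on the Zig side (a minimal sketch):

          const std = @import("std");

          fn fib(n: u64) u64 {
              return if (n < 2) n else fib(n - 1) + fib(n - 2);
          }

          pub fn main() void {
              // Forced to evaluate during compilation; the binary just holds 55.
              const x = comptime fib(10);
              std.debug.print("{}\n", .{x});
          }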

      • Conscat 7 hours ago

        The truth of this varies between Lisp based languages.

hiccuphippo 10 hours ago

The quote in Spanish about a Norse god is from a story by Jorge Luis Borges, here's an English translation: https://biblioklept.org/2019/04/02/the-disk-a-very-short-sto...

  • Validark 2 hours ago

    The story is indeed very short, but hits hard. Odin reveals himself and his mystical disc, which he states makes him king as long as he holds it. The Christian hermit (by circumstance) who had previously received him told him he didn't worship Him, that he worshiped Christ instead, and then murdered Odin for the disc in the hopes he could sell it for a bunch of money. He dumped Odin's body in the river and never found the disc. The man hates Odin to this day for not just handing over the disc to him.

    I wonder if there's some message in here. As a modern American reader, if I believed the story was contemporary, I'd think it's making a point about Christianity substituting honor for destructive greed. That a descendant of the wolves of Odin would worship a Hebrew instead and kill him for a bit of money is quite sad, but I don't think it an inaccurate characterization. There's also the element of resentment towards Odin for not just handing over monetary blessings. That's sad to me as well. Part of me hopes that one day Odin isn't held in such contempt.

  • kruuuder 7 hours ago

    If you have read the story and, like me, are still wondering which part of the story is the quote at the top of the post:

    "It's Odin's Disc. It has only one side. Nothing else on Earth has only one side."

    • tines 4 hours ago

      A Möbius strip does!

  • _emacsomancer_ 8 hours ago

    And in Spanish here: https://www.poeticous.com/borges/el-disco?locale=es

    (Not having much Spanish, I at first thought "Odin's disco(teque)" and then "no, that doesn't make sense about sides", but then, surely primed by English "disco", thought "it must mean Odin's record/lp/album".)

    • wiml 8 hours ago

      Odin's records have no B-sides, because everything Odin writes is fire!

      • tialaramex 5 hours ago

        Back when things really had A and B sides, it was moderately common for big artists to release a "Double A" in which both titles were heavily promoted; e.g. Nirvana's "All Apologies" and "Rape Me" are a double A, and the Beatles' "Penny Lane" and "Strawberry Fields Forever" likewise.

ephaeton 8 hours ago

zig's comptime has some (objectively: debatable? subjectively: definite) shortcomings that the zig community then overcomes with zig build to generate code-as-strings to be later on @imported and compiled.

Practically, "zig build"-time-eval. As such there's another 'comptime' stage with more freedom, unlimited run-time (no @setEvalBranchQuota), can do IO (DB schema, network lookups, etc.) but you lose the freedom to generate zig types as values in the current compilation; instead of that you of course have the freedom to reduce->project from target compiled semantic back to input syntax down to string to enter your future compilation context again.

Back in the day, when I had to glue Perl and Tcl together via C, passing strings of Perl generated through Tcl is what this whole thing reminds me of. Sure, it works. I'm not happy about it. There's _another_ "macro" stage that you can't even see in your code (it's just @import).

The zig community bewilders me at times with their love for lashing themselves. The discussions about which new sort of self-harm they'd love to enforce on everybody are borderline disturbing.

  • bsder 7 hours ago

    > The zig community bewilders me at times with their love for lashing themselves. The discussions about which new sort of self-harm they'd love to enforce on everybody are borderline disturbing.

    Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

    That should be 100% the job of a build system.

    Now, you can certainly argue that generating a text file may or may not be the best way to reify the result back into the compiler. However, what the compiler gets and generates should be completely deterministic.

    • ephaeton 6 hours ago

      > Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

      What is "itself" here, please? Access a static 'external' source? Access a dynamically generated 'external' source? If that file is generated in the build system / build process as derived information, would you put it under version control? If not, are you as nuts as I am?

      Some processes require sharp tools, and you can't always be afraid to handle one. If all you have is a blunt tool, well, you know how the saying goes for C++.

      > However, what the compiler gets and generates should be completely deterministic.

      The zig community treats 'zig build' as "the compile step", ergo what "the compiler" gets ultimately is decided "at compile, er, zig build time". What the compiler gets, i.e., what zig build generates within the same user-facing process, is not deterministic.

      Why would it be? Generating an interface is something that you want to be part of a streamlined process. Appeasing C interfaces will be moving to a zig build-time multi-step process involving zig's 'translate-c', whose output you then import into your zig file. You think anybody is going to treat that output differently from what you'd get from doing this invisibly at comptime (which, btw, is what practically happens now)?

      • bsder 2 hours ago

        > The zig community treats 'zig build' as "the compile step", ergo what "the compiler" gets ultimately is decided "at compile, er, zig build time". What the compiler gets, i.e., what zig build generates within the same user-facing process, is not deterministic.

        I know of no build system that is completely deterministic unless you go through the process of very explicitly pinning things. Whereas practically every compiler is deterministic (gcc, for example, would rebuild itself 3 times and compare the last two to make sure they were byte identical). Perhaps there needs to be "zigmeson" (work out and generate dependencies) and "zigninja" (just call compiler on static resources) to set things apart, but it doesn't change the fact that "zig build" dispatches to a "build system" and "zig"/"zig cc" dispatches to a "compiler".

        > Appeasing C interfaces will be moving to a zig build-time multi-step process involving zig's 'translate-c', whose output you then import into your zig file. You think anybody is going to treat that output differently from what you'd get from doing this invisibly at comptime (which, btw, is what practically happens now)?

        That's a completely different issue, but it illustrates the problem perfectly.

        The problem is that @cImport() can be called from two different modules on the same file. What if there are three? What if they need different versions? What happens when a previous @cImport modifies how that file translates? How do you do link-time optimization on that?

        This is exactly why your compiler needs to run on static resources that have already been resolved. I'm fine with my build system calling a SAT solver to work out a Gordian Knot of dependencies. I am not fine with my compiler needing to do that resolution.

    • panzi 4 hours ago

      > Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

      Yeah, although so can build.rs or whatever you call in your Makefile. If something like cargo would have built-in sandboxing, that would be interesting.

    • bmacho 6 hours ago

      They are not advocating for IO in the compiler, but for everything else that other languages can do with macros: run commands at comptime, generate code, read code, modify code. It's proven to be very useful.

      • bsder 3 hours ago

        I'm going to make you defend the statement that they are "useful". I would counter that macros are "powerful".

        However, "macros" are a disaster to debug in every language that they appear. "comptime" sidesteps that because you can generally force it to run at runtime where your normal debugging mechanisms work just fine (returning a type being an exception).

        "Macros" generally impose extremely large cognitive overhead and making them hygienic has spawned the careers of countless CS professors. In addition, macros often impose significant compiler overhead (how many crates do Rust's proc-macros pull in?).

        It is not at all clear that the full power of general macros is worth the downstream grief that they cause (I also hold this position for a lot of compiler optimizations, but that's a rant for a different day).

    • forrestthewoods 3 hours ago

      > Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

      It’s not the compiler per se.

      Let’s say you want a build system that is capable of generating code. Ok we can all agree that’s super common and not crazy.

      Wouldn’t it be great if the code that generated Zig code were also written in Zig? Why should codegen code be written in some completely unrelated language? Why should developers have to learn a brand new language to do compile-time codegen? Why yes, Rust macros, I’m staring angrily at you!

  • User23 8 hours ago

    Learning XS (maybe with Swig?) was a great way to actually understand Perl.

pyrolistical 10 hours ago

What makes comptime really interesting is how fluid it is as you work.

At some point you realize you need type information, so you just add it to your func params.

That bubbles all the way up and you are done. Or you realize that in certain situations it is not possible to provide the type, and you need to solve an arch/design issue.

  • Zambyte 8 hours ago

    If the type that you're passing as an argument is the type of another argument, you can keep the API simpler by just using @TypeOf(arg) internally in the function instead.
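
    A quick sketch of the two styles (illustrative only), with the second argument's type and the return type derived from the first argument:

        const std = @import("std");

        // Explicit comptime type parameter:
        fn maxExplicit(comptime T: type, a: T, b: T) T {
            return if (a > b) a else b;
        }

        // Same thing, but the type is inferred from the first argument:
        fn maxInferred(a: anytype, b: @TypeOf(a)) @TypeOf(a) {
            return if (a > b) a else b;
        }

        pub fn main() void {
            std.debug.print("{} {}\n", .{ maxExplicit(u32, 2, 3), maxInferred(@as(u32, 2), 3) });
        }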

karmakaze 10 hours ago

> Zig’s comptime feature is most famous for what it can do: generics!, conditional compilation!, subtyping!, serialization!, ORM! That’s fascinating, but, to be fair, there’s a bunch of languages with quite powerful compile time evaluation capabilities that can do equivalent things.

I'm curious: what are these other languages that can do these things? I read HN regularly but don't recall them. Or maybe that's including things like Java's annotation processing, which is so clunky that I wouldn't classify it as equivalent.

  • foobazgt 9 hours ago

    Yeah, I'm not a big fan of annotation processing either. It's simultaneously heavyweight and unwieldy, and yet doesn't do enough. You get all the annoyance of working with a full-blown AST, and none of the power that comes with being able to manipulate an AST.

    Annotations themselves are pretty great, and AFAIK, they are most widely used with reflection or bytecode rewriting instead. I get that the maintainers dislike macro-like capabilities, but the reality is that many of the nice libraries/facilities Java has (e.g. transparent spans), just aren't possible without AST-like modifications. So, the maintainers don't provide 1st class support for rewriting, and they hold their noses as popular libraries do it.

    Closely related, I'm pretty excited to muck with the new class file API that just went GA in 24 (https://openjdk.org/jeps/484). I don't have experience with it yet, but I have high hopes.

    • pron 5 hours ago

      Java's annotation processing is intentionally limited so that compiling with them cannot change the semantics of the Java language as defined by the Java Language Specification (JLS).

      Note that more intrusive changes -- including not only bytecode-rewriting agents, but also the use of those AST-modifying "libraries" (really, languages) -- require command-line flags that tell you that the semantics of code may be impacted by some other code that is identified in those flags. This is part of "integrity by default": https://openjdk.org/jeps/8305968

      • foobazgt an hour ago

        Just because something mucks with a program's AST doesn't mean that it's introducing a new "language". You wouldn't call using reflection, "creating a new language", either, and many of these libraries can be implemented either way. (Usually a choice between adding an additional build step, runtime overhead, and ease of implementation). It just really depends upon the details of the transform.

        The integrity by default JEPs are really about trying to reduce developers depending upon JDK/JRE implementation details, for example, sun.misc.Unsafe. From the JEP:

        "In short: The use of JDK-internal APIs caused serious migration issues, there was no practical mechanism that enabled robust security in the current landscape, and new requirements could not be met. Despite the value that the unsafe APIs offer to libraries, frameworks, and tools, the ongoing lack of integrity is untenable. Strong encapsulation and the restriction of the unsafe APIs — by default — are the solution."

        If you're dependent on something like ClassFileTransformer, -javaagent, or setAccessible, you'll just set a command-line flag. If you're not, it's because you're already doing this through other means like a custom ClassLoader or a build step.

  • awestroke 10 hours ago

    Rust, D, Nim, Crystal, Julia

    • elcritch 8 hours ago

      Definitely, you can do most of those things in Nim without macros using templates and compile time stuff. It’s preferable to macros when possible. Julia has fantastic compile time abilities as well.

      It’s beautiful to implement an incredibly fast serde in like 10 lines without requiring other devs to annotate their packages.

      I wouldn’t include Rust on that list if we’re speaking of compile time and compile time type abilities.

      Last time I tried it, Rust’s const expression system was pretty limited. Rust’s macro system likewise is also very weak.

      Primarily you can only get type info by directly passing the type definition to a macro, which is how derive and all work.

      • tialaramex 5 hours ago

        Rust has two macro systems; the proc macros are allowed to do absolutely whatever they please because they're actually executing in the compiler.

        Now, should they do anything they please? Definitely not, but they can. That's why there's a (serious) macro which runs your Python code, and a (joke, in the sense that you should never use it, not that it wouldn't work) macro which replaces your running compiler with a different one so that code which is otherwise invalid will compile anyway...

      • int_19h 5 hours ago

        > Rust’s macro system likewise is also very weak.

        How so? Rust procedural macros operate on token stream level while being able to tap into the parser, so I struggle to think of what they can't do, aside from limitations on the syntax of the macro.

        • forrestthewoods 3 hours ago

          Rust macros are a mutant foreign language.

          A much much better system would be one that lets you write vanilla Rust code to manipulate either the token stream or the parsed AST.

          • dwattttt 12 minutes ago

            ...? Proc macros _are_ vanilla Rust code written to manipulate a token stream.

        • Nullabillity 4 hours ago

          Rust macros don't really understand the types involved.

          If you have a derive macro for

              #[derive(MyTrait)]
              struct Foo {
                  bar: Bar,
                  baz: Baz,
              }
          
          then your macro can see that it references Bar and Baz, but it can't know anything about how those types are defined. Usually, the way to get around it is to define some trait on both Bar and Baz, which your Foo struct depends on, but that still only gives you access to that information at runtime, not when evaluating your macro.

          Another case would be something like

              #[my_macro]
              fn do_stuff() -> Bar {
                  let x = foo();
                  x.bar()
              }
          
          Your macro would be able to see that you call the functions foo() and Something::bar(), but it wouldn't have the context to know the type of x.

          And even if you did have the context to be able to see the scope, you probably still aren't going to reimplement rustc's type inference rules just for your one macro.

          Scala (for example) is different: any AST node is tagged with its corresponding type that you can just ask for, along with any context to expand on that (what fields does it have? does it implement this supertype? are there any relevant implicit conversions in scope?). There are both up- and downsides to that (personally, I do quite like the locality that Rust macros enforce, for example), but Rust macros are unquestionably weaker.

    • rurban 7 hours ago

      Perl BEGIN blocks

      • tmtvl 4 hours ago

        PPR + keyword::declare (shame that Damian didn't actually call it keyword::keyword).

  • ephaeton 8 hours ago

    well, the lisp family of languages surely can do all of that, and more. Check out, for example, clojure's version of zig's dropped 'async'. It's a macro.

no_wizard 10 hours ago

I like the Zig language and tooling. I do wish there were a safety mode that gives the same guarantees as Rust, but it’s a huge step above C/C++. I am also extremely impressed with the Zig compiler.

Perhaps safety is the tradeoff for the comparative ease of using the language versus Rust, but I’d love the best of both worlds if it were possible.

  • ksec 9 hours ago

    >but I’d love the best of both worlds if it were possible

    I am just going to quote what pcwalton said the other day, which perhaps answers your question.

    >> I’d be much more excited about that promise [memory safety in Rust] if the compiler provided that safety, rather than asking the programmer to do an extraordinary amount of extra work to conform to syntactically enforced safety rules. Put the complexity in the compiler, dudes.

    > That exists; it's called garbage collection.

    >If you don't want the performance characteristics of garbage collection, something has to give. Either you sacrifice memory safety or you accept a more restrictive paradigm than GC'd languages give you. For some reason, programming language enthusiasts think that if you think really hard, every issue has some solution out there without any drawbacks at all just waiting to be found. But in fact, creating a system that has zero runtime overhead and unlimited aliasing with a mutable heap is as impossible as finding two even numbers whose sum is odd.

    [1] https://news.ycombinator.com/item?id=43726315

    • the__alchemist 8 hours ago

      Maybe this is a bad place to ask, but: Those experienced in manual-memory langs: What in particular do you find cumbersome about the borrow system? I've hit some annoyances like when splitting up struct fields into params where more than one is mutable, but that's the only friction point that comes to mind.

      I ask because I am obviously blind to other cases - that's what I'm curious about! I generally find the &s to be a net help even without mem safety ... They make it easier to reason about structure, and about when things mutate.

      • dzaima 3 hours ago

        I imagine a large part is just how one is used to doing stuff. Not being forced to be explicit about mutability and lifetimes allows a bunch of neat stuff that does not translate well to Rust, even if the desired thing in question might not be hard to do in another way. (but that other way might involve more copies / indirections, which users of manual-memory langs would (perhaps rightfully, perhaps pointlessly) desire to avoid if possible, but Rust users might just be comfortable with)

        This separation is also why it is basically impossible to make apples-to-apples comparisons between languages.

        Messy things I've hit (from ~5KLoC of Rust; I'm a Rust beginner, I primarily do C) are: cyclical references; a large structure that needs efficient single-threaded mutation while referenced from multiple places (i.e. must use some form of cell) at first, but needs to be sharable multithreaded after all the mutating is done; self-referential structures are roughly impossible to move around (namely, an object holding &-s to objects allocated by a bump allocator, movable around as a pair, but that's not a thing (without libraries that I couldn't figure out at least)); and refactoring mutability/lifetimes is also rather messy.

      • sgeisenh 7 hours ago

        Lifetime annotations can be burdensome when trying to avoid extraneous copies and they feel contagious (when you add a lifetime annotation to a frequently used type, it bubbles out to anything that uses that type unless you're willing to use unsafe to extend lifetimes). The solutions to this problem (tracking indices instead of references) lose a lot of benefits that the borrow checker provides.

        The aliasing rules in Rust are also pretty strict. There are plenty of single-threaded programs where I want to be able to occasionally read a piece of information through an immutable reference, but that information can be modified by a different piece of code. This usually indicates a design issue in your program but sometimes you just want to throw together some code to solve an immediate problem. The extra friction from the borrow checker makes it less attractive to use Rust for these kinds of programs.

        • bogdanoff_2 6 hours ago

          >There are plenty of single-threaded programs where I want to be able to occasionally read a piece of information through an immutable reference, but that information can be modified by a different piece of code.

          You could do that using Cell or RefCell. I agree that it makes it more cumbersome.

      • rc00 8 hours ago

        > What in particular do you find cumbersome about the borrow system?

        The refusal to accept code that the developer knows is correct, simply because it does not fit how the borrow checker wants to see it implemented. That kind of heavy-handed and opinionated supervision is overhead to productivity. (In recent times, others have taken to saying that Rust is less "fun.")

        When the purpose of writing code is to solve a problem and not engage in some pedantic or academic exercise, there are much better tools for the job. There are also times when memory safety is not a paramount concern. That makes the overhead of Rust not only unnecessary but also unwelcome.

        • charlotte-fyi 3 hours ago

          Isn't the persistent failure of developers to "know" that their code is correct the entire point? Unless you have mechanical proof, in the aggregate and working on any project of non-trivial size "knowing" is really just "assuming." This isn't academic or pedantic, it's a basic epistemological claim with regard to what writing software actually looks like in practice. You, in fact, do not know, and your insistence that you do is precisely the reason that you are at greater risk of creating memory safety vulnerabilities.

        • the__alchemist 8 hours ago

          Thank you for the answer! Do you have an example? I'm having a fish-doesn't-know-water problem.

          • int_19h 5 hours ago

            Basically anything that involves objects mutually referencing each other.

            • the__alchemist 5 hours ago

              Oh, that does sound tough in rust! I'm not even sure how to approach it; good to know it's a useful pattern in other langs.

              • int_19h 3 hours ago

                Well, one can always write unsafe Rust.

                Although the more usual pattern here is to ditch pointers and instead have a giant array of objects referring to each other via indices into said array. But this is effectively working around the borrow checker - those indices are semantically unchecked references, and although out-of-bounds checks will prevent memory corruption, it is possible to store an index to some object only for that object to be replaced with something else entirely later.

                • estebank 2 hours ago

                  > it is possible to store an index to some object only for that object to be replaced with something else entirely later.

                  That's what generational arenas are for, at the cost of having to check for index validity on every access. But that cost is only in comparison to "keep a pointer in a field" with no additional logic, which is bug-prone.

        • Ygg2 7 hours ago

          > The refusal to accept code that the developer knows is correct,

          How do you know it is correct? Did you prove it with pre-conditions, invariants, and post-conditions? Or did you assume, based on prior experience?

          • edflsafoiewq 7 hours ago

            One example is a function call that doesn't compile, but will if you inline the function body. Compilation is prevented only by the insufficient expressiveness of the function signature.

          • yohannesk 7 hours ago

            Writing correct code did not start with the introduction of the Rust programming language.

            • Ygg2 6 hours ago

              Nope, but claims of knowing how to write correct code (especially C code) without a borrow checker sure did spike with its introduction. Hence, my question.

              How do you know you haven't been writing unsafe code for years, when C's unsafe guidelines have like 200 entries [1]?

              [1]https://www.dii.uchile.cl/~daespino/files/Iso_C_1999_definit... (Annex J.2 page 490)

              • int_19h 5 hours ago

                It's not difficult to write a provably correct implementation of a doubly linked list in C, but it is very painful to do in Rust because the borrow checker really hates this kind of mutually referential object.

      • Starlevel004 7 hours ago

        Lifetimes add an impending sense of doom to writing any sort of deeply nested code. You get this deep without writing a lifetime... uh oh, this struct needs a reference, and now you need to add a generic parameter to everything everywhere you've ever written and it feels miserable. Doubly so when you've accidentally omitted a lifetime generic somewhere and it compiles now but then you do some refactoring and it won't work anymore and you need to go back and re-add the generic parameter everywhere.

        • pornel 6 hours ago

          There is a stark contrast in usability of self-contained/owning types vs types that are temporary views bound by a lifetime of the place they are borrowing from. But this is an inherent problem for all non-GC languages that allow saving pointers to data on the stack (Rust doesn't need lifetimes for by-reference heap types). In languages without lifetimes you just don't get any compiler help in finding places that may be affected by dangling pointers.

          This is similar to creating a broadly-used data structure and realizing that some field has to be optional. Option<T> will require you to change everything touching it, and virally spread through all the code that wanted to use that field unconditionally. However, that's not the fault of the Option syntax, it's the fault of semantics of optionality. In languages that don't make this "miserable" at compile time, this problem manifests with a whack-a-mole of NullPointerExceptions at run time.

          With experience, I don't get this "oh no, now there's a lifetime popping up everywhere" surprise in Rust any more. Whether something is going to be a temporary view or permanent storage can be known ahead of time, and if it can be both, it can be designed with Cow-like types.

          I also got a sense for when using a temporary loan is a premature optimization. All data has to be stored somewhere (you can't have a reference to data that hasn't been stored). Designs that try to be ultra-efficient by allowing only temporary references often force data to be stored in a temporary location first, and then borrowed, which doesn't avoid any allocations, only adds dependencies on external storage. Instead, the design can support moving or collecting data into owned (non-temporary) storage directly. It can then keep it for an arbitrary lifetime without lifetime annotations, and hand out temporary references to it whenever needed. The run-time cost can be the same, but the semantics are much easier to work with.

        • the__alchemist 6 hours ago

          I guess the dodge on this one is not using refs in structs. This opens you up to index errors, though, because it presumably means indexing arrays etc. Is this the tradeoff? (I write loads of Rust in a variety of domains, and rarely need a manual lifetime.)

          • quotemstr 4 hours ago

            And those index values are just pointers by another name!

            • estebank 2 hours ago

              It's not "just pointers", because they can have additional semantics and assurances beyond "give me the bits at this address". The index value can be tied to a specific container (using new types for indexing so tha you can't make the mistake of getting value 1 from container A when it represents an index from container B), can prevent use after free (by embedding data about the value's "generation" in the key), and makes the index resistant to relocation of the values (because of the additional level of indirection of the index to the value's location).

    • skybrian 9 hours ago

      Yes, but I’m not hoping for that. I’m hoping for something like a scripting language with simpler lifetime annotations. Is Rust going to be the last popular language to be invented that explores that space? I hope not.

      • hyperbrainer 8 hours ago

        I was quite impressed with Austral[0], which uses linear types and avoids the whole Rust-like implementation in favour of a more easily understandable system, albeit a slightly more verbose one.

        [0]https://borretti.me/article/introducing-austral

        • renox 5 hours ago

          Austral's concepts are interesting, but the introduction doesn't show how to handle errors correctly in this language.

      • Philpax 8 hours ago

        You may be interested in https://dada-lang.org/, which is not ready for public consumption, but is a language by one of Rust's designers that aims to be higher-level while still keeping much of the goodness from Rust.

        • skybrian 8 hours ago

          The first and last blog post was in 2021. Looks like it’s still active on Github, though?

      • Ygg2 7 hours ago

        > Is Rust going to be the last popular language to be invented that explores that space? I hope not.

        Seeing how most people hate the lifetime annotations, yes. For the foreseeable future.

        People want unlimited freedom. Unlimited freedom rhymes with unlimited footguns.

        • xmorse 6 hours ago

          There are Mojo and Vale (the latter created by someone who is now a Mojo core contributor).

    • no_wizard 5 hours ago

      I have zero issue with needing runtime GC or equivalent like ARC.

      My issue is with ergonomics and performance. In my experience with a range of languages, the most performant way of writing the code is not the way you would idiomatically write it. They make good performance more complicated than it should be.

      This holds true to me for my work with Java, Python, C# and JavaScript.

      What I suppose I’m looking for is a better compromise between having some form of managed runtime vs non managed

      And yes, I’ve also tried Go, and its DX is its own type of pain for me. I should try it again now that it has generics.

      • neonsunset 3 hours ago

        Using spans, structs, object and array pools is considered fairly idiomatic C# if you care about performance (and many methods now default to just spans even outside that).

        What kind of idiomatic or unidiomatic C# do you have in mind?

        I’d say if you are okay with GC side effects, achieving good performance targets is way easier than if you care about P99/999.

    • spullara 8 hours ago

      With Java ZGC the performance aspect has been fixed (<1ms pause times and real world throughput improvement). Memory usage though will always be strictly worse with no obvious way to improve it without sacrificing the performance gained.

      • estebank 2 hours ago

        IMO the best chance Java has to close the gap on memory utilisation is Project Valhalla[1] which brings value types to the JVM, but the specifics will matter. If it requires backwards incompatible opt-in ceremony, the adoption in the Java ecosystem is going to be an uphill battle, so the wins will remain theoretical and be unrealised. If it is transparent, then it might reduce the memory pressure of Java applications overnight. Last I heard was that the project was ongoing, but production readiness remained far in the future. I hope they pull it off.

        1: https://openjdk.org/projects/valhalla/

  • xedrac 9 hours ago

    I like Zig as a replacement for C, but not for C++, due to its lack of RAII. Rust on the other hand is a great replacement for C++. I see Zig as filling a small niche where handling allocation failures is paramount - very constrained embedded devices, etc... Otherwise, I think you just get a lot more with Rust.

    • rastignack 9 hours ago

      Compile times and a painful-to-refactor codebase are Rust’s main drawbacks for me though.

      It’s totally subjective but I find the language boring to use. For side projects I like having fun thus I picked zig.

      To each his own of course.

      • nicce 7 hours ago

        > refactor codebase are rust’s main drawbacks

        Hard disagree about refactoring. Rust is one of the few languages where you can actually do refactoring rather safely without having tons of tests that just exist to catch issues if code changes.

        • rastignack 7 hours ago

          Lifetimes and generics tend to leak, so you have to modify your code all over the place when you touch them, though.

    • xmorse 9 hours ago

      Even better than RAII would be linear types, but it would require a borrow checker to track the lifetimes of objects. Then you would get a compiler error if you forget to call a .destroy() method

  • hermanradtke 10 hours ago

    I wish for “strict” mode as well. My current thinking:

    TypeScript is to JavaScript

    as

    Zig is to C

    I am a huge TS fan.

    • rc00 9 hours ago

      Is Zig aiming to extend C or extinguish it? The embrace story is well-established at this point but the remainder is often unclear in the messaging from the community.

      • PaulRobinson 8 hours ago

        It's improved C.

        C interop is very important, and very valuable. However, by removing undefined behaviours, replacing macros that do weird things with well thought-through comptime, and making sure that the Zig compiler is also a C compiler, you get a nice balance across lots of factors.

        It's a great language, I encourage people to dig into it.

      • yellowapple 6 hours ago

        The goal rather explicitly seems to be to extinguish it - the idea being that if you've got Zig, there should be no reason to need to write new code in C, because literally anything possible in C should be possible (and ideally done better) in Zig.

        Whether that ends up happening is obviously yet to be seen; as it stands there are plenty of Zig codebases with C in the mix. The idea, though, is that there shouldn't be anything stopping a programmer from replacing that C with Zig, and the two languages only coexist for the purpose of allowing that replacement to be gradual.

      • dooglius 9 hours ago

        Zig is open source, so the analogy to Microsoft's EEE [0] seems misplaced.

        [0] https://en.m.wikipedia.org/wiki/Embrace,_extend,_and_extingu...

        • rc00 9 hours ago

          Open source or not isn't the point. The point is the mission and the ecosystem. Some of the Zig proponents laud the C compatibility. Others are seeking out the "pure Zig" ecosystem. Curious onlookers want to know if the Zig ecosystem and community will be as hostile to the decades of C libraries as the Rust zealots have been.

          To be fair, I don't believe there is a centralized and stated mission with Zig but it does feel like the story has moved beyond the "Incrementally improve your C/C++/Zig codebase" moniker.

          • Zambyte 8 hours ago

            > Curious onlookers want to know if the Zig ecosystem and community will be as hostile to the decades of C libraries as the Rust zealots have been.

            Definitely not the case in Zig. From my experience, the relationship with C libraries amounts to "if it works, use it".

            • rc00 8 hours ago

              Are you referring to static linking? Dynamic linking? Importing/inclusion? How does this translate (no pun intended) when the LLVM backend work is completed? Does this extend to reproducible builds? Hermetic builds?

              And the relationship with C libraries certainly feels like a placeholder, akin to before the compiler was self-hosted. While I have seen some novel projects in Zig, there are certainly more than a few "pure Zig" rewrites of C libraries. Ultimately, this is free will. I just wonder if the Zig community is teeing up for a repeat of Rust's actix-web drama but rather than being because of the use of unsafe, it would be due to the use of C libraries instead of the all-Zig counterparts (assuming some level of maturity with the latter). While Zig's community appears healthier and more pragmatic, hype and ego have a way of ruining everything.

              • Zambyte 8 hours ago

                > static linking?

                Yes

                > Dynamic linking?

                Yes

                > Importing/inclusion?

                Yes

                > How does this translate (no pun intended) when the LLVM backend work is completed?

                I'm not sure what you mean. It sounds like you think they're working on being able to use LLVM as a backend, but that has already been supported, and now they're working on not depending on LLVM as a requirement.

                > Does this extend to reproducible builds?

                My hunch would be yes, but I'm not certain.

                > Hermetic builds?

                I have never heard of this, but I would guess the same as reproducible.

                > While I have seen some novel projects in Zig, there are certainly more than a few "pure Zig" rewrites of C libraries.

                It's a nice exercise, especially considering how close C and Zig are semantically. It's helpful for learning to see how C things are done in Zig, and rewriting things lets you isolate that experience without also being troubled with creating something novel.

                  For more than a few examples that are not rewrites, check out https://github.com/allyourcodebase, which is a group that repackages existing C libraries with the Zig package manager / build system.

          • ephaeton 8 hours ago

            zig's C compat is being lowered from 'comptime'-equivalent status to 'zig build'-time-equivalent status. Once you need to put 'extern "C"' annotations on any import/export to C, it'll have gone full circle to C++'s level of C compat, and thus be none the wiser.

            andrewrk's wording towards C and its main ecosystem (POSIX) is very hostile, if that is something you'd like to go by.

ww520 9 hours ago

This is a very educational blog post. I knew ‘comptime for’ and ‘inline for’ were comptime related, but didn’t know the difference. The post explains the inline version only knows the length at comptime. I guess it’s for loop unrolling.

  • hansvm 6 hours ago

    The normal use case for `inline for` is when you have to close over something only known at compile time (like when iterating over the fields of a struct), but when your behavior depends on runtime information (like conditionally assigning data to those fields).
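
    A sketch of that first use case (illustrative struct and field names): the field name handed to @field must be comptime-known, so the loop over the fields has to be `inline`, while the values being assigned are ordinary runtime data.

        const std = @import("std");

        const Config = struct {
            width: u32 = 0,
            height: u32 = 0,
        };

        // `inline for` unrolls at compile time, so `field` is comptime-known in
        // each iteration, while `values` is runtime data.
        fn fill(cfg: *Config, values: []const u32) void {
            inline for (std.meta.fields(Config), 0..) |field, i| {
                @field(cfg, field.name) = values[i];
            }
        }

        pub fn main() void {
            var cfg: Config = .{};
            fill(&cfg, &[_]u32{ 640, 480 });
            std.debug.print("{} {}\n", .{ cfg.width, cfg.height });
        }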

    Unrolling as a performance optimization is usually slightly different, typically working in batches rather than unrolling the entire thing, even when the length is known at compile time.

    The docs suggest not using `inline` for performance without evidence it helps in your specific usage, largely because the bloated binary is likely to be slower unless you have a good reason to believe your case is special, and also because `inline` _removes_ optimization potential from the compiler rather than adding it (its inlining passes are very, very good, and despite having an extremely good grasp on which things should be inlined I rarely outperform the compiler -- I'm never worse, but the ability to not have to even think about it unless/until I get to the microoptimization phase of a project is liberating).

paldepind2 8 hours ago

This is honestly really cool! I've heard praise for Zig's comptime without really understanding what makes it tick. It initially sounds like Rust's constant evaluation, which is not particularly capable. The ability to have types represented as values at compilation time, and _only_ at compile time, is clearly very powerful. It approximates dynamic languages or run-time reflection without any of the run-time overhead and without opening the Pandora's box that is full-blown macros, as in Lisp or Rust's procedural macros.

forrestthewoods 6 hours ago

> When you execute code at compile time, on which machine does it execute? The natural answer is “on your machine”, but it is wrong!

I don’t understand this.

If I am cross-compiling a program is it not true that comptime code literally executes on my local host machine? Like, isn’t that literally the definition of “compile-time”?

If there is an endianness difference between the host and target architectures, I could see Zig choosing to emulate the target machine on the host machine.

This feels so wrong to me. HostPlatform and TargetPlatform can be different. That’s fine! Hiding the host platform seems wrong. Can someone explain why you want to hide this seemingly critical fact?

Don’t get me wrong, I’m 100% on board the cross-compile train. And Zig does it literally better than any other compiled language that I know. So what am I missing?

Or wait. I guess the key is that, unlike Jai, comptime Zig code does NOT run at compile time. It merely refers to things that are KNOWN at compile time? Wait that’s not right either. I’m confused.

  • int_19h 5 hours ago

    The point is that something like sizeof(pointer) should have the same value in comptime code that it has at runtime for a given app. Which, yes, means that the comptime interpreter emulates the target machine.

    The reason is fairly simple: you want comptime code to be able to compute correct values for use at runtime. At the same time, there's zero benefit to exposing the host platform in comptime, because, well, what use case is there for knowing e.g. the size of a pointer on the arch the compiler happens to be running on?
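
    A small illustration (my own sketch): the pointer size baked in here at compile time is the target's, not the host's, so a 32-bit cross-compile reports 4 bytes even when the compiler runs on a 64-bit machine.

        const std = @import("std");
        const builtin = @import("builtin");

        pub fn main() void {
            // builtin describes the *target*, and @sizeOf(*u8) is the target's
            // pointer size, regardless of the machine the compiler runs on.
            std.debug.print("target {s}: pointers are {} bytes\n", .{
                @tagName(builtin.cpu.arch),
                @sizeOf(*u8),
            });
        }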

    • forrestthewoods 4 hours ago

      > Which, yes, means that the comptime interpreter emulates the target machine.

      Reasonable if that’s how it works. I had absolutely no idea that Zig comptime worked this way!

      > there's zero benefit to not hiding the host platform in comptime

      I don’t think this is clear. It is possibly good to hide the host platform given Zig’s more limited comptime capabilities.

      However, in my $DayJob an extremely common and painful source of issues is trying to hide the host platform when it cannot in fact be hidden.

      • int_19h 3 hours ago

        Can you give an example of a use case where you wouldn't want comptime behavior to match runtime, but instead expose host/target differences?

        • forrestthewoods 3 hours ago

          Let’s pretend I was writing some compile-time code that generates code. For example maybe I’m generating serde code. Or maybe I’m generating bindings for C, Python, etc.

          My generation code is probably going to allocate some memory and have some pointers and do some stuff. Why on earth would I want this compile-time code to run on an emulated version of the target platform? If I’m on a 64-bit platform then pointers are 8 bytes; why would I pretend they aren't, even if the target is 32-bit?

          Does that make sense? If the compile-time code ONLY runs on the host platform then you plausibly need to expose both host and target.

          I’m pretty sure I’m thinking about zig comptime all wrong. Something isn’t clicking.