chubot 3 days ago

This is already known as "multi-stage programming" or "staged programming" -- I don't see a need for a new term

https://en.wikipedia.org/wiki/Multi-stage_programming

https://okmij.org/ftp/meta-programming/index.html

Comment from 2019 about it, which mentions Zig, Terra/Lua, Scala LMS, etc.:

https://news.ycombinator.com/item?id=19013437

We should also mention big data frameworks and ML frameworks like TensorFlow / Pytorch.

The "eager mode" that Chris Lattner wanted in Swift for ML and Mojo is to actually to get rid of the separation between the stage of creating a graph of operators (in Python, serially) and then evaluating the graph (on GPUs, in parallel).

And also CMake/Make and even autoconf/make and Bazel have stages -- "programming" a graph, and then executing it in parallel:

Language Design: Staged Execution Models - https://www.oilshell.org/blog/2021/04/build-ci-comments.html...

noelwelsh 3 days ago

Like the other comment mentioned, this is staging.

> And compared to Lisps like Scheme and Racket which support hygenic macros, well, Zig doesn’t require everything to be a list.

This comment is a bit ignorant. Racket has the most advanced staging system of any language that I'm aware of. You can build languages in Racket with conventional yet extensible syntax: https://docs.racket-lang.org/rhombus/index.html Zig's metaprogramming facilities are very simple in comparison.

I think staging could be extremely useful in many applications, and I wish it were better supported in mainstream languages.

  • eru 3 days ago

    I guess accusing Lisps of only supporting lists as their data structures has the pedigree of a long established tradition by now? (Though I wish they would at least accuse them of everything having to be cons-pairs.)

    • Y_Y 2 days ago

      My programs are single atoms

      • eru 2 days ago

        I thought the whole point of Lisp was that it allows you to compose bigger programs out of smaller pieces?

        • acka 2 days ago

          Please don't distract Y_Y, they may be well on their way to Lisp NILvana: realizing that everything else is impermanent and unsatisfactory. NIL is ultimate bliss.

          • Y_Y 2 days ago

            "Perfection is achieved ... when there is nothing [nil] left ..." ― Antoine de Saint-Exupéry

  • samatman 2 days ago

    > Zig's metaprogramming facilities are very simple in comparison.

    They work differently, is the main thing. Racket's #lang extensions are very sophisticated indeed, but/and they do what Lisp-style compile-time evaluation has always done: they build up lists which represent the program, what we'd call an AST in most other languages. Yes, I'm well aware that Racket has more data structures than lists! But that is the output of front ends like Rhombus. Some of those lists represent a program which creates a hash map and so on. That's fine.

    Zig comptime is part of the compiling process. Sometimes this mutates the AST or IR, often it does not, instead producing object code or .rodata which is embedded into the final binary. The current implementation is a bit ad-hoc, with some arbitrary limitations (no allocation being the big one), but it's a solid design. Already quite useful, even eloquent, and I'm optimistic that the final form will be a real thing of beauty.

    But it isn't accurate to say that it's 'very simple' in comparison to Racket. They spend their complexity budget in different places.

    • noelwelsh 2 days ago

      I'm a bit confused by what you're saying. Here are a few points:

      1. If you are claiming that "compile-time evaluation" (macros) in Racket works with lists (i.e. the input to a macro is a list, and the output from a macro is a list), that is false. It works with syntax objects: https://docs.racket-lang.org/guide/stx-obj.html

      2. A macro can do anything. It usually emits syntax, but because it's just code it can print output, play a jaunty tune, or implement type checking.

      It seems to me that Racket's macros are strictly a super-set of Zig's comptime. Like comptime, a macro can do arbitrary computation. However, AFAIK there are only two stages in Zig (compile-time and run-time) while Racket has an arbitrary number of stages, and macros can, of course, implement new syntactic forms.

      • samatman 2 days ago

        > If you are claiming that "compile-time evaluation" (macros) in Racket works with lists (i.e. the input to a macro is a list, and the output from a macro is a list), that is false. It works with syntax objects

        This is a distinction without a difference. From your quote:

        > A syntax object contains symbols, lists, and constant values (such as numbers) that essentially correspond to the quoted form of the expression.

        If it's important to you that it's actually atoms and lists, well, some of us generalize better than others. But calling it false? Lame. As is your feigned confusion.

        > A macro can do anything

        Syntax and side effects. Take your pick.

        > It seems to me that Racket's macros are strictly a super-set of Zig's comptime.

        The difference here is that you have experience with one of these systems. I have experience with both. I doubt further interactions would be informative for either of us.

        • noelwelsh 2 days ago

          If you want to take this as a confrontation instead of a discussion I agree it's not worthwhile continuing.

          My confusion is real. I don't get your point. How does "they do what Lisp-style compile-time evaluation has always done: they build up lists" delineate Racket's capabilities compared to Zig? It's false on the surface (syntax objects are not lists) and it's false at a deeper level (macros can do anything). Even if it were true, I don't understand the significance of using lists, or not, as a representation of a program. I know, from experience, you can put any value into a syntax object / list so ¯\_(ツ)_/¯

          I don't understand what "Zig comptime is part of the compiling process" means in contrast to Racket. How does this differentiate between the two? Macros provide an API to the compiler, are part of the compiling process, and you can insert, e.g, assembly code generated by a macro into a Racket program (I did this a long time ago).

          > Syntax and side effects. Take your pick.

          What does this mean? "Take your pick" implies xor, but you use and. Is this a typo?

          If you doubt that macros can do arbitrary side-effects, put the following in a file (e.g. "comptime.rkt") and from Racket call (require "comptime.rkt")

            #lang racket

            (define-syntax when
              (lambda (stx)
                (begin
                  (println "Compile-time")
                  (datum->syntax stx '(println "b")))))

            (begin (println "a") (when) (println "c"))

          You'll see the output

          "Compile-time" "a" "b" "c"

          The println in the macro when runs before the code in the begin form evaluates. I.e. it is running at compile-time.

          • samatman 2 days ago

            > I don't understand what "Zig comptime is part of the compiling process" means in contrast to Racket.

            You've made that clear, yes.

taliesinb 3 days ago

The end-game is just dissolving any distinction between compile-time and run-time. Other examples of dichotomies that could be partially dissolved by similar kinds of universal acid:

* dynamic typing vs static typing, a continuum that JIT-ing and compiling attack from either end -- in some sense dynamically typed programs are ALSO statically typed -- with all function types being dependent function types and all value types being sum types. After all, a term of a dependent sum, a dependent pair, is just a boxed value.

* monomorphisation vs polymorphism-via-vtables/interfaces/protocols, which trade roughly speaking instruction cache density for data cache density

* RC vs GC vs heap allocation managed via compiler-assisted proofs of the memory ownership relationships that say how this is supposed to happen

* privileging the stack and instruction pointer rather than making this kind of transient program state a first-class data structure like any other, to enable implementing your own co-routines and whatever else. an analogous situation: Zig deciding that memory allocation should NOT be so privileged as to be an "invisible facility" one assumes is global.

* privileging pointers themselves as a global type constructor rather than as typeclasses. we could have pointer-using functions that transparently monomorphize in more efficient ways when you happen to know how many items you need and how they can be accessed, owned, allocated, and de-allocated. global heap pointers waste so much space.

Instead, one would have code for which it makes more or less sense to spend time optimizing in ways that privilege memory usage, execution efficiency, instruction density, clarity of denotational semantics, etc, etc, etc.

Currently, we have these weird siloed ways of doing certain kinds of privileging in certain languages with rather arbitrary boundaries for how far you can go. I hope one day we have languages that just dissolve all of this decision making and engineering into universal facilities in which the language can be anything you need it to be -- it's just a neutral substrate for expressing computation and how you want to produce machine artifacts that can be run in various ways.

Presumably a future language like this, if it ever exists, would descend from one of today's proof assistants.

  • packetlost 3 days ago

    > The end-game is just dissolving any distinction between compile-time and run-time

    This was done in the 60s/70s with FORTH and LISP to some degree, with the former being closer to what you're referring to. FORTH programs are typically images of partial application state that can be thought of as a pile of expanded macros and defined values/constants (though there are virtually no guardrails).

    That being said, I largely agree with you on several of these and would like to take it one step further: I would like a language with 99% bounded execution time and memory usage. The last 1% is to allow for daemon-like processes that handle external events in an "endless" loop and that's it. I don't really care how restricted the language is to achieve that; I'm confident the ergonomics can be made to be pleasant to work with.

    • astrobe_ 2 days ago

      > This was done in the 60s/70s with FORTH and LISP to some degree

      Around 2000, Chuck Moore dissolved compile-time, run-time and edit-time with ColorForth, and inverted syntax highlighting in the process (programmer uses colors to indicate function).

      • packetlost 2 days ago

        FORTH was around long before ColorForth

    • andyferris 3 days ago

      Yeah that would be cool. For bounded execution, you should look at “total functional programming” (which always terminates).

      They have this concept of codata for the other 1% to make practical, interactive apps - codata represents things like event streams.

  • noelwelsh 3 days ago

    > The end-game is just dissolving any distinction between compile-time and run-time.

    I don't think this is actually desirable. This is what Smalltalk did, and the problem is it's very hard to understand what a program does when any part of it can change at any time. This is a problem for both compilers and programmers.

    It's better, IMO, to be able to explicitly state the stages of the program, rather than have two (compile-time and run-time) or one (interpreted languages). As a simple example, I want to be able to say "the configuration loads before the main program runs", so that the configuration values can be inlined into the main program as they are constant at that point.
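
    For a rough C++ flavour of that (just a sketch with made-up names, and it only works because the value happens to be known at build time):

      // hypothetical "config stage": evaluated during compilation,
      // so the value is inlined as a constant everywhere it's used
      #include <array>
      #include <cstdio>

      struct Config { int pool_size; };
      constexpr Config load_config() { return {16}; }

      constexpr Config cfg = load_config();
      std::array<int, cfg.pool_size> pool{};   // usable even where a constant is required

      int main() { std::printf("%zu\n", pool.size()); }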

    • naasking 2 days ago

      > This is what Smalltalk did, and the problem is it's very hard to understand what a program does when any part of it can change at any time.

      I don't think dissolving this difference necessarily results in Smalltalk-like problems. Any kind of principled dissolution of this boundary must ensure the soundness of the static type system, otherwise they're not really static types, so the dynamic part should not violate type guarantees. It could look something like "Type Systems as Macros":

      https://www.khoury.northeastern.edu/home/stchang/popl2017/

    • larsrc 3 days ago

      There are pretty important reasons to have this distinction. You want to be able to reason about what code actually gets executed and where. Supply-chain attacks will only get easier if this line gets blurred, and build systems work better when the compile stage is well-defined.

    • gnulinux 2 days ago

      That's not necessarily the only approach though. In a dependently typed language like Agda, there is also no difference between compile-time and runtime computation, not because things can change at any time (Agda is compiled to machine code and is purely functional) but because types are first-class citizens, so any computation you expect to be able to run at runtime, you can run at compile time. Of course, this is in practice very problematic since you can make the compiler loop infinitely, so Agda deals with that by automatically proving that each program halts (i.e. it's Turing-incomplete). If it can't prove that, Agda will refuse to compile the program, or alternatively the programmer can use a pragma to force it (in which case the programmer can make the compiler run infinitely).

      • kazinator a day ago

        Why obsess over the compiler running into a loop? That's way less harmful than the compiled application getting into a loop on the end user's machine.

        • noelwelsh a day ago

          I think part of the purpose of the Agda project is to see how far they can push programs as proofs, which in turn means they care a lot about termination. A language that was aimed more at industrial development would not have this restriction.

          • kazinator 9 hours ago

            If you want to prove propositions at compile time, isn't it hampering not to have Turing-complete power?

  • kazinator a day ago

    > The end-game is just dissolving any distinction between compile-time and run-time.

    No it isn't; nobody wants that. Or not all the time.

    We'd like to use the same language at compile time and run-time.

    But it's useful for compile time to happen here, on our build system, and run-time on the customer's system.

    We don't want those to be the same system, or at least not in production with the actual customer.

pmontra 3 days ago

A question to the author, about a choice in language design.

  // Import some libraries.
  bring s3;
If the keyword was the usual "import" there would be no need to explain what "bring" is. Or, if "bring" is so good, why not

  // Bring some libraries.

?
  • chriscbr 2 days ago

    Totally fair. The choice to name it "bring" instead of "import" or "use" was mainly to add some flavor to the language, and make it easier to distinguish from the top of the file that "ah, this is a Wing code snippet, not a Zig/TypeScript/Python code snippet".

funcDropShadow 3 days ago

This is also a special case of what MetaOCaml calls multi-stage programming. It supports not only two phases but arbitrarily many. A similar prototype also exists for an older Scala version. And Lisp and Forth obviously also support n phases of computation.

warpspin 3 days ago

He missed one of the earliest examples of "languages and frameworks that enable identical syntax to express computations executed in two distinct phases" - immediate words in Forth: https://www.forth.com/starting-forth/11-forth-compiler-defin...

  • Someone 3 days ago

    Immediate words don’t cover the “while maintaining consistent behavior (i.e., semantics) across phases” of the definition given, do they?

    I think normal forth words are way closer to that. They (1) normally just do whatever their definition implies, but inside a colon definition, they (1) compile code that does whatever their definition implies.

    They do miss C++ constexpr (https://en.cppreference.com/w/cpp/language/constexpr). I haven’t read Zig docs, but that seems highly similar to Zig’s comptime to me.

    (1) technically, it’s not “they” doing that themselves but whatever code processes them.

    • kragen 3 days ago

      yeah, the forth feature that actually does this is not immediate words in general, but the two immediate words [ and ], which allow you to do arbitrary ad hoc computations at compile time instead of at run time

      but also you can just put the code you would have put between the [ ] delimiter words into an immediate word, and call it where you would have put the [ ] block. the effect is not exactly the same but it has semantics slightly more consistent with the usual semantics because your immediate word is in fact compiled just like non-immediate words are

  • Munksgaard 3 days ago

    Maybe I'm missing something, but isn't Lisp the original version of this?

    • TeMPOraL 3 days ago

      Pretty much. And not just via macros.

      Lisp languages tend to blend together "runtime" with "load time", and in case of compiled languages, also "compile time". You can write code executing during any one, or any combination of, these phases. You can reuse code between those phases. You can interleave them at will - e.g. by loading more code at runtime, or invoking a compiler, etc.

    • JonChesterfield 2 days ago

      Lisp was also the one that taught us that interpreters and compilers for nominally the same language really need to have consistent semantics, e.g. don't lexically scope in the compiler and dynamically scope in the interpreter.

    • warpspin 3 days ago

      Well, Lisp macros are mentioned in a footnote at least, and yes, maybe even the oldest version of this idea.

      • kragen 3 days ago

        lisp macros are probably newer than forth, but i don't know that they're newer than the words [ and ] in forth. but lisp did permit arbitrary compile-time computation from the beginning, i believe

    • kragen 3 days ago

      probably so, yes, but forth would be #2

cb321 3 days ago

Nim & D also have the compile-time function evaluation he mentions for Zig. Nim also has a full macro system wherein macros are written in Nim - just taking & producing ASTs. I've known people to refer to this/Julia macro systems as "homoiconic". Nim also has a javascript backend to enable similar same-syntax on client&server like his React & clojure examples.

thelittlenag 3 days ago

I've been thinking similar thoughts recently since I've been exploring metaprogramming in Scala and how it can be extended beyond the simplistic hygienic model it currently supports.

What I recently realized is that while compilers in the standard perspective process a language into an AST, do some transformations, and then output some kind of executable, from another perspective they are really no different than interpreters for a DSL.

There tends to be this big divide between what we call a compiler and what we call an interpreter. And we classify languages as being either interpreted or compiled.

But what I realized, as I'm sure many others have before me, is that that distinction is very thin.

What I mean is this: from a certain perspective a compiler is really just an interpreter for the meta language that encodes and hosts the compiled language. The meta-language directs the compiler, generally via statements, to synthesize blocks of code, create classes with particular shapes, and eventually write out certain files. These meta-languages don't support functions, or control flow, or variables; in fact, they are entirely declarative languages. And yet they are the same as the normal language being compiled.

To a certain degree I think the biphasic model captures this distinction well. Our execution/compilation models for languages don't tend to capture and differentiate interpreter+script from os+compiled-binary very well. Or where they do they tend to make metaprogramming very difficult. I think finding a way to unify those notions will help languages if and when they add support for metaprogramming.

  • webnrrd2k 3 days ago

    You'd really enjoy The Structure and Interpretation of Computer Programs. One of the big lessons is that it's basically interpreters all the way down.

    Even hardware is, at some point, "programmed" by someone to behave a certain way.

    • WJW 2 days ago

      A CPU is really just a fast interpreter for machine code.

EricRiese 3 days ago

Raku has this

https://docs.raku.org/language/phasers

It has many more than 2 phases.

Phasers is one of the ideas Raku takes as pretty core and really runs with. So in addition to compile-time programming, it has phasers for run-time events like catching exceptions, and one that's equivalent to the defer keyword in several languages.

graypegg 3 days ago

I wonder if something like Ruby could fit into this category too, even though there isn’t a clean line between the two phases. (I’m stretching the concept a bit heh)

The block inside of a class or module definition is executed first, and then the application can work on the resulting structure generated after that pass. Sorbet (a Ruby static typing library) uses this first pass to generate its type metadata, without running application code. (I think by stubbing the class and module classes themselves?)

StiffFreeze9 3 days ago

Other "biphasic"-like aspects of programming languages and code:

- Documentation generated from inline code comments (Knuth's literate programming)

- Test code

We could expand to

- security (beyond perl taint)

- O(n) runtime and memory analysis

- parallelism or clustering

- latency budgets

And for those academically inclined, formal language semantics like https://en.wikipedia.org/wiki/Denotational_semantics versus operational and others..

gsuuon 2 days ago

My toy language project is also built around multi-stage (though the way it's formed it's more like literate programming) and partly motivated by writing cloud-native applications. I played around with a sketch of this idea implemented using F# computation expressions[1] and partly implemented an Azure backend; at a high level it appears pretty similar to Winglang. When run at "comptime" / CLI, it spins up those resources if necessary and then produces artifacts via msbuild task for servers that run the "runtime" part of the code. The computation expression handles exposing a client and forming the ARM template based on the context. It gets around the inflight/preflight distinction by including the entire app (including provisioning stuff) in each runtime instance, so references outside of route scopes work (instance-globally, not app-globally).

Very excited for multi-stage - especially its potential to provide very good LSP/diagnostics for library users (and authors). It's hard to provide good error messages from libraries for static errors that are hard to represent in the type system, so sometimes a library user sees vague/unrelated errors.

[1] https://github.com/gsuuon/kita/blob/d741c0519914369da9c89241...

jalk 3 days ago

"Biphasic programming" is also present in frameworks like Apache Spark, Tensorflow, build tools like Gradle and code-first workflow engines. Execution of the first phase generates a DAG of code to be executed later. IMO the hardest thing for newcomers is when phase 1 and phase 2 code is interleaved with no immediate clear boundaries, (phase 1 code resembles an internal DSL). The docs need to teach this early on to avoid confusion. A prime offender of this is SBT, with its (perhaps no longer true) 3 stage rocket, which is not really described in the docs (see https://www.lihaoyi.com/post/SowhatswrongwithSBT.html#too-ma...)

a1o 3 days ago

I don't get the dismissal of C++; to me constexpr is exactly that! And now if we get reflection in C++26 it will be possible to do even more incredible things using it, but constexpr is already pretty good.

  • joatmon-snoo 3 days ago

    Er- kinda? Maybe? Not really?

    constexpr does not mean that you can evaluate arbitrary C++ code at compile time. It allows you to evaluate a _very specific subset_ of C++ at compile time that is not at all easy to wrap your head around: look no further than https://en.cppreference.com/w/cpp/language/constexpr to understand the limitations.
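
    To give a rough feel for the shape of it (toy sketch; the cppreference page above is the real reference) -- folding works where the inputs are constants, but plenty of ordinary code is still off-limits:

      #include <array>

      constexpr int square(int x) { return x * x; }

      std::array<int, square(4)> buf{};   // ok: square(4) is a constant expression

      // still off-limits during constant evaluation (as of C++20/23): I/O,
      // calling non-constexpr functions such as std::rand(), reinterpret_cast,
      // and letting memory allocated at compile time escape to runtime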

kragen 3 days ago

this 'biphasic programming' thing is item #9 in pg's list of 'what made lisp different' from 02001: https://paulgraham.com/diff.html

it's interesting to read this biphasic programming article in the context of pg's tendentious reading of programming language history

> Over time, the default language, embodied in a succession of popular languages, has gradually evolved toward Lisp. 1-5 are now widespread. 6 is starting to appear in the mainstream. Python has a form of 7, though there doesn't seem to be any syntax for it. 8, which (with 9) is what makes Lisp macros possible, is so far still unique to Lisp, perhaps because (a) it requires those parens, or something just as bad, and (b) if you add that final increment of power, you can no longer claim to have invented a new language, but only to have designed a new dialect of Lisp ; -)

it of course isn't absolutely unique to lisp; forth also has it

i think the academic concept of 'staged programming' https://scholar.google.com/scholar?cites=2747410401001453059... is a generalization of this, and partial evaluation is a very general way to blur the lines between compile time and run time

  • kazinator 3 days ago

    I suspect what we don't have is the ability for Forth words to act as symbols that can be given a completely different meaning.

    Can we have:

      oof dup rot swap foo
    
    where oof is a delimiter pairing with foo, both of which we developed? It causes dup, rot and swap not to execute but somehow be accumulated as just symbols; then foo interprets them in such a way that they are unrelated to duplicating, rotating and swapping stack elements.

    Word definitions do something like this. There is a : (colon) word which causes the next word to be interpreted as a name for a new definition and then subsequent words until a semicolon are shored up into the definition. But that's a fixed thing, built into the language. It's not defined in the Forth standard as a symbolic quoting mechanism.

    • alexisread 2 days ago

      Lots of forths can quote, which is kind of what you are asking here - https://retroforth.org/Handbook-Latest.txt Retro uses operators on the quote.

      Similarly, Freeforth (anonymous definitions) and Able Forth (same) can do this. You also have aliasing and hooks in the languages as other tools.

      The way Rebol and derivatives (Red, Rye) treat blocks and dialects [myword yourword] executedword is similar, though they are all interpreted, not compiled.

      • kragen 2 days ago

        i think you've misunderstood what kaz was asking

        retroforth quotes are just anonymous definitions, and the words inside of them have the same meaning they would have in any other definition. as far as i can tell, you can't even index into them or query their length the way you can with {executable arrays} in postscript. i think the same is true of anonymous definitions in freeforth and able forth

        ansi standard forth has :noname https://forth-standard.org/standard/core/ColonNONAME which is the same thing as retroforth quotes, except that it doesn't nest within other definitions, so you can't use it to define properly nesting control structures, the way retroforth does

        but kaz was asking if it's possible to construct a context that gives the words inside it a completely different meaning, so that you can interpret dup, rot, and swap in a way that is unrelated to duplicating, rotating and swapping stack elements. this is in fact possible in ansi forth and in most other forths (my comment sibling to yours explains how), although i don't know if it's possible in the forths you've mentioned

    • kragen 2 days ago

      i think this is three lines of code in ansi standard forth. in more detail:

      you can do this by having oof parse words from the input stream until it parses foo, and all the facilities for doing that are included in the ansi standard (i.e., it doesn't require knowledge of theoretically private implementation details of a given forth). there aren't any standard words that work this way, although, as you're probably aware, \ " .( ( char [char] c" s" and ." do consume data from the input stream in various ways (but without parsing it into words), and in particular ' ['] create value variable constant marker parse-name to defer is and postpone all read a single word from the input and do various things with it

      i'm no forth expert but i think you can define your desired oof as follows in ans forth:

          : roof begin parse-name                             \ read oof
                  2dup s" foo" compare 0= if 2drop exit then  spoof again ;
          : oof immediate goof roof proof ;
      
      what this does is determined by pre-existing definitions for goof, spoof, and proof. spoof somehow accumulates a word, while goof and proof are invoked at the beginning and end of the string of words. the simplest interesting thing to do is to concatenate them in a buffer and type them out, which can be accomplished by providing the following definitions before compiling the above:

          create boof 256 allot  0 value poof          \ buffer and pointer for oof
          : proof boof poof type ;  : goof 0 to poof ;        \ print oof, gone oof
          : spoof >r poof boof + r@ move  r> poof + to poof ; \ string put for poof
      
      if you defer goof, spoof, and proof, you can change what they do without recompiling oof and roof

      i can't swear that this is ansi-compliant forth but i did test it in gforth and pfe. i also tried testing it in yforth and jonesforth but couldn't get them to run on this amd64 linux

      the one debatable thing here is that you said 'symbols', but spoof's arguments are just a string pointer and a length, probably in some kind of input buffer. forth doesn't natively have symbols in the lisp sense, but it does have something very similar, which is a dictionary of words. spoof can look up a word in the dictionary in the same way ' or ['] would, by using the word find†, which returns a pointer to the word's dictionary entry (a so-called 'execution token')

      this may sound suspiciously like symbol interning in lisp, and if the naming of ' in forth isn't inspired by the lisp readmacro of the same name, it's at least a damned suspicious coincidence. but the semantics are different from interning in an important way, which is why i didn't use find in my definition of roof above: if the word you're searching for hasn't been defined, find doesn't add it to the dictionary. for dup, rot, and swap, you'd be fine, but if you stuck a bar or a quux in there, find would return 0 (and the original string). also, in forth, you can have more than one word with the same spelling, and find will only find the latest one that's still in scope, which is a different behavior from lisp symbols

      if you want the lisp symbol behavior, you'd have to define your own obarray and intern, which is not too hard

      ______

      † the standard word find wants a counted string, and that's enough hassle that many implementations like gforth provide a find-name which takes a string in the format provided by parse-name instead, and define find in terms of find-name. but find-name isn't in the ans standard, and you can define it in terms of find if you have to

mikewarot 3 days ago

Since we're going down the road of interesting ideas, let's add declarative programming to the mix

The Metamine language allowed for a magic equals := if I recall correctly, which had the effect of always updating the result anytime the assigned value changed for the rest of the life of the program. Mixing it with normal assignments and code made for some interesting capabilities.

JonChesterfield 2 days ago

I'm pretty sure staged programming is a design mistake induced by the capabilities of computers in the ~70s. Needing to pay attention to which parts of a program have already been compiled and which haven't is totally orthogonal to whatever problem you're trying to solve with the computer. It's going to go the way of manual memory management.

The implementation shall be JIT compiled with a separate linter running in the editor for that is the right thing.

We aren't there yet but I believe it's where we'll end up.

zamalek 3 days ago

For what it's worth, I like the function coloring Rust has; I don't believe compilation results should vary across separate runs. It's the same spirit as the rest of the language: highly predictable. The likes of diesel are very cool, but still amount to a big fat "yikes" from me.

I think the actual problem is the glacial pace of applying it, and the lack of support in trait impls (e.g. i32.min) and syntax. If it were applied to every pure fn+syntax it would probably cover a great deal of what Zig is doing.

hbbio 3 days ago

Funny to see the example of RSC in that context!

Multi-stage programming and distribution with the same syntax between clients and servers has been _the_ key feature of Opa (opalang.org) 15 years back. Funny because Opa was a key inspiration for React and its JSX syntax but it took a lot of time to match the rest of the features.

AlexErrant 3 days ago

Another example of biphasic programming is parser generators with DSLs for generating parsers, e.g. Tree Sitter or Lezer.

  • samatman 2 days ago

    I wouldn't include codegen under biphasic programming, which is, according to the lede of The Fine Article:

    > characterized by languages and frameworks that enable identical syntax to express computations executed in two distinct phases or environments while maintaining consistent behavior (i.e., semantics) across phases

    Munging together strings into something which is hopefully source code is the ultimate escape valve for languages which have poor or nonexistent facilities for biphasic programming. You compile a program, it executes, it spits out a program, you compile that and run it. That isn't biphasic: it's one phase, twice.

indyjo 3 days ago

Would embedding code (which is executed by some other runtime, like SQL, shaders, compute kernels etc.) also be considered "biphasic" or "multi-stage" programming?

Svoka 3 days ago

To be honest `comptime` seems excessive. Like, if something can be calculated at compile time, it should be. Why the extra keywords for that? Rust is mostly doing it already.

  • staunton 3 days ago

    > if something can be calculated at compile time, it should be

    Often you don't actually want some things done at compile time although they could be done. It can lead to, e.g., excessive executable sizes, excessive compile times. If you've ever considered using `-ftemplate-depth` in C++, you've probably encountered such a case.

    Maybe it sounds like I'm splitting hairs and you would say "of course in such crazy cases it's not true", but if you look at C++ projects and what can be done at compile time with modern C++, you would find it's not rare at all.
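
    A toy illustration (made up, C++17-ish): force a lookup table to be computed during constant evaluation and the whole thing lands in the binary's read-only data, and a bigger N means proportionally more work for the compiler.

      #include <array>
      #include <cstddef>

      // 64 KiB table baked into the binary; crank N up and watch
      // build time and executable size grow
      constexpr std::size_t N = 1 << 16;

      constexpr auto table = [] {
          std::array<unsigned char, N> t{};
          for (std::size_t i = 0; i < N; ++i)
              t[i] = static_cast<unsigned char>(i * 31);
          return t;
      }();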

    • tomjakubowski 3 days ago

      Yes. My attitude earlier in my life as a programmer was that moving as much as possible to "compile time" was a universal good. The truth is, of course, that like everything it has trade-offs.

      I once worked on an Elixir application whose configuration was accessed almost exclusively at compile time. This was done with the well-intended notion that saving runtime cycles was a good thing. It meant that changing any config (e.g.: "name of s3 bucket") meant recompiling the entire application. It also meant we had to wait for a full application rebuild in CI to deploy fixes for simple configuration errors. Not so super.

  • trealira 3 days ago

    One reason you might want an explicit keyword is so that it fails to compile if it can't be calculated at compile time, which is what was intended, rather than fall back to calculating it at runtime. It also seems useful as an ad-hoc method of annotating pure functions; those functions are guaranteed not to modify global variables at runtime or do I/O.
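
    C++'s consteval is one concrete version of that "fail instead of falling back" idea (rough C++20 sketch):

      #include <cstdio>

      consteval int twice(int x) { return 2 * x; }   // callable only in constant evaluation

      int main(int argc, char**) {
          int a = twice(21);        // ok: the argument is a compile-time constant
          // int b = twice(argc);   // error: argc isn't known until runtime
          std::printf("%d\n", a);
      }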

    • Svoka 3 days ago

      I guess so... This is reason for `consteval` and such.

  • samatman 2 days ago

    The comptime keyword is only needed in Zig if the code could execute at runtime. It forces compile-time evaluation, but it isn't the only way for it to happen.

    So Zig actually does what you seem to think it should do (and I agree! it's great!): if something can be calculated at compile time, it is. Only when the compiler can't statically deduce a compile-time construct is it necessary to use the `comptime` keyword: in fact, it's an error to use the keyword when the context is already comptime.

williamcotton 3 days ago

I like the term biphasic! The prior terms for this with Javascript web development were "isomorphic" or "universal". I don't think these ever really caught on.

I've been rendering the same React components on the server and browser side for close to a decade and I've come across some really good patterns that I don't really see anywhere else.

Here's the architectural pattern that I use for my own personal projects. For fun I've starting writing it in F# and using Fable to compile to JS:

https://fex-template.fly.dev

A foundational element is a port of express to the browser, aptly named browser express:

https://github.com/williamcotton/browser-express

With this you write not only biphasic UI components but also route handlers. In my opinion and through lots of experience with other React frameworks this is far superior to approaches taken by the mainstream frameworks and even how the React developers expect their tool to be used. One great side effect is that the site works the same with Javascript disabled. This also means the time to interaction is immediate.

It keeps a focus on the request itself with a mock HTTP request created from click and form post events in the browser. It properly architects around middleware that processes an incoming request and outgoing response, with parallel middleware for either the browser or server runtime. It uses web and browser native concepts like links and forms to handle user input instead of doubling the state handling of the browser with controlled forms in React. I can't help but notice that React is starting to move away from controlled forms. They have finally realized that this design was a mistake.

Because the code is written in this biphasic manner and the runtime context is injected it avoids any sort of conditionals around browser or server runtime. In my opinion it is a leaky abstraction to mark a file as "use client" or "use server".

Anyways, I enjoyed the article and I plan on using this term in practice!

JamesBarney 3 days ago

As a Microsoft fanboy I have to list out their biphasic additions.

Linq. Have a set of collection manipulation methods that can be run in C# or transformed into SQL.

Blazor. Have components that can run on the server, in the browser, or via several other rendering tactics.

z5h 3 days ago

Take a look at term_expansion and goal_expansion in the Prologs that support them.

ceving 3 days ago

> macro systems like those in C, C++, and Rust

Suggesting that the macros of C and Rust may be the same is an insane failure.

BTW: meta-programming means "code which generates code" and not "code which runs earlier than other code".