Does this have practical advantages over the JIT runtime optimisation used in e.g. the JVM? My assumption is that you can perhaps generate optimised code without as much of the JVM's analysis overhead, or with more specific optimisations (controlled by the programmer), but honestly the details of how this might compare in practice are a bit over my head.
> For example, instead of a power function that uses a loop, you could generate specialized code like x * x * x * x * x directly. This eliminates runtime overhead and creates highly optimized code.
This is misguided. For decades now, there has been no reason to assume that hand-unrolled code is faster than a for-loop. Compilers optimize this sort of thing, and they do it even better than mindlessly multiplying x by itself. For example, raising x to the power 6 only needs 3 multiplications; see for example: https://godbolt.org/z/Edz4jjqvv
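For concreteness, here is the three-multiplication strategy a compiler effectively applies via repeated squaring (written in OCaml purely for illustration; the godbolt link shows what a real compiler emits):

```ocaml
(* x^6 with 3 multiplications: x^6 = (x^3)^2 *)
let pow6 (x : float) : float =
  let x2 = x *. x in   (* 1st multiplication: x^2 *)
  let x3 = x2 *. x in  (* 2nd: x^3 *)
  x3 *. x3             (* 3rd: x^6 *)
```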
While there are definitely use cases for meta-programming, optimization is not one of them.
Optimization is absolutely one of them, once you start dealing with higher-order programs or programs with sophisticated control flow. Compiler optimizations are not enough to infer the Futamura projections in all cases, for instance.
https://okmij.org/ftp/tagless-final/index.html#tagless-final
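(That page is Kiselyov's tagless-final tutorial. A minimal sketch of the idea in plain OCaml, with illustrative names that are not from the paper: the DSL's syntax is a module signature, and each interpreter, whether an evaluator or a pretty-printer, is a module implementing it.)

```ocaml
(* the "syntax" of a tiny DSL, abstracted over its representation *)
module type SYM = sig
  type 'a repr
  val int : int -> int repr
  val mul : int repr -> int repr -> int repr
end

(* one interpretation: evaluate directly *)
module Eval : SYM with type 'a repr = 'a = struct
  type 'a repr = 'a
  let int n = n
  let mul a b = a * b
end

(* another interpretation: print the expression *)
module Show : SYM with type 'a repr = string = struct
  type 'a repr = string
  let int = string_of_int
  let mul a b = "(" ^ a ^ " * " ^ b ^ ")"
end

(* a term written once against SYM, usable with any interpreter *)
module Example (S : SYM) = struct
  let six_times_seven = S.mul (S.int 6) (S.int 7)
end

let () =
  let module E = Example (Eval) in
  let module P = Example (Show) in
  Printf.printf "%s = %d\n" P.six_times_seven E.six_times_seven
```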
This is fascinating. I could see it being very useful for writing SIMD abstraction layers (like Highway or SIMDe) without so much of the cruft.
> For example, instead of a power function that uses a loop, you could generate specialized code like x * x * x * x * x directly. This eliminates runtime overhead and creates highly optimized code.
Could anyone explain to me how this is different from templates or parameter pack expansion in C++? I can see the constexpr-ness here is encoded in the type system and appears more composable, but I am not sure if I am missing the point.
I looked at the paper but I can't find anything related to C++.
> Could anyone explain to me how this is different from templates or parameter pack expansion in C++?
I don't think it's any different.
> I can see the constexpr-ness here is encoded in the type system
I also see they introduce new constructs like let$, so it's not just a type system thing.
> I looked at the paper but I can't find anything related to C++.
I don't think the author needs to compare their code to C++. That said, it looks to me like it is similar to the upcoming C++26's reflection capabilities.
Typically, multistage languages permit program generation at any stage, including at runtime. So that would be different from C++.
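A sketch of what runtime generation looks like in BER MetaOCaml (again, not the paper's system; Runcode.run compiles and loads the generated code while the program is running), using the same power example but with the exponent known only at runtime, which is exactly the case C++ templates cannot cover:

```ocaml
(* build code for a specialized function at runtime, then compile and run it *)
let make_power (n : int) : int -> int =
  let rec spower n x =
    if n = 0 then .< 1 >.
    else .< .~x * .~(spower (n - 1) x) >.
  in
  Runcode.run .< fun x -> .~(spower n .<x>.) >.

let () =
  let pow5 = make_power 5 in      (* exponent supplied at runtime *)
  Printf.printf "%d\n" (pow5 2)   (* prints 32 *)
```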
How is this different from a syntactic macro?
Two big differences: