Cure v0.22.0 :: Loose Ends
by Aleksei Matiushkin
v0.22.0 is deliberately narrow. Every item it ships was scaffolded
during the v0.20.0 AST-polish release and then deferred through
v0.21.0, because the user-visible surface needed more work than the
"Through the Segments" release could carry. This release closes those
gaps: multi-statement lambda bodies work inside argument lists,
binary comprehension generators (for <<b <- buf>>) parse without
mis-tokenisation, and trailing rest::binary segments carry a
byte_size-based refinement so subsequent binary matches type-check
without having to re-assert every invariant. The v0.21.0 "Unreleased"
first-class FSM overhaul also graduates into a shipped surface.
Multi-statement lambda bodies
The indented-block form has been the only way to write a
multi-statement lambda since v0.19.0. That works perfectly when the
lambda is at the top of a function body, but falls apart the moment
the lambda is buried inside an argument list—the lexer suppresses
newlines inside parens, so there is no newline to turn into an
:indent.
v0.22.0 introduces two explicit shapes that work anywhere:
# Brace-delimited: statements separated by `;`, final expression is
# the result. Works inside argument lists because `{ }` and `;` are
# already in the token stream.
map(xs, fn(x) -> { let y = x + 1; y + 2 })
# End-terminated: a single `end` keyword closes the body. Statement
# separator is still `;` (or newline when the lambda is not inside
# parens).
map(xs, fn(x) -> let y = x + 1; y + 2; end)
Both compile to the same {:block, meta, exprs} AST node that the
v0.19.0 indented form produces, so nothing downstream changes. The
only new piece of meta is :block_shape (:brace or :end), which
lets the Printer and algebra formatter round-trip the author's
chosen shape.
end is now a reserved keyword. No .cure file in the stdlib or
examples uses end as an identifier, but any third-party code that
does will need to rename the binding.
Binary comprehension generators
The v0.21.0 release notes called this "the parser currently
mis-tokenises <- inside <<...>> as a less-than comparison." The
root cause was that parse_bin_segment/1 parses each segment’s value
at binding power 0, which happily consumes < as a comparison
operator. v0.22.0 adds a dedicated parse_binary_generator/1 path
triggered by parse_generator_or_filter/1 when the generator starts
with :binary_open; segments inside it are parsed at binding power
42, safely above comparison.
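The fix is ordinary Pratt-parser hygiene: a sub-expression parsed with a
minimum binding power above the comparison operators' can never consume
`<`. A minimal Python sketch of the idea (the value 42 mirrors the
release's "above comparison" choice; the token set and the power 10 for
`<` are illustrative, not Cure's actual tables):

```python
# Minimal Pratt expression parser. Comparison `<` binds at power 10, so
# parsing a segment's value with min_bp above that stops before `<`,
# leaving the `<-` arrow for the generator machinery to consume.
BINDING_POWER = {"<": 10, "+": 20, "*": 30}

def parse_expr(tokens, min_bp=0):
    """Parse a left-associative expression, consuming `tokens` in place."""
    lhs = tokens.pop(0)  # operand
    while tokens and BINDING_POWER.get(tokens[0], -1) >= min_bp:
        op = tokens.pop(0)
        rhs = parse_expr(tokens, BINDING_POWER[op] + 1)
        lhs = (op, lhs, rhs)
    return lhs

# At binding power 0 (the old behaviour) `<` is swallowed as a comparison:
toks = ["byte", "<", "rest"]
assert parse_expr(toks, 0) == ("<", "byte", "rest")

# At binding power 42 parsing stops at the operand, leaving `<` unread:
toks = ["byte", "<", "rest"]
assert parse_expr(toks, 42) == "byte" and toks == ["<", "rest"]
```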
[byte for <<byte <- "abc">>] # [97, 98, 99]
[word for <<word::16 <- buf>>] # 16-bit words
[ch for <<ch::utf8 <- text>>] # UTF-8 code points
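For readers who do not write binary comprehensions daily, the three
examples compute, in order: the bytes of a binary, its 16-bit words, and
its Unicode code points. A Python sketch of the same semantics
(big-endian word order is an assumption of mine; these notes do not
restate Cure's default endianness):

```python
import struct

# <<byte <- "abc">> : iterate a binary byte by byte.
assert [byte for byte in b"abc"] == [97, 98, 99]

# <<word::16 <- buf>> : iterate 16-bit words (big-endian assumed).
buf = b"\x00\x01\x00\x02"
assert [word for (word,) in struct.iter_unpack(">H", buf)] == [1, 2]

# <<ch::utf8 <- text>> : iterate Unicode code points.
assert [ord(ch) for ch in "héllo"] == [104, 233, 108, 108, 111]
```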
Each generator emits a {:binary_generator, meta, [pattern, source]}
qualifier node. Cure.Compiler.Codegen.compile_comprehension/3
lowers it to Erlang’s b_generate form inside the existing :lc
comprehension, so mixed qualifier lists (list generator + binary
generator + filter) compile uniformly.
The type checker’s :comprehension branch now binds every
qualifier’s pattern variables into the body’s environment via
bind_pattern_vars/3, picking up the full v0.21.0 binary-segment
narrowing for free.
byte_size arithmetic refinements
The v0.21.0 release notes also promised, "once the SMT translator grows
the arithmetic support":
byte_size(rest) == byte_size(scrutinee) - sum_of_preceding_sizes
That lands in v0.22.0 in two pieces. First, Cure.SMT.Translator
speaks byte_size/1 as an uninterpreted (Int) -> Int function.
Queries that reference byte_size prepend
(declare-fun byte_size (Int) Int) and switch to the QF_UFLIA
logic automatically; queries that don't mention byte_size stay on
QF_LIA, so unrelated refinement checks are not slowed down.
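The preamble-and-logic switch is easy to picture as a pure function over
the query's assertions. A sketch (the function name and the string-level
check are assumptions of mine; the real translator presumably works over
its own query representation, not strings):

```python
def finalize_query(assertions):
    """Assemble an SMT-LIB query, choosing the logic by byte_size usage.

    byte_size is declared as an uninterpreted (Int) -> Int function and
    forces the UF-extended linear-arithmetic logic; queries that never
    mention it stay on plain QF_LIA.
    """
    uses_byte_size = any("byte_size" in a for a in assertions)
    lines = [f"(set-logic {'QF_UFLIA' if uses_byte_size else 'QF_LIA'})"]
    if uses_byte_size:
        lines.append("(declare-fun byte_size (Int) Int)")
    lines += [f"(assert {a})" for a in assertions]
    lines.append("(check-sat)")
    return "\n".join(lines)

q = finalize_query(["(= (byte_size rest) (- (byte_size scrutinee) 3))"])
assert "(set-logic QF_UFLIA)" in q
assert "(declare-fun byte_size (Int) Int)" in q

q = finalize_query(["(> x 0)"])
assert "(set-logic QF_LIA)" in q and "declare-fun" not in q
```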
Second, Cure.Types.PatternRefinement.narrow/2 rewrites its binary-
segment branch. Every preceding segment with a byte-aligned size
contributes to a running sum; the trailing rest::binary (or
rest::bytes / rest::bitstring) segment receives a refinement:
# <<tag::8, len::16, rest::binary>>
# => rest : {x: Bitstring | byte_size(x) == byte_size(scrutinee) - 3}
When a preceding segment’s size cannot be linearised (a non-byte-aligned
unit, or a dynamic size with no constant factor), the pipeline
emits a :refinement_ignored event under code E037 and rest
degrades to plain Bitstring. Runtime pattern matching still
enforces the remaining invariants.
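The running-sum part of the narrowing can be sketched as a fold over the
segments preceding the rest binding. This sketch is mine and handles
constant sizes only (the segment encoding, and dropping the
constant-factor dynamic sizes the real pass also linearises, are
simplifications):

```python
def rest_refinement(preceding_bit_sizes):
    """Byte offset for the trailing rest segment's refinement.

    Each element is a preceding segment's size in bits, or None when the
    size is dynamic. Returns the constant N in
    byte_size(rest) == byte_size(scrutinee) - N, or None when the prefix
    cannot be linearised (the E037 / plain-Bitstring degradation path).
    """
    total_bits = 0
    for bits in preceding_bit_sizes:
        if bits is None:       # dynamic size: no constant offset
            return None
        total_bits += bits
    if total_bits % 8 != 0:    # non-byte-aligned prefix
        return None
    return total_bits // 8

# <<tag::8, len::16, rest::binary>>  =>  byte_size(rest) == byte_size(s) - 3
assert rest_refinement([8, 16]) == 3
# <<flag::1, rest::bitstring>>       =>  prefix not byte-aligned: degrade
assert rest_refinement([1]) is None
# a dynamic-size segment before rest =>  degrade as well
assert rest_refinement([8, None]) is None
```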
First-class FSM handling graduates
The v0.21.0 CHANGELOG.md carried a sizeable "Unreleased" block
describing the %Cure.FSM.State{} rewrite of callback-mode FSMs:
three accepted start_link/1 init shapes, on_transition clauses
receiving the struct as their 4th argument, @notify_transitions,
on_start / on_stop, event payloads on send_event/3, and a
self-hosted Std.Fsm that exposes it all. v0.22.0 simply promotes
that block to a shipped release. Simple-mode (gen_statem) FSMs are
entirely unchanged.
By the numbers
- 3 new error-catalog codes (E035-E037) with examples and fix guidance.
- 3 new example files (lambda_block.cure, binary_comprehension.cure,
  byte_size_refinement.cure).
- 1114 tests pass (up from 1078; 3 doctests + 1111 tests).
- mix credo --strict: 0 issues across 142 source files;
  mix cure.check.stdlib: 25/25; mix cure.check.examples: 44/44.
What’s next (v0.23.0)
The package-registry story has been resliced into its own release.
v0.23.0 now targets:
- Remote package-registry index service. A Cure.Project.Registry HTTP
  client against a read-only index protocol.
- Publication signing. Ed25519 archive signing and a transparency log
  (modelled on sigstore/rekor).
- Hex.pm cross-publishing. cure publish --hex exports a Cure package
  into a Hex-compatible tarball so existing tooling can consume it.
Getting started
git clone https://github.com/am-kantox/cure-lang.git
cd cure
mix deps.get && mix test
mix escript.build
./cure version # Cure 0.22.0
The repository is at github.com/am-kantox/cure-lang. v0.21.0 moved binary handling to the surface; v0.22.0 cleans up the corners.