📕 subnode [[@jakeisnt/memory safety]] in 📚 node [[memory-safety]]

Ideas and articles on memory safety.

We can design around it explicitly or implicitly in many ways, but software developers must always be aware of the memory their system uses.

Systems like Forth are interesting because they don't just abstract over a stack machine - they are the stack machine. This gives the programmer direct, immediate knowledge of how much memory they're using at any given time.
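
As a loose illustration (a hypothetical Rust sketch, not Forth itself): when the data stack is the machine's entire working memory, "how much memory am I using?" is just the current stack depth, visible at every step.

```rust
// A minimal stack machine in the spirit of Forth: the data stack *is* the
// memory model, so memory in use is simply the stack depth.
struct Machine {
    stack: [i64; 64], // the entire working memory, fixed and visible
    depth: usize,     // current memory usage, in cells
}

impl Machine {
    fn new() -> Self {
        Machine { stack: [0; 64], depth: 0 }
    }
    fn push(&mut self, v: i64) {
        self.stack[self.depth] = v;
        self.depth += 1;
    }
    fn pop(&mut self) -> i64 {
        self.depth -= 1;
        self.stack[self.depth]
    }
    // "+" in Forth: consumes two cells, produces one; net memory change is -1 cell.
    fn add(&mut self) {
        let (b, a) = (self.pop(), self.pop());
        self.push(a + b);
    }
}

fn main() {
    let mut m = Machine::new();
    m.push(2);
    m.push(3);
    m.add();
    println!("result = {}, cells in use = {}", m.pop(), m.depth);
}
```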

Type systems that encapsulate memory are interesting, but by comparison feel like a bit of a hack: we have some system A with its own semantics, then we graft a system A' over it by making assumptions about A, then we enforce the rules of A' through some third system B that feels unrelated to both the domain of A and the language of A', but that enables optimisations and the like. I'm not entirely sure how to express the feeling of fighting both the type system's enforcement of memory and the language itself, but it deters experimentation - and play is the point.
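
A concrete instance of that layering, in Rust (my own example, not one drawn from any of the articles): the lifetime annotation below is system A' talking about memory, enforced by the borrow checker, even though the function itself is only about comparing strings.

```rust
// Memory (system A) gets a layer of ownership and borrow rules (system A')
// grafted on top, enforced by the borrow checker - the "system B" that lives
// in the type checker rather than in the problem domain.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    // The lifetime 'a is pure bookkeeping about memory, not about the problem
    // of "pick the longer string" - yet the signature is mostly about it.
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let a = String::from("memory");
    let result;
    {
        let b = String::from("safety");
        result = longest(&a, &b);
        println!("{result}"); // fine: `b` is still alive here
    }
    // Uncommenting the next line is the "fighting the type system" moment:
    // the *idea* is harmless to experiment with, but the grafted memory rules
    // reject it because `b` has already been dropped.
    // println!("{result}");
    println!("{}", longest(&a, "play"));
}
```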

Safety in Non-Memory-Safe Languages is interesting; it surveys the different solutions proposed for abstracting over programming languages. To me, the design of a good solution has a lot to do with how developers think about what they're using - we want to give developers as much information at the top level as possible while letting them choose to focus on some things and not others at specific times (ideas vs. hardened systems vs. optimisations), while giving the programmer all the information about their ideas, their interfaces, and how their code compiles and interacts with the GC. GC obscures all of this, and optimisation engineers for GC'd languages end up tweaking compiler flags, memorizing the bytecode generated by specific functions, or rolling their own systems and toolchains to get new performance profiles out of the language.

But becoming an expert in how a programming language is implemented just to optimise programs written in it feels like a violation of the abstraction boundary the language was created to erect in the first place!

Mmapped memory management, though idiomatic on Unix systems, also feels like a bit of a hack - it doesn't match developers' mental model of memory. Nobody wants to figure out how much memory operation X occupies, allocate space for it, and then execute the operation - they want to first accomplish the operation however they can, then revisit their work and apply optimisation constraints once they determine that this point in the code is the bottleneck.
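
A hedged sketch of that allocate-first shape, assuming a Unix target and the external `libc` crate; the size and the work done in the buffer are arbitrary placeholders:

```rust
// "Decide how much memory you need, reserve it, then do the work" - the order
// of operations that mmap pushes on you.
use std::ptr;

fn main() {
    let len = 4096; // we must commit to a size *before* doing any work
    unsafe {
        let buf = libc::mmap(
            ptr::null_mut(),
            len,
            libc::PROT_READ | libc::PROT_WRITE,
            libc::MAP_PRIVATE | libc::MAP_ANON,
            -1,
            0,
        );
        assert_ne!(buf, libc::MAP_FAILED, "mmap failed");

        // Only now can we get on with the actual operation.
        let bytes = buf as *mut u8;
        for i in 0..len {
            *bytes.add(i) = (i % 256) as u8;
        }
        println!("first byte: {}", *bytes);

        libc::munmap(buf, len);
    }
}
```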

Generational References are new in Vale and seem both safer and more efficient than current solutions. Absolutely worth learning more about. That they're seriously considering branch prediction optimisations here is a great sign.
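
A rough sketch of the core idea - closer to a generational arena than to Vale's actual implementation, and every name below is made up: allocations carry a generation number, references remember the generation they expect, and each dereference is a single comparison, which is exactly where branch prediction pays off.

```rust
// Sketch of generational checks: each slot carries a generation counter that
// is bumped on free, and each reference carries the generation it expects.
// A dereference is valid only if the two still match.
struct Slot<T> {
    generation: u64,
    value: Option<T>,
}

struct Arena<T> {
    slots: Vec<Slot<T>>,
}

#[derive(Clone, Copy)]
struct GenRef {
    index: usize,
    generation: u64,
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { slots: Vec::new() }
    }
    fn alloc(&mut self, value: T) -> GenRef {
        self.slots.push(Slot { generation: 0, value: Some(value) });
        GenRef { index: self.slots.len() - 1, generation: 0 }
    }
    fn free(&mut self, r: GenRef) {
        let slot = &mut self.slots[r.index];
        slot.value = None;
        slot.generation += 1; // any outstanding GenRef is now stale
    }
    // The generational check itself: one comparison, which the branch
    // predictor should almost always get right.
    fn get(&self, r: GenRef) -> Option<&T> {
        let slot = &self.slots[r.index];
        if slot.generation == r.generation {
            slot.value.as_ref()
        } else {
            None // use-after-free caught instead of reading freed memory
        }
    }
}

fn main() {
    let mut arena = Arena::new();
    let r = arena.alloc(String::from("hello"));
    println!("{:?}", arena.get(r)); // Some("hello")
    arena.free(r);
    println!("{:?}", arena.get(r)); // None: stale generation detected
}
```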

Functional Programming and Reference Counting (r/ProgrammingLanguages) addresses reference-counting techniques for functional programming in Lean; specifically, optimizing theorem proving.
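
The reuse trick those techniques rely on - update a value in place when its reference count is one - has a loose analogue in Rust's `Rc::make_mut`. This is only an analogy, not Lean's actual mechanism:

```rust
use std::rc::Rc;

// "Functional" update of a vector: semantically we produce a new value, but
// when the reference count is 1 the update happens in place, with no copy.
fn bump_first(v: &mut Rc<Vec<i32>>) {
    // make_mut clones the Vec only if it is shared (count > 1); otherwise it
    // hands back a mutable reference to the existing allocation.
    Rc::make_mut(v)[0] += 1;
}

fn main() {
    let mut unique = Rc::new(vec![1, 2, 3]);
    bump_first(&mut unique); // count is 1: updated in place, no new allocation

    let mut shared = Rc::new(vec![1, 2, 3]);
    let observer = Rc::clone(&shared);
    bump_first(&mut shared); // count is 2: the Vec is cloned first

    println!("{:?} {:?} {:?}", unique, shared, observer);
    // [2, 2, 3] [2, 2, 3] [1, 2, 3]
}
```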
