3
submitted 1 day ago by armchair_progamer to c/formal_methods

From README:

At the heart of coq-of-rust is the translation of Rust programs to the proof system Coq 🐓. Once some Rust code is translated to Coq, it can then be verified using standard proof techniques.

Here is an example of a Rust function:

fn add_one(x: u32) -> u32 {
    x + 1
}

Running coq-of-rust translates it to the following Coq:

Definition add_one (τ : list Ty.t) (α : list Value.t) : M :=
  match τ, α with
  | [], [ x ] =>
    ltac:(M.monadic
      (let x := M.alloc (| x |) in
      BinOp.Panic.add (| M.read (| x |), Value.Integer Integer.U32 1 |)))
  | _, _ => M.impossible
  end.

Functions such as BinOp.Panic.add are part of the standard library for Rust in Coq that we provide. We can then express and verify specifications on the code in Coq.

Blog:

The same group also recently started coq-of-python, which is the same thing but for Python:

Alternate Rust-to-_ translators:

Other Rust formal verification projects:

  • Creusot: Formally verifies Rust code with annotations representing preconditions/postconditions/assertions, by translating the MIR to Why3 and then solving the resulting verification conditions.
  • Kani: Formally verifies Rust code with annotations, by using model checking powered by CBMC.
  • Verus: Formally verifies a Rust DSL with syntax extensions to represent preconditions/postconditions/assertions, by using the Z3 SMT solver.
7
submitted 1 day ago* (last edited 1 day ago) by armchair_progamer to c/programming_languages

Another post from the creator of Vale, the last one was "Borrow checking, RC, GC, and the Eleven (!) Other Memory Safety Approaches".

This discusses Vale's higher RAII, a feature implemented using linear types. Vale has linear types as opposed to Rust's affine types: affine types prevent using a value after it's "moved", while linear types additionally require that the value eventually be moved (you cannot simply let it go out of scope). On top of that, in Vale one can define the set of functions that destroy a type, and in order to destroy a value of that type, one must call a function from that set; that is what Vale calls "higher RAII".

This part of the linked post contains seven examples where Vale's higher RAII enforces invariants that Rust's affine types cannot. In a language like C, you can "forget" to use a value and it will leak. In a language like Rust, the value will be properly deallocated, but you can still have "forgotten" about it, e.g. perhaps the value represents a transaction that you meant to commit (one of the seven examples). In Vale, you can't forget about a value as long as it has a linear type, because doing so will raise a compile-time error.
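To make that concrete, here is a minimal Rust sketch (the Transaction type and its methods are hypothetical, not from the post): it compiles even though the transaction is silently dropped without ever being committed or rolled back, which is exactly the mistake a linear type would turn into a compile-time error.

struct Transaction;

impl Transaction {
    fn begin() -> Transaction {
        Transaction
    }

    // Consuming `self` means the value can't be used afterwards (affine),
    // but nothing forces the caller to ever call this.
    fn commit(self) {
        println!("committed");
    }

    fn rollback(self) {
        println!("rolled back");
    }
}

fn main() {
    let _tx = Transaction::begin();
    // Oops: neither commit() nor rollback() is called. Rust silently drops
    // `_tx` at the end of scope; a linear type system like Vale's would
    // reject this function because the value is never moved.
}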

The rest of the post further discusses higher RAII, elaborating on what it is and why it's useful, explaining why it's more general than other languages' defer and how it could be integrated into other languages, and more.

[-] armchair_progamer 26 points 3 days ago* (last edited 2 days ago)

C++’s mascot is an obese sick rat with a missing foot*, because it has 1000+ line compiler errors (the stress makes you overeat and damages your immune system) and footguns.

EDIT: Source (I didn't make up the C++ part)

28
submitted 3 days ago* (last edited 3 days ago) by armchair_progamer to c/programming_languages

From the site:

We have entered a world in which we need to do more with less. If you, like us, have frowned at the difficulty and inefficiency of creating software, and wondered if there is a better way, Meta is for you. It is a descendant of the acclaimed REBOL and Logo computer languages. Logo was designed in academia to teach programming in a simple yet powerful way. It is widely used in education and influenced many newer systems and languages. REBOL extended this concept to professional programming. It was invented by Carl Sassenrath, the chief software designer of the famous Amiga computer, the first consumer multimedia system. Meta takes the next step by combining this design with the virtues of the C programming language, which has been the standard for software interoperability and performance for half a century.

meta-lang-examples (GitHub)

Try online ("console")

8
submitted 4 days ago by armchair_progamer to c/formal_methods

This blog post is about Agda Core, a project to build a formally specified core language for (a reasonable subset of) Agda together with a reference implementation of a type checker for it. If you have been following my work, you might know that I have been working on this project on and off for the past few years, yet the current version is still very much a work in progress. Hence, in the second part of this post, I will dive into a few examples of some of the more unexpected and interesting obstacles we encountered while trying to turn Agda Core from a dream into reality.

15
Exploring the c4... compiler? (registerspill.thorstenball.com)
submitted 4 days ago* (last edited 4 days ago) by armchair_progamer to c/programming_languages

c4 ("C in four functions"; GitHub)

I remember coming across c4 when it was released ten years ago. It got me excited: hey, C in four functions, that means it’s easy to understand right?

That excitement turned into “oh, I see” as soon as I scrolled through the code. c4 is dense, barely commented, and, frankly, strange. It’s unlike anything else I had come across in compiler land.

After reading through the code this week here are the other adjectives I would add to the list: clever, tricky, fascinating, cool. It’s a compiler, it’s a VM, it’s an interpreter, it’s a parser, it’s art, it’s trickshot programming.

Not to be confused with C3, "a programming language that builds on the syntax and semantics of the C language, with the goal of evolving it while still retaining familiarity for C programmers".

[-] armchair_progamer 6 points 4 days ago* (last edited 4 days ago)

I could understand method = associated function whose first parameter is named self, so it can be called like self.foo(…). This would mean functions like Vec::new aren’t methods. But the author’s requirement also excludes functions that take generic arguments like Extend::extend.

However, even the above definition gives old terminology new meaning. In traditional OOP languages, all functions in a class are considered methods: those only callable from an instance are “instance methods”, while the others are “static methods”. So translating OOP terminology into Rust, all associated functions are still considered methods, and those with/without method call syntax are instance/static methods.

Unfortunately I think that some people misuse “method” to only refer to “instance method”, even in OOP languages, so to be 100% unambiguous the terms have to be (see the Rust sketch after this list):

  • Associated function: function in an impl block.
  • Static method: associated function whose first argument isn’t self (even if it takes Self under a different name, like Box::leak).
  • Instance method: associated function whose first argument is self, so it can be called like self.foo(…).
  • Object-safe method: a method callable from a trait object.
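A quick Rust sketch of those categories (the Counter type here is just for illustration, not from the linked discussion):

struct Counter {
    n: u32,
}

impl Counter {
    // Associated function and "static method": no self parameter at all.
    fn new() -> Counter {
        Counter { n: 0 }
    }

    // Still a "static method" by the definition above: it takes Self,
    // but under a name other than `self` (compare Box::leak).
    fn into_inner(c: Counter) -> u32 {
        c.n
    }

    // "Instance method": the first parameter is self, so it supports
    // method call syntax: counter.increment().
    fn increment(&mut self) {
        self.n += 1;
    }
}

fn main() {
    let mut counter = Counter::new();
    counter.increment();
    println!("{}", Counter::into_inner(counter)); // prints 1
}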
8
submitted 5 days ago by armchair_progamer to c/rust
2
submitted 6 days ago by armchair_progamer to c/formal_methods

Context: Dafny is a programming language with a built-in, SMT-solver-backed verifier that checks formal specifications (e.g. preconditions, postconditions, assertions).

Dafny has features found in most programming languages; for example, covariant/contravariant/"non-variant" (invariant) type parameters. But in order to keep verification sound and decidable in reasonable time, these features come with extra restrictions and other complexities.

This article explains that there are 2 more "variance modes", because the "covariant" and "non-variant" variances may each be "cardinality-preserving" or "not cardinality-preserving".

// Most "traditional" languages have 3 variances
type Foo<
    +CovariantParameter,
    NonVariantParameter,
    -ContravariantParameter>
// Dafny has 5
type Bar<
    +CovariantCardinalityPreservingParameter, 
    NonVariantCardinalityPreservingParameter,
    -ContravariantParameter, 
    *CovariantNonCardinalityPreservingParameter, 
    !NonVariantNonCardinalityPreservingParameter>
10
submitted 6 days ago by armchair_progamer to c/programming_languages

Implementing first class functions in a bytecode interpreter is trivial.

But how do compilers that generate machine code (or lower to C, or SSA) implement higher order functions? Back in 2021, I found an answer when contributing closures to the Pallene compiler.

Today I was researching something loosely related, and found yet another neat trick called defunctionalization in this paper.

Defunctionalization is a transform -- a way to re-write the original program without using higher order functions such that it can be trivially compiled to a flat IR in subsequent compiler passes.

The paper uses an OCaml example, and I'll be porting the same program to Haskell. Our implementation will assume that the language being compiled supports GADTs, though it's certainly possible to defunctionalize without them.
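Roughly, the idea looks like this (a Rust sketch of the general technique, not the paper's OCaml program or the Haskell port): every first-class function becomes a constructor of an ordinary data type carrying its captured environment, and all calls go through a single first-order apply function, which later passes can compile to a flat IR.

// Higher-order original:
// fn apply_twice(f: impl Fn(i32) -> i32, x: i32) -> i32 { f(f(x)) }

// Defunctionalized: one enum variant per closure, holding its captures.
enum Func {
    AddN(i32), // was: |x| x + n
    Square,    // was: |x| x * x
}

// The single first-order dispatcher that replaces indirect calls.
fn apply(f: &Func, x: i32) -> i32 {
    match f {
        Func::AddN(n) => x + *n,
        Func::Square => x * x,
    }
}

fn apply_twice(f: &Func, x: i32) -> i32 {
    apply(f, apply(f, x))
}

fn main() {
    println!("{}", apply_twice(&Func::AddN(3), 10)); // 16
    println!("{}", apply_twice(&Func::Square, 3));   // 81
}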

16
submitted 1 week ago by armchair_progamer to c/programming_languages

For the last year and a half, I and my recently-added collaborator Jane Losare-Lusby have been working in secret on a safe systems language that could be learned about as quickly as one can learn Go. I think we might have something worth exploring.

June (programming language) GitHub

18
submitted 1 week ago* (last edited 1 week ago) by armchair_progamer to c/programming_languages

From the homepage:

Thinking in services

Jolie crystallises the programming concepts of service-oriented computing as linguistic constructs. The basic building blocks of software are not objects or functions, but rather services that can be relocated and replicated as needed. A composition of services is a service.

Tailored for microservices and APIs

Jolie is a contract-first programming language, which puts API design at the forefront. It supports both synchronous and asynchronous communication. Data models are defined by types that support refinement [...], and DTO (Data Transfer Objects) transformations are transparently managed by the interpreter.

From vision:

Our vision is to design a programming language with a different trade-off: instead of optimising for computation, the aim of Jolie is to offer native abstractions for the creation and composition of services.

By shifting the focus on services from libraries and frameworks to the programming language, we can change significantly how programmers manage their knowledge regarding service programming. The key idea is that the important abstractions for service programming should be crystallised in the programming language, translating to an easier learning curve for new service developers and less knowledge to be managed.

4
submitted 1 week ago* (last edited 1 week ago) by armchair_progamer to c/programming_languages

This was a conference at MIT that was held 2 days ago. The presentations are all recorded here, and there are summaries and links to the presented papers on the main site.

The mission of the MIT PL Review is to highlight recent developments that we believe have significant potential to shape the future direction of PL research and/or industry practice. We aim to select papers that may substantially transform the PL community and beyond, with a focus on emerging trends rather than established lines of research. We favor papers whose contributions are broadly accessible (and likely to be appreciated) across the PL community, but we do not limit the papers to those published at PL venues. Our selection process is not meant to provide an objective evaluation of works but rather to highlight and celebrate works that resonated with our committee members.

See also: PL Review 2023

7
submitted 1 week ago* (last edited 1 week ago) by armchair_progamer to c/programming_languages

Algebraic Subtyping is a type system devised by Stephen Dolan in 2016, in his dissertation. It extends Hindley-Milner with subtyping, in a way that preserves decidability and principality. Over the past few years, I have implemented Algebraic Subtyping in my language Pinafore (omitting record types). Pinafore is, as far as I know, currently the only language to implement this type system besides Dolan's own MLsub prototype.

See also: Pinafore and Sixteen Unusual Things about Pinafore

[-] armchair_progamer 9 points 3 weeks ago* (last edited 3 weeks ago)

Java the language, in human form.

[-] armchair_progamer 51 points 1 month ago* (last edited 1 month ago)
public class AbstractBeanVisitorStrategyFactoryBuilderIteratorAdapterProviderObserverGeneratorDecorator {
    // boilerplate goes here
}
[-] armchair_progamer 8 points 2 months ago* (last edited 2 months ago)

I believe the answer is yes, except that we’re talking about languages with currying, and those can’t represent a zero argument function without the “computation” kind (remember: all functions are Arg -> Ret, and a multi-argument function is just Arg1 -> (Arg2 -> Ret)). In the linked article, all functions are in fact “computations” (the two variants of CompType are Thunk ValType and Fun ValType CompType). The author also describes computations as “a way to add side-effects to values”, and the equivalent in an imperative language to “a value which produces side-effects when read” is either a zero-argument function (getXYZ()), or a “getter” which is just syntax sugar for a zero-argument function.

The other reason may be that it’s easier in an IR to represent computations as intrinsic types vs. zero-argument closures. Except if all functions are computations, then your “computation” type is already your closure type. So the difference is again only if you’re writing an IR for a language with currying: without CBPV you could just represent closures as things that take one argument, but CBPV permits zero-argument closures.
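For the record, here is a tiny Rust rendering of the grammar as I described it above (my own naming, not the linked article's code): functions only ever appear as computations, and a zero-argument "function" is just a thunk.

// Rough sketch only; mirrors the CompType description in this comment.
enum ValType {
    Int,
}

enum CompType {
    // A suspended value: forcing it may perform side-effects, like a
    // zero-argument function or getter in an imperative language.
    Thunk(ValType),
    // A one-argument function whose body is itself a computation, so
    // multi-argument functions are curried: Fun(a, Fun(b, ...)).
    Fun(ValType, Box<CompType>),
}

fn main() {
    // getXYZ(): a zero-argument "function" is just a thunked Int.
    let _getter = CompType::Thunk(ValType::Int);
    // A curried two-argument function Int -> Int -> Int.
    let _add = CompType::Fun(
        ValType::Int,
        Box::new(CompType::Fun(
            ValType::Int,
            Box::new(CompType::Thunk(ValType::Int)),
        )),
    );
}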

[-] armchair_progamer 6 points 2 months ago

Go as a backend language isn’t super unusual; there’s at least one other project (https://borgo-lang.github.io) which chose it. And there are many languages which compile to JavaScript or C, but Go strikes a balance: it’s faster than JavaScript while still having memory management, unlike C.

I don’t think panics revealing the Go backend are much of an issue, because true “panics” that aren’t handled by the language itself are always bad. If you compile to LLVM, you must implement your own debug symbols to get nice-looking stack traces and line-by-line debugging like C and Rust have; otherwise debugging is impossible and crashes show you raw assembly. Even in Java or JavaScript, core dumps are hard to debug, ugly, and leak internal details; the reason these languages have nice exceptions is that they implement exceptions and detect errors on their own before they become “panics”, so that when a program crashes in Java (e.g. dereferences null) it doesn’t crash the JVM. Golang’s backtrace will probably be much nicer than the default of C or LLVM, and you may be able to implement a system like Java’s which catches most errors and gives your own stack trace beforehand.

Elm’s kernel controversy is also something completely different. The problem with Elm is that the language maintainers explicitly prevented people from writing FFI to/from JavaScript except in the maintainers’ own packages, after allowing this feature for a while, so many old packages broke and were unfixable. And there were more issues: the language itself was very limited (meaning JS FFI was essential) and the maintainers’ responses were concerning (see “Why I’m leaving Elm”). Even Rust has features that are only accessible to the standard library and compiler (“nightly”), but they have a mechanism to let you use them if you really want, and none of them are essential like Elm-to-JS FFI, so most people don’t care. Basically, as long as you don’t become very popular and make a massively inconvenient, backwards-incompatible change for purely design reasons, you won’t have this issue: it’s not even “you have to implement Go FFI”, not even “if you do implement Go FFI, don’t restrict it to your own code”, it’s “don’t implement Go FFI and allow it everywhere, become very popular, then suddenly restrict it to your own code with no decent alternatives”.

[-] armchair_progamer 23 points 6 months ago

It's funny because, though I'm probably in the minority, I strongly prefer JetBrains IDEs.

Which ironically are much more "walled gardens": closed-source and subscription-based, with only a limited subset of parts and plugins open-source. But JetBrains has a good track record of not enshittifying and, because you actually pay for their product, they can make a profitable business off not doing so.

[-] armchair_progamer 9 points 7 months ago* (last edited 7 months ago)

Rust isn't good for fast iteration (Python with GPU-accelerated libraries is still the winner here). Mojo could indeed turn out to be the next big thing for AI programming. But there are a few issues:

  • The #1 thing, IMO: Modular still hasn't released the source for the compiler. Nobody uses a programming language without an open-source compiler! (Except MATLAB/Mathematica, but those have their own controversies and issues. Seriously, what's the next highest-ranking language in the Stack Overflow rankings with only closed-source compilers/interpreters? Is there even one?)

    • Even if Mojo is successful, the VC funding combined with the compiler being closed-source means that the developers may try to extract money from the users, e.g. by never releasing the source and charging people for using Mojo or for extra features and optimizations. Because VCs don't just give money away. There's a reason nearly every popular language is open-source, and it's not just the goodness of people's hearts: not only individuals but also big companies don't want to pay to use a language, nor risk having to pay when their supplier changes the terms.
  • Modular/Mojo has lots of bold claims ("programming language for all of AI", "the usability of Python with the performance of C") without much to show for them. Yes, Mojo got a 35,000x speedup over Python on a small benchmark; but writing a prototype language which achieves massive speedups without compromising readability on a small benchmark is considerably easier than doing so for a large, real-world ML program. It's not that I expect Mojo to already be used in real-world programs; I don't, and I get that it's early-stage. It's that Modular really shouldn't be calling it the "programming language for all of AI" when, so far, none of today's AI is written in Mojo.

    • What makes the Mojo developers think they can make something that others can't? The field of programming languages is dominated by very smart, very talented people. So many that it's hard to imagine Mojo developing anything which others won't be able to replicate in a few months of dedicated coding. And if Modular doesn't open-source the compiler, those others will replicate it, both with funding from competing companies and universities and in their spare time.

With all that being said, I have decent hopes for Mojo. I can imagine that the bold claims and toy benchmarks are just a way to get VC funding, and the actual developers have realistic goals and expectations. And I do predict that Mojo will succeed in its own niche, as a language augmenting Python to add high-performance computing; and since it's just augmenting Python, it probably will do this within only one or two years, using Python's decades of development and tooling to be something which can be used in production. What I don't expect, is for it to be truly groundbreaking and above the competition in a way that the hype sometimes paints it as.

And if they do succeed, they need to open-source the compiler, or they will have some competitor which I will be supporting.

[-] armchair_progamer 62 points 8 months ago* (last edited 8 months ago)

I’m not involved in piracy/DRM/gamedev but I really doubt they’ll track cracked installs and if they do, actually get indie devs to pay.

Because what’s stopping one person from “cracking” a game, then “installing” it 1,000,000 times? Whatever metric they use to track installs has to prevent abuse like this, or you’re giving random devs (of games that aren’t even popular) stupidly high bills.

When devs see more installs than purchases, they’ll dispute and claim Unity’s numbers are artificially inflated. Which is a big challenge for Unity’s massive legal team, because in the above scenario they really are. Even if Unity successfully defends the extra installs in court, it would be terrible publicity to say “well, if someone manages to install your game 1,000 times without buying it 1,000 times you’re still responsible”. Whatever negative publicity Unity already has for merely charging for installs pales in comparison, and this would actually get most devs to stop using Unity, because nobody will risk going into debt or unexpectedly losing a huge chunk of revenue for a game engine.

So, the only reasonable metric Unity has to track installs is whatever metric is used to track purchases, because if someone purchases the game 1,000,000 times and installs it, no issue, good for the dev. I just don’t see any other way which prevents easy abuse; even if it’s tied to the DRM, if there’s a way to crack the DRM but not remove the install counter, some troll is going to do it and fake absurd amounts of extra installs.

[-] armchair_progamer 7 points 10 months ago