1
14
submitted 1 month ago by armchair_progamer to c/programming_languages

How much progress have you made since last time? What new ideas have you stumbled upon, what old ideas have you abandoned? What new projects have you started? What are you working on?

Once again, feel free to share anything you’ve been working on, old or new, simple or complex, tiny or huge, whether you want to share and discuss it, or simply brag about it - or just about anything you feel like sharing!

The monthly thread is the place for you to engage /c/programming_languages on things that you might not have wanted to put up a post for - progress, ideas, maybe even a slick new chair you built in your garage. Share your projects and thoughts on others’ ideas, and most importantly, have a great and productive month!

Also see: https://www.reddit.com/r/ProgrammingLanguages/comments/1b3fqrz/march_2024_monthly_what_are_you_working_on_thread/

2
0
submitted 1 day ago by cli345 to c/programming_languages

This is just a very naive execution model for concurrency.

What do you think about it?

3
5
submitted 3 days ago by armchair_progamer to c/programming_languages
4
10
submitted 4 days ago* (last edited 4 days ago) by armchair_progamer to c/programming_languages

Key quote IMO:

As organizations grow, one cannot depend on everyone being good at their job, or even average, if statistics are to be believed. With this I would like to talk about the scalability of a programming language, which I will define as:

A programming language is more scalable if an engineer unfamiliar with a code base written in it produces correct code more quickly.

Scalability is often at odds with peak effectiveness, the maximum effectiveness of an engineer who is intimately familiar with the codebase, because the features driving peak effectiveness are often enabling abstractions tailored towards the specific use case at hand, like macros and support for domain-specific languages. These abstractions can make domain experts more effective, but present an additional barrier to entry to everyone else. At the extreme end of this spectrum sit code golf languages.

5
13
submitted 6 days ago by armchair_progamer to c/programming_languages

This essay says that inheritance is harmful and, if possible, you should "ban inheritance completely". You see these arguments a lot, along with advice like "prefer composition over inheritance". Most of these arguments say that inheritance causes problems in practice, but they don't preclude inheritance working in another context, maybe with a better language syntax, and they don't explain why inheritance became so popular in the first place. I want to explore what's fundamentally challenging about inheritance and why we all use it anyway.

6
5
submitted 4 days ago* (last edited 4 days ago) by armchair_progamer to c/programming_languages

This article presents and explains SMoL Tutor. In summary, "SMoL Tutor is a web app that corrects common misconceptions about basic concepts in modern programming languages." Also see the paper, Identifying and Correcting Programming Language Behavior Misconceptions.

7
6
submitted 1 week ago by armchair_progamer to c/programming_languages

From the article:

I present the “Lambda Screen”, a way to use terms of pure lambda calculus for generating images. I also show how recursive terms (induced by fixed-point combinators) can lead to infinitely detailed fractals.

8
16
submitted 1 week ago by armchair_progamer to c/programming_languages

This is an old language (the front page says "designed in 2004"). The website says the latest release was in 2016, but the GitHub repo has commits as recent as last month.

It's part of the Ecere SDK ("founded in 1997"), which includes an IDE and a set of libraries for 3D graphics, networking, and more. The site shows various programs written in eC, such as a chess game and a media player.

9
7
submitted 1 week ago* (last edited 1 week ago) by armchair_progamer to c/programming_languages

Abstract:

      In programming, protocols are everywhere. Protocols describe the pattern of interaction (or communication) between software systems, for example, between a user-space program and the kernel or between a local application and an online service. Ensuring conformance to protocols avoids a significant class of software errors. Subsequently, there has been a lot of work on verifying code against formal protocol specifications. The pervading approaches focus on distributed settings involving parallel composition of processes within a single monolithic protocol description. However we observe that, at the level of a single thread/process, modern software must often implement a number of clearly delineated protocols at the same time which become dependent on each other, e.g., a banking API and one or more authentication protocols. Rather than plugging together modular protocol-following components, the code must re-integrate multiple protocols into a single component.

      We address this concern of combining protocols via a novel notion of 'interleaving' composition for protocols described via a process algebra. User-specified, domain-specific constraints can be inserted into the individual protocols to serve as 'contact points' to guide this composition procedure, which outputs a single combined protocol that can be programmed against. Our approach allows an engineer to then program against a number of protocols that have been composed (re-integrated), reflecting the true nature of applications that must handle multiple protocols at once.

      We prove various desirable properties of the composition, including behaviour preservation: that the composed protocol implements the behaviour of both component protocols. We demonstrate our approach in the practical setting of Erlang, with a tool implementing protocol composition that both generates Erlang code from a protocol and generates a protocol from Erlang code. This tool shows that, for a range of sample protocols (including real-world examples), a modest set of constraints can be inserted to produce a small number of candidate compositions to choose from.

      As we increasingly build software interacting with many programs and subsystems, this new perspective gives a foundation for improving software quality via protocol conformance in a multi-protocol setting.

Related: session types, i.e. types which enforce protocols, so that communication violating the protocol is ill-typed.
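To give a flavour of that idea, here is a toy sketch in Haskell (my own illustration, not the paper's Erlang tooling): a channel carries a phantom type recording which protocol step it expects next, so sends and receives made out of order simply don't typecheck.

{-# LANGUAGE DataKinds, KindSignatures #-}

-- Hypothetical protocol: send an Int, then receive a Bool, then done.
data Step = SendInt | RecvBool | Done

-- A channel indexed by the protocol step it expects next (runtime part elided).
newtype Chan (s :: Step) = Chan ()

sendInt :: Chan 'SendInt -> Int -> IO (Chan 'RecvBool)
sendInt (Chan c) _n = pure (Chan c)

recvBool :: Chan 'RecvBool -> IO (Bool, Chan 'Done)
recvBool (Chan c) = pure (True, Chan c)

-- A client that follows the protocol; swapping the two calls is a type error.
client :: Chan 'SendInt -> IO Bool
client c0 = do
  c1 <- sendInt c0 42
  (b, _c2) <- recvBool c1
  pure b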

10
57
submitted 1 week ago by SuperFola to c/programming_languages
11
10
submitted 2 weeks ago by armchair_progamer to c/programming_languages

[term-lisp] does not support variables, and its functions are rules that describe how to replace a given term with another one.

For example, consider how the booleans are defined:

(true = Bool True)
(false = Bool False)

This means "when you see the term "true", replace it with the term "Bool True"

...

term-lisp supports first-class pattern matching. This means that you can have functions that return patterns.

For example, consider how the if expression is defined:

(if (:literal true) a b = a)
(if (:literal false) a b = b)

Here the terms true and false are automatically expanded, so the expressions above are a shorthand for:

(if (:literal (Bool True)) a b = a)
(if (:literal (Bool False)) a b = b)

...

To prevent unwanted execution, expressions in term-lisp are evaluated lazily i.e. only when they are needed.

...

What happens if we construct an expression that looks like function application, but the function being applied is not defined? In most languages, this would result in an error, but in term-lisp we create a new datatype/constructor. For example, the expression Pair foo bar would evaluate to... Pair foo bar, i.e. we save a new object of type Pair containing the values of foo and bar.
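To make the rewriting model concrete, here is a minimal Haskell sketch (my own toy code, not term-lisp's implementation; it rewrites eagerly rather than lazily and elides :literal). Rules map a pattern to a replacement, and a term that no rule matches is simply its own value, which is the "unknown application becomes a constructor" behaviour described above.

import qualified Data.Map as M

-- Terms are symbols, pattern variables, or applications.
data Term = Sym String | Var String | App [Term] deriving (Show, Eq)

type Rule = (Term, Term)          -- (pattern, replacement)
type Env  = M.Map String Term     -- variable bindings from a successful match

match :: Term -> Term -> Maybe Env
match (Var v)  t        = Just (M.singleton v t)
match (Sym a)  (Sym b)  | a == b = Just M.empty
match (App ps) (App ts) | length ps == length ts =
  M.unions <$> sequence (zipWith match ps ts)
match _ _ = Nothing

subst :: Env -> Term -> Term
subst env (Var v)  = M.findWithDefault (Var v) v env
subst env (App ts) = App (map (subst env) ts)
subst _   t        = t

rewrite :: [Rule] -> Term -> Term
rewrite rules t0 =
  case [subst env rhs | (lhs, rhs) <- rules, Just env <- [match lhs t]] of
    (t' : _) -> rewrite rules t'
    []       -> t   -- no rule fired: the term is its own value (a "constructor")
  where
    -- rewrite subterms first (eager/innermost; real term-lisp is lazy)
    t = case t0 of
          App ts -> App (map (rewrite rules) ts)
          _      -> t0

boolRules :: [Rule]
boolRules =
  [ (Sym "true",  App [Sym "Bool", Sym "True"])
  , (Sym "false", App [Sym "Bool", Sym "False"])
  , (App [Sym "if", App [Sym "Bool", Sym "True"],  Var "a", Var "b"], Var "a")
  , (App [Sym "if", App [Sym "Bool", Sym "False"], Var "a", Var "b"], Var "b")
  ]

-- rewrite boolRules (App [Sym "if", Sym "true", Sym "x", Sym "y"])  ==>  Sym "x"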

12
11
Fledgling Languages List (fll.presidentbeef.com)
submitted 2 weeks ago* (last edited 2 weeks ago) by armchair_progamer to c/programming_languages

This site exists to promote languages which are in development or have not yet reached a wider audience. This includes languages that do not ever intend to reach a wider audience, but are being developed for fun, learning, academia or for no good reason at all.

See also:

13
5
Functional Shell Snake (programming.dev)
submitted 2 weeks ago* (last edited 2 weeks ago) by hardkorebob to c/programming_languages

In Oct '23, a basic Text widget with a Subprocess to run shell commands let me take my command-line fu into a different arena, because Tkinter gave me special tricks. This tool (shell functions) lets me type up GUI apps or any Python script with less effort, for my fingers and brain.

wc newide; ksh newide | wc

513 1641 10859 newide [shell functions]

600 1959 29558 newide [python file]

The blocks of color are capital letters colored using tkinter methods (tag_add & tag_configure) with a bg and fg of the same color, to make them look like Lego blocks; it's all ASCII. On the right is an idle clicker game/toy made with pnk.lang, also just ASCII/UTF-8. The IDE you see in the pic was also coded using the functional shell language I call pnk.lang, and the original first iteration of this specification is in the legacy folder of the repo below. It's just me learning how to code faster in Python, but in shell, but in neither at this point.

https://github.com/dislux-hapfyl/pynksh

https://www.reddit.com/r/pnk/

Don't be put off by Ksh, because Bash can also interpret it; it's just shell functions that print Python code. I have plans to use an Xbox controller so I can move away from building with the keyboard at piecemeal rates. I will do this by abstracting away identifiers and all the data we use as engineers into "dictionaries", then transforming that into a spatial system. I have already done the first step! I use a basic grid of at most 10 rows x 3 cols, indexed row[0-9] col[0-2], as you see on the left side, and a letter to categorize the functions of pnk (shell+python), giving me 30 x 26 [a-z] available slots: a00, b11, k22 and so on... a visual shortcut that reduces cognitive load and typing for me.

Take a closer look at my repo without dismissing it too quickly. It could seem unnecessary, but maybe someone else will see what I have made as useful, and how we're going to take it to higher levels of abstraction and create a new realm for making computer applications in an abstract game/IDE of art and code. Perhaps it's that creator effect that happens when you make something for the first time that makes me see its future utility and appeal. All of this was made incrementally using my own software, built from scratch. I have a great vision and would love to speak to anyone who is interested. I also demonstrate the utility of this small tool on YouTube [link in repo]. Thanks.

14
11
submitted 3 weeks ago* (last edited 3 weeks ago) by armchair_progamer to c/programming_languages

The author wrote a tiny AST and bytecode interpreter to test which is faster (related to the paper "AST vs. Bytecode: Interpreters in the Age of Meta-Compilation").

Tl;dr: as in the paper, the AST interpreter is faster.

Keep in mind the "meta-compilation": the AST or bytecode interpreter itself runs on a JIT-compiling engine, in the author's tests V8 or JSC.
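For readers unfamiliar with the distinction, here is a minimal sketch (my own, not the author's code) of the two interpreter styles for a toy arithmetic language:

-- A toy expression language.
data Expr = Lit Int | Add Expr Expr

-- AST interpreter: walk the tree directly.
evalAst :: Expr -> Int
evalAst (Lit n)   = n
evalAst (Add a b) = evalAst a + evalAst b

-- Bytecode interpreter: first flatten to instructions, then run a stack machine.
data Instr = Push Int | IAdd

compile :: Expr -> [Instr]
compile (Lit n)   = [Push n]
compile (Add a b) = compile a ++ compile b ++ [IAdd]

run :: [Instr] -> [Int] -> Int
run []            (x : _)      = x
run (Push n : is) st           = run is (n : st)
run (IAdd   : is) (y : x : st) = run is (x + y : st)
run _             _            = error "stack underflow"

-- evalAst e == run (compile e) []  for any e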

Github

15
11
submitted 3 weeks ago by armchair_progamer to c/programming_languages

Verified compiler specifically means that, besides the compiler itself, the authors have written an interpreter for the source language (here, PureLang) and target language (assembly). They then proved that, for any valid program P, the observable behavior (like side-effects but not quite) produced by source_interpreter(P) is identical to the observable behavior produced by target_interpreter(compiler(P)).

Note that the proof doesn't rule out matching bugs: the source interpreter and compiler could both be buggy. Or the compiler and target interpreter, or the source and target interpreters, or the proof kernel, or the language the interpreters are written in, or the CPU itself, or probably other things. See The Trusted Computing Base of the CompCert Verified Compiler.

CompCert was the first large-scale formally-verified compiler, publicly released in 2008 and still being improved today. It compiles C99 to a variety of CPU architectures. More recently, CakeML is a formally-verified compiler for a functional language, specifically a subset of Standard ML. This is harder because ML's semantics are further from assembly than C's; for example, the developers of CakeML also had to formally verify a garbage collector. And lastly, PureCake itself is a compiler for an even higher-level language: it is similar to Haskell, and like Haskell, everything is lazy and purely functional, and side effects go through monads (example).

Thesis: Verified compilation of a purely functional language to a realistic machine semantics. Part 1 is about PureCake, part 2 is about converting an official ARM assembly specification into the proof language partly automatically (making "the target interpreter is buggy" less likely, although still possible).

16
9
submitted 3 weeks ago by armchair_progamer to c/programming_languages

We propose a new reactive programming language, that has been meticulously designed to be reactive-only. We start with a simple (first-order) model for reactivity, based on reactors (i.e. uninstantiated descriptions of signals and their dependencies) and deployments (i.e. instances of reactors) that consist of signals. The language does not have the notion of functions, and thus unlike other RP languages there is no lifting either. [emphasis added]

If you've ever used React hooks, you probably had at least one situation where you used them incorrectly and it led to subtle bugs. Maybe not even breaking your app, just causing it to re-render unnecessarily. React has lints for this, but they don't cover all cases.

The fundamental problem is that React function components and hooks are an Embedded Domain-Specific Language (EDSL) in JavaScript. They have their own semantics and control flow (e.g. changing a state value re-runs the function component, but code within certain "use" hooks only runs in certain situations), but they're also manipulated with ordinary JavaScript (e.g. function components are just JavaScript functions, and state variables are just variables you can use in ordinary JavaScript expressions; the latter is what the authors mean by "lifting"). It's very hard for developers to shift their mindset from reasoning about JavaScript to reasoning about React, and they (especially newcomers) very often conflate the semantics and do something "the JavaScript way" which, in React, causes a slowdown or crash.

The authors' solution is to make a fully reactive language which simply doesn't have the functionality to write non-reactive code. To me, the examples look like ordinary code, but they compile directly into "reactor graphs", rather than into a standard control-flow graph that non-trivially encodes a reactive computation. I can only assume it's much easier for a compiler manipulating the former type of graph to detect (if not simply avoid generating) problems like infinite loops and redundant computations.

Relatedly, Facebook is working on a React Compiler. This still leaves React as an EDSL, so I don't think it can eliminate all problems in the way this can, and it has to model JavaScript and React semantics, which seems very hard to do correctly. But it does address the same problem while being much more practical; the compiler detects when something is improperly done "the JavaScript way" and either rewrites it or warns the developer.

17
15
submitted 3 weeks ago* (last edited 3 weeks ago) by armchair_progamer to c/programming_languages

Rye is a high level, homoiconic dynamic programming language based on ideas from Rebol, flavored by Factor, Linux shell and Go. It's still in development, but we are focused on making it useful as soon as possible.

It's written in Go and could also be seen as Go's scripting companion as Go's libraries are very easy to integrate, and Rye can be embedded into Go programs as a scripting or a config language.

I believe that as a language becomes higher level it starts bridging the gap towards user interfaces. Rye has great emphasis on interactive use (Rye console) where we intend to also explore that.

GitHub

From Wikipedia:

A language is homoiconic if a program written in it can be manipulated as data using the language…This property is often summarized by saying that the language treats code as data.

The classic example of homoiconicity is LISP. Rye borrows a lot from LISP: there are no keywords (if, for, extends are functions), mutating functions end with !, and the homepage highlights DSLs (“dialects”) and flexibility. But not many parentheses.

Rebol, the mentioned inspiration, is its own rabbit hole. Like Rye, it touts simplicity, homoiconicity, and support for DSLs. Unlike Rye, it’s very old (although the site’s still being maintained): check out this UI.

18
29
submitted 3 weeks ago* (last edited 3 weeks ago) by armchair_progamer to c/programming_languages

In this post I want to make this kind of simplicity more precise and talk about some reasons it's important. I propose five key ideas for simple programming languages: ready-at-hand features, fast iteration cycles, a single way of doing things, first-order reasoning principles, and simple static type systems. I discuss each of these at length below.

Aside: simplicity in languages is interesting. I'd say most popular languages, from Rust and Haskell to Python and JavaScript, are not simple. Popular PL research topics, such as linear types and effect systems, are also not simple (I suppose all the simple concepts have already been done over and over).

Making a simple language which is also practical requires a careful selection of features: powerful enough to cover all of the language's possible use cases, but not so powerful that they encourage over-engineered or unnecessarily clever (hard-to-understand) solutions (e.g. metaprogramming). The simplest languages tend to be DSLs with very specific use cases, and the least simple ones tend to have so much complexity that people write simpler DSLs in them. But then, many simple DSLs become complex in aggregate, both to implement and to learn... so once again, it's a balance of "which features have the broadest use cases while remaining easy to reason about?"

19
14
submitted 4 weeks ago by armchair_progamer to c/programming_languages

Constraint programming is a general method to solve logic problems: it models the problem as a set of constraints (e.g. 0 < x, y > 10, x + 2y = 24), then uses a constraint solver to find possible solutions (x = 2 & y = 11). Constraint solving is used very often in compilers. This article focuses on compiler backends like LLVM which generate assembly from low-level IR (a process called "lowering"). It presents multiple examples, with code, of constraint solvers being used, and goes into great detail, reflecting the author's personal experience.

* Constraint solving is also used in the frontend, particularly in type inference (including Hindley-Milner)
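To make the toy constraints above concrete, here is a naive brute-force search over a finite domain in Haskell (just a sketch of what a solution looks like; a real constraint solver propagates and prunes rather than enumerating):

-- Find all (x, y) in 0..100 satisfying 0 < x, y > 10, x + 2y = 24.
solutions :: [(Int, Int)]
solutions =
  [ (x, y)
  | x <- [0 .. 100], y <- [0 .. 100]
  , 0 < x, y > 10, x + 2 * y == 24
  ]
-- solutions == [(2, 11)]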

20
11
submitted 1 month ago* (last edited 1 month ago) by armchair_progamer to c/programming_languages

Verse (paper/slides, language reference) is the "language for the metaverse" being developed by Epic Games and some very well-known PL researchers.

It has some very ambitious and non-traditional features: first-class types, effects, non-strict evaluation, choice (kind of like non-determinism), and transactional memory. Transactional memory means that a Verse computation (transaction) can be aborted early, and all effects produced by the transaction will be reverted.
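For readers who haven't met transactional memory before, Haskell's STM (not Verse, but the same core idea) shows the "commit everything or revert everything" behaviour:

import Control.Concurrent.STM

transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  writeTVar from (balance - amount)
  writeTVar to . (+ amount) =<< readTVar to
  -- if the balance would go negative, abort: the two writes above are rolled back
  check (balance >= amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)   -- commits atomically, or not at all
  print =<< readTVarIO a         -- 70
  print =<< readTVarIO b         -- 30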

Verse is also interoperable with C++, and part of that interop is that C++ functions called from Verse must also have their C++ effects reverted. Furthermore, this is for graphically intensive realtime games, so the rollback implementation must incur as little overhead as possible. That's what this article is about.

There's already some Verse code running in Fortnite and you can write Fortnite plugins in Verse today, but it's very WIP.

21
8
submitted 1 month ago by armchair_progamer to c/programming_languages

It's just one part, but the Futamura projections are techniques to "compile" interpreted programs using partial evaluation (a rough typed sketch follows the list below):

partialEval : (a : Function) -> a -- Specializes a partially-applied function: returns something with the same arity and semantics but faster
interpret prog inp -- Runs the interpreter on a program with input
  • The first Futamura projection turns an interpreted program into a compiled one by "partially evaluating" (AKA "specializing") the interpreter on the source code (e.g. hard-coding things that would remain constant during the code's execution): partialEval (interpret prog) inp
  • The second Futamura projection turns an interpreter into a compiler by partially evaluating the partial evaluator on the interpreter: partialEval (partialEval interpret) prog inp
  • The third Futamura projection partially evaluates the partial evaluator on itself, creating a "compiler compiler": a function which takes an interpreter and returns a compiler, or more precisely, takes an interpreter and returns something that takes a program and returns a compiled program, or more precisely, takes an interpreter and returns something that takes a program and returns something that takes an input and produces the same output the original program would, but faster: partialEval (partialEval partialEval) interpret prog inp
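Here is that rough typed sketch in Haskell (my notation; it uses the conventional two-argument specializer rather than the post's partially-applied-function signature, and partialEval is shortened to pe):

-- Assumed ideal specializer: semantically just partial application, but
-- imagined to return a faster, specialized residual function.
pe :: (a -> b -> c) -> a -> (b -> c)
pe f x = f x

type Source = String
type Input  = String
type Output = String

interpret :: Source -> Input -> Output
interpret = undefined   -- stand-in for some interpreter

-- 1st projection: specializing the interpreter on one program gives a compiled program.
compiledProg :: Source -> (Input -> Output)
compiledProg src = pe interpret src

-- 2nd projection: specializing the specializer on the interpreter gives a compiler.
compiler :: Source -> (Input -> Output)
compiler = pe pe interpret

-- 3rd projection: specializing the specializer on itself gives a compiler generator.
cogen :: (Source -> Input -> Output) -> (Source -> (Input -> Output))
cogen = pe pe pe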

More detailed explanation. An example of this in practice is Truffle in GraalVM.

22
8
submitted 1 month ago by armchair_progamer to c/programming_languages

Darklang seems to be some "canvas coding platform" like natto.dev, which also has cloud deployment and AI integration. Although:

We expect to fully open source Darklang, which is currently source available, early 2024

Regardless, this post describes some issues they faced and are attempting to solve.

23
7
submitted 1 month ago by armchair_progamer to c/programming_languages

This is similar to the earlier post Blazingly Fast™ Type Class Resolution with Tries. Both use tries for instance resolution, but the Agda post has more implementation details, due to the extra flexibility of Agda's instances (and it actually has an implementation).

24
21
submitted 1 month ago by armchair_progamer to c/programming_languages

Call-by-push-value is an evaluation strategy that determines when arguments to functions are evaluated. Call-by-value is what every mainstream language does: arguments are evaluated before the function is called. Call-by-name substitutes arguments directly into a function, so they may be evaluated multiple times or not at all. For example, the following pseudocode:

function foo(n, m) {
    sum = 0
    for i in 1 to 4 {
        sum = n + sum
    }
    if false {
        print(m)
    }
    print(sum)
}

foo({print("1"); 2}, {print("3"); 4})

evaluated with Call-by-Value prints:

1
3
8

evaluated with Call-by-Name prints:

1
1
1
1
8

Call-by-push-value combines both by having two "kinds" of parameters: values which are evaluated immediately (call-by-value), and computations which are substituted (call-by-name). So the following code:

function foo(value n, computation m) {
    sum = 0
    for i in 1 to 4 {
        sum = n + sum
    }
    if false {
        print(m)
    }
    print(sum)
}

foo({print("1"); 2}, {print("3"); 4})

would print

1
8

The reason call-by-push-value may be useful is that both call-by-name and call-by-value have their advantages, especially in the presence of side effects. Besides enabling programmers to write both traditional functions and custom loops/conditionals, CBPV is particularly useful as an IR for generating efficient code.

Currently, Scala has syntactic sugar for by-name parameters, and some languages like Kotlin and Swift make zero-argument closure syntax very simple (which does allow custom loops and conditionals, though it's debatable whether this is CBPV). Other languages like Rust and C have macros, which can emulate call-by-name, albeit not ideally (you get hygiene issues, and duplicating syntax makes compilation slower). I don't know of any mainstream work on CBPV on the IR side.
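As a rough illustration of that zero-argument-closure/by-name encoding, here is a Haskell sketch of the post's foo (my own toy, not real CBPV): the value parameter arrives already evaluated, while the computation parameter is an action that is passed along unevaluated.

-- `n` is a value (evaluated by the caller); `m` is a computation (only run if demanded).
foo :: Int -> IO () -> IO ()
foo n m = do
  let total = sum (replicate 4 n)   -- the loop: uses n's value four times
  if False then m else pure ()      -- the dead branch never runs the computation
  print total

main :: IO ()
main = do
  n <- do { putStrLn "1"; pure 2 }          -- value argument: evaluated once, prints "1"
  foo n (do { putStrLn "3"; print 4 })      -- computation argument: never run
-- prints "1" then "8", matching the call-by-push-value output above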

25
13
Using Go as a compiler backend? (self.programming_languages)
submitted 1 month ago by cobbweb to c/programming_languages

I'm writing a simple functional language with automatic memory management. Go's simplicity makes it seem like a good target for transpilation: garbage collection, a decent concurrency paradigm, generally simple/flexible, errors as values. I already know Go quite well, but I have no idea about IR formats (LLVM, etc.).

To be clear, using Go as a compiler backend would be a hidden implementation detail and there would be no user-level interop features. I'd like to bundle the Go compiler in my own compiler to save end-user headaches, but I'm not sure how feasible that is. Once my language is stable enough for self-hosting, I'd roll my own backend (likely using Cranelift).
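To make the idea concrete, the kind of thing I have in mind is basically pretty-printing Go source from my AST, something like this toy sketch (a made-up expression type, not my actual compiler):

-- Toy expression language "compiled" to Go by printing Go source.
data Expr
  = IntLit Int
  | Var String
  | Add Expr Expr
  | Let String Expr Expr   -- let x = e in body

emit :: Expr -> String
emit (IntLit n)     = show n
emit (Var x)        = x
emit (Add a b)      = "(" ++ emit a ++ " + " ++ emit b ++ ")"
emit (Let x e body) =
  -- an immediately-invoked func, so a let can sit in expression position
  "func() int { " ++ x ++ " := " ++ emit e ++ "; return " ++ emit body ++ " }()"

program :: Expr -> String
program e = unlines
  [ "package main"
  , ""
  , "import \"fmt\""
  , ""
  , "func main() { fmt.Println(" ++ emit e ++ ") }"
  ]

-- putStr (program (Let "x" (IntLit 2) (Add (Var "x") (IntLit 40))))
-- prints a Go program that outputs 42; Go's GC manages the bindings for free.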

Pros

  • Can focus on my language, and defer learning about compiler backends
  • In particular, I wouldn't have to figure out automatic memory management
  • Could easily wrap Go's decent standard library, saving me from a lot of implementation grunt work
  • Would likely borrow a lot of the concurrency paradigm for my own language
  • Go's compiler is pretty speedy

Cons

  • Seems like an unconventional approach
  • Perception issues (thinking of Elm and its kernel code controversy)
  • Reduced runtime performance tunability (not too concerned about this TBH)
  • Runtime panics would leak the Go backend
  • Potential headaches from bundling the Go compiler (both technical and legal)
  • No idea how tricky it would be to re-implement the concurrency stuff in my own backend

So, am I crazy for considering Go as a compiler backend while I get my language off the ground?


Programming Languages


Hello!

This is the current Lemmy equivalent of https://www.reddit.com/r/ProgrammingLanguages/.

The content and rules are the same here as they are over there. Taken directly from the /r/ProgrammingLanguages overview:

This community is dedicated to the theory, design and implementation of programming languages.

Be nice to each other. Flame wars and rants are not welcomed. Please also put some effort into your post.

This isn't the right place to ask questions such as "What language should I use for X", "what language should I learn", and "what's your favorite language". Such questions should be posted in /c/learn_programming or /c/programming.

This is the right place for posts like the following:

See /r/ProgrammingLanguages for specific examples
