
Concatenative Programming


Hello!

This space is for sharing news, experiences, announcements, questions, showcases, etc. regarding concatenative programming concepts and tools.

We'll also take any programming described as concatenative-ish, chain-y, pipe-y, uniform function call syntax, etc.


From Wikipedia:

A concatenative programming language is a point-free computer programming language in which all expressions denote functions, and the juxtaposition of expressions denotes function composition. Concatenative programming replaces function application, which is common in other programming styles, with function composition as the default way to build subroutines.

For example, a sequence of operations in an applicative language like the following:

y = foo(x)
z = bar(y)
w = baz(z)

...is written in a concatenative language as a sequence of functions:

x foo bar baz
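
A rough Python analogue of that pipeline, for anyone who wants something runnable. Python is applicative rather than concatenative, and compose, foo, bar, and baz below are invented purely for illustration; the point is only that the pipeline is built by composing functions instead of nesting calls.

from functools import reduce

def foo(x): return x + 1   # placeholder stages, invented for this sketch
def bar(x): return x * 2
def baz(x): return x - 3

def compose(*fns):
    # left-to-right composition: compose(f, g, h)(x) == h(g(f(x)))
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

pipeline = compose(foo, bar, baz)   # juxtaposing the names builds the composition
print(pipeline(10))                 # 19, same as baz(bar(foo(10)))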


0.1 + 0.2 | Roc (rtfeldman.com)
submitted 5 months ago (last edited 5 months ago) by Andy to c/concatenative
 

Discussion on lobsters: https://lobste.rs/s/oxjvv0/0_1_0_2

It's not that Roc only supports base-10 arithmetic. It also supports the typical base-2 floating-point numbers, because in many situations the performance benefits are absolutely worth the cost of precision loss. What sets Roc apart is its choice of default; when you write decimal literals like 0.1 or 0.2 in Roc, by default they're represented by a 128-bit fixed-point base-10 number that never loses precision, making it reasonable to use for calculations involving money.

In Roc, floats are opt-in rather than opt-out.
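
The contrast is easy to reproduce with Python's decimal module, which also does base-10 arithmetic. This is only an analogy: Python's Decimal is an arbitrary-precision, context-based type, not Roc's 128-bit fixed-point Dec.

from decimal import Decimal

print(0.1 + 0.2)                        # binary floats: 0.30000000000000004
print(Decimal("0.1") + Decimal("0.2"))  # base-10 arithmetic: 0.3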

top 3 comments
[–] Andy 2 points 5 months ago (1 children)

More quoted from the post:

Roc's Dec implementation (largely the work of Brendan Hansknecht—thanks, Brendan!) is essentially represented in memory as a 128-bit integer, except one that gets rendered with a decimal point in a hardcoded position. This means addition and subtraction use the same instructions as normal integer addition and subtraction. Those run so fast, they can actually outperform addition and subtraction of 64-bit base-2 floats!

Multiplication and division are a different story. Those require splitting up the 128 bits into two different 64-bit integers, doing operations on them, and then reconstructing the 128-bit representation. (You can look at the implementation to see exactly how it works.) The end result is that multiplication is usually several times slower than 64-bit float multiplication, and performance is even worse than that for division.

Some operations, such as sine, cosine, tangent, and square root, have not yet been implemented for Dec. (If implementing any of those operations for a fixed-point base-10 representation sounds like a fun project, please get in touch! People on Roc chat are always happy to help new contributors get involved.)
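
A small Python sketch of the "128-bit integer with a decimal point in a hardcoded position" idea quoted above. The 10^18 scale factor and the helper names are assumptions made for illustration; the real Dec implementation linked from the post is the authoritative reference.

from fractions import Fraction

SCALE = 10 ** 18   # assumed number of stored decimal places (illustrative only)

def to_fixed(s: str) -> int:
    # "0.1" -> 100000000000000000: an ordinary integer, so no binary rounding
    return int(Fraction(s) * SCALE)

def add(a: int, b: int) -> int:
    # addition and subtraction are plain integer operations
    return a + b

def mul(a: int, b: int) -> int:
    # multiplication picks up the scale factor twice, so it has to rescale
    return (a * b) // SCALE

total = add(to_fixed("0.1"), to_fixed("0.2"))
print(Fraction(total, SCALE))    # 3/10, i.e. exactly 0.3

product = mul(to_fixed("0.1"), to_fixed("0.2"))
print(Fraction(product, SCALE))  # 1/50, i.e. exactly 0.02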

[–] Andy 2 points 5 months ago

Even more:

If a function takes a number—whether it's an integer, a floating-point base-2 number, or a Dec—you can always write 5 as the number you're passing in. (If it's an integer, you'll get a compiler error if you try to write 5.5, but 5.5 will be accepted for either floats or decimal numbers.)

Because of this, it's actually very rare in practice that you'll write 0.1 + 0.2 in a .roc file and have it use the default numeric type of Dec. Almost always, the type in question will end up being determined by type inference—based on how you ended up using the result of that operation.

For example, if you have a function that says it takes a Dec, and you pass in (0.1 + 0.2), the compiler will do Dec addition and that function will end up receiving 0.3 as its argument. However, if you have a function that says it takes F64 (a 64-bit base-2 floating-point number), and you write (0.1 + 0.2) as its argument, the compiler will infer that those two numbers should be added as floats, and you'll end up passing in the not-quite-0.3 number we've been talking about all along.
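
Python has no comparable literal inference, but the practical difference between the two outcomes described above is easy to see with the decimal module standing in for Dec (an analogy, not Roc code):

from decimal import Decimal

print(0.1 + 0.2 == 0.3)                                   # False: the float sum is 0.30000000000000004
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True: the base-10 sum is exactly 0.3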

[–] FizzyOrange 1 points 5 months ago

Sounds crazy but apparently it uses type inference to convert them to float anyway.

Even so... the fact that 0.3 doesn't mean exactly 0.3 seems a lot easier to deal with than the fact that real numbers can now easily overflow.
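
To put rough numbers on that trade-off: if Dec is a signed 128-bit integer scaled by 10^18 (an assumption based on the description quoted above), its largest value is about 1.7e20, while a 64-bit float tops out around 1.8e308. A quick Python check:

import sys

SCALE = 10 ** 18                   # assumed Dec scale factor, as in the sketch above
dec_max = (2 ** 127 - 1) // SCALE  # about 1.7e20 if Dec is an i128 scaled by 10^18
print(dec_max)                     # 170141183460469231731
print(sys.float_info.max)          # 1.7976931348623157e+308

So overflow arrives far earlier with Dec than with F64, which is the trade-off the comment is pointing at.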