[–] [email protected] 1 points 5 months ago (1 children)

I don’t want to infer types from my code. I’d rather infer the code from the types. Types are the spec: they are small and limited in expressiveness, while code is big and has infinitely more degrees of freedom. The bug surface area is smaller with types.

So it makes sense to use the types (simple, terse, constrained) to generate the code (big, unconstrained, longer to write, bug-prone). Inferring types from code is like building a complex machine without plans, and then using an X-ray diffractometer to extract plans from the physical object.

This is the argument.
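
To make that concrete, here is a minimal TypeScript sketch (the function names are invented for illustration): the more polymorphic the signature, the fewer implementations it admits, so the type does most of the specifying.

```typescript
// A fully generic signature admits essentially one sensible implementation:
// knowing nothing about A or B, the only way to produce a B[] is to apply
// f to each element of xs. Here the type really is the spec.
function map<A, B>(xs: A[], f: (a: A) => B): B[] {
  return xs.map(f);
}

// A concrete signature admits infinitely many implementations (trim,
// uppercase, reverse, hash...), so the type constrains far less.
function normalize(s: string): string {
  return s.trim();
}
```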

This comes back to a perennially forgotten and rediscovered fundamental truth about coding: it is much easier to write code than to read it.

The immediate corollary is that in any sufficiently large organization, you spend more time reading code than writing it.

Put it all together? Fractional-second gains in writing that carry meaningful costs in reading aren't worth it once you're operating at any kind of scale.

If you and your buddy are making little hobby projects, if you have a three-person dev team, if you're writing your own utility for personal use... I wouldn't expect these costs to become evident at that scale.

Again, this isn't to say type inference is intrinsically wrong; it's just that there is a trade-off, and in most professional environments it nets out as a negative for efficiency.

[–] porgamrer 3 points 5 months ago

I agree if we're talking at the granularity of function signatures, but beyond that I don't. Practically every language infers types when chaining expressions. Inference on local variables is often just a way of breaking up large expressions without forcing people to repeat obvious types.
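
A quick sketch of that distinction (TypeScript, with hypothetical names): keep the signature explicit, and let inferred locals break a chain into readable steps without restating types the reader can already see.

```typescript
function activeUserNames(users: { name: string; active: boolean }[]): string[] {
  // The signature above is explicit; the locals below are inferred.
  // Writing out `const active: { name: string; active: boolean }[]`
  // would only repeat what the chain already makes obvious.
  const active = users.filter(u => u.active);
  const names = active.map(u => u.name);
  return names.sort();
}
```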

As for inferring code from types, scrub the symbol names off any production Java code and see how much sense it makes. If you really go down this path, you're quickly going to start wanting refinement types or dependent types. Both are great research fields, but the harsh reality is that there's no evidence that either is production-ready, or that either solves readability problems.
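
To see why, consider this sketch (invented TypeScript functions): both have the same signature, so with the names scrubbed the types alone cannot tell you which behavior you are getting.

```typescript
// Identical signatures, opposite behaviors. Strip the names and an
// ordinary type system cannot distinguish them; saying more would
// take refinement or dependent types.
function keepBelow(xs: number[], limit: number): number[] {
  return xs.filter(x => x < limit);
}

function dropBelow(xs: number[], limit: number): number[] {
  return xs.filter(x => x >= limit);
}
```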

The best technologies for reading code are all about interactive feedback loops that let you query and explore. In some languages that loop is type-based, with features like dot-completion, go-to-definition, and hovering to see types and doc comments. Even just knowing whether the code compiles provides useful information.
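
For instance (a small TypeScript illustration, not from the original comment):

```typescript
interface Order { id: string; total: number }

function totalRevenue(orders: Order[]): number {
  // Hovering on `sum` shows `number`; dot-completion on `o` offers only
  // `id` and `total`; a typo like `o.totl` fails to compile. The reader
  // gets all of this feedback without ever running the code.
  return orders.reduce((sum, o) => sum + o.total, 0);
}
```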

In other languages, like Python and JavaScript, that feedback loop is value-based, and in some ways far richer and more powerful, but it suffers from being unavailable in most contexts. Most importantly, the absence of error messages is not a very useful signal there.

I am obviously no authority, but my honest opinion is that type inference is completely orthogonal to the questions that actually matter in code readability, so blaming it is silly.