qwertyasdef

joined 2 years ago
[–] qwertyasdef 4 points 1 year ago* (last edited 1 year ago)

Do you care about modeling the cells? If not, you could represent each row with just a number. When X plays, add 1 to all the rows that include the position they played, and when O plays, subtract 1. If any row reaches +3 or -3, that player wins (here "row" means any of the 8 winning lines: 3 rows, 3 columns, 2 diagonals).
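The counter idea above could be sketched in Rust roughly like this (a sketch only; the struct and function names are mine, not from the comment):

```rust
// Each of the 8 winning lines (3 rows, 3 columns, 2 diagonals) is a single
// counter. X adds 1 to every line through the played cell, O subtracts 1;
// a counter hitting +3 or -3 means that player completed the line.
const LINES: [[usize; 3]; 8] = [
    [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
    [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
    [0, 4, 8], [2, 4, 6],            // diagonals
];

struct Board {
    counts: [i8; 8],
}

impl Board {
    fn new() -> Self {
        Board { counts: [0; 8] }
    }

    /// Play at cell `pos` (0..9); `delta` is +1 for X, -1 for O.
    /// Returns true if this move wins. We return early on a win,
    /// which is fine because the game ends there.
    fn play(&mut self, pos: usize, delta: i8) -> bool {
        for (i, line) in LINES.iter().enumerate() {
            if line.contains(&pos) {
                self.counts[i] += delta;
                if self.counts[i].abs() == 3 {
                    return true;
                }
            }
        }
        false
    }
}

fn main() {
    let mut b = Board::new();
    // X plays the top row (0, 1, 2); O plays 3 and 4.
    b.play(0, 1);
    b.play(3, -1);
    b.play(1, 1);
    b.play(4, -1);
    assert!(b.play(2, 1)); // top-row counter reaches +3: X wins
    println!("X wins");
}
```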

As for rotation/reflection invariance, that seems more like a math problem than a Rust problem.

[–] qwertyasdef 2 points 1 year ago

I'm not sure this blog post makes the right comparison. Based on my admittedly limited experience, OCaml modules seem more comparable to Java classes than packages. They're both bundles of functions and data, except the module contains data types instead of being the data type itself. Classes have basically all the features of strong modules: separate compilation, signatures (interfaces), functors (generics), namespacing, and access control. These examples of OCaml modules are all things that would be implemented as a class in Java.

From this perspective, rather than Java lacking strong modules, it actually has them in the form of classes. It's OCaml which lacks (or doesn't need) an additional package system on top of its modules.

[–] qwertyasdef 2 points 1 year ago (1 children)

Oh wow I wasn't expecting that at all. I wonder if he'll stream his perspective?

[–] qwertyasdef 2 points 1 year ago

My main point is that PRQL makes no distinction. If you didn’t inspect that SQL output and already know about the difference between WHERE and HAVING, you would have no idea, because in PRQL they’re both just “filter”.

Hmm, I have to disagree here. PRQL has no distinction in keyword, but it does have a distinction in where the filter goes relative to the aggregation. Given that the literal distinction being made is whether the filter happens before or after the aggregation, PRQL's position-based distinction seems a lot clearer than SQL's keyword-based one. Instead of seeing two different keywords, remembering that one happens before the aggregation and the other after, and then deducing the performance impacts from that, you just immediately see that one filter comes before the aggregation and the other after, and deduce the performance impacts directly.
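As a sketch of that positional distinction (hedged: the table and column names are made up, and the exact braces/brackets vary across PRQL versions), both filters use the same keyword, and only their position relative to `aggregate` determines whether they compile to WHERE or HAVING:

```prql
from employees
filter country == "USA"          # before the aggregation → compiles to WHERE
group department (
  aggregate { avg_salary = average salary }
)
filter avg_salary > 100000       # after the aggregation → compiles to HAVING
```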

As far as removing arbitrary SQL features, I agree that that is its main advantage. However, I think either the developers or else the users of PRQL will discover that far fewer of SQL’s complexities are arbitrary than you might first assume.

That's fair, I was just thinking of things that frustrate me with SQL, but I admittedly haven't thought too hard about why things are that way.

[–] qwertyasdef 1 points 1 year ago (2 children)

What are the implications of WHERE vs HAVING? I thought the primary difference was that one happens before the aggregation and the other happens after, and all the other implications stem from that fact. PRQL's simplification, rather than obscuring that distinction, seems like a clearer and more reasonable way to express it.

I don't know if PRQL supports all SQL features, but I think it could while being less complex than SQL by removing arbitrary SQL complications: different keywords for WHERE vs HAVING, column aliases only being usable in certain places, needing to recompute a transformation to use it in multiple clauses, the rigid SELECT... FROM... WHERE... clause order, etc.

[–] qwertyasdef 2 points 1 year ago (5 children)

Why would you need to know the eccentricities of SQL? Shouldn't it be enough to just know PRQL? The generated SQL should have the same semantics as the PRQL source, unless the transpiler is buggy.

[–] qwertyasdef 1 points 1 year ago

Agreed, smartness is about what it can do, not how it works. As an analogy, if a chess bot could explore the entire game tree hundreds of moves ahead, it would be pretty damn smart (easily the best in the world, probably strong enough to solve chess) despite just being dumb minimax plus absurd amounts of computing power.

The fact that ChatGPT works by predicting the most likely next word isn't relevant to its smartness except as far as its mechanism limits its outputs. And predicting the most likely next word has proven far less limiting than I expected, so even though I can think of lots of reasons why it will never scale to true intelligence, how could I be confident that those are real limits and not just me being mistaken yet again?

[–] qwertyasdef 8 points 1 year ago (4 children)

It's a Substack thing, not added by the author

[–] qwertyasdef 15 points 1 year ago (3 children)

Ask it a question about basketball. It looks through all documents it can find about basketball...

I get that this is a simplified explanation but want to add that this part can be misleading. The model doesn't contain the original documents and doesn't have internet access to look up the documents (though that can be added as an extra feature, and even then it's used more as a source to show humans than something for the model to learn from on the fly). The actual word associations are all learned during training, and during inference it just uses the stored weights. One implication of this is that the model doesn't know about anything that happened after its training data was collected.

[–] qwertyasdef 1 points 1 year ago (1 children)

Oh shit that sounds useful. I just did a project where I implemented a custom stream class to chain together calls to requests and beautifulsoup.

[–] qwertyasdef 2 points 1 year ago

Thanks for the embed preview trick. I didn't mind needing a Spotify account but then it refused to play it in the browser and wanted me to download their app instead, which was too much for something I have no intention of ever using again.

[–] qwertyasdef 1 points 1 year ago

Also Vector2f instead of Vector3f for the cross product example. I'll give them the benefit of the doubt that that's a typo instead of them not knowing what a cross product is.
