bitfucker

joined 8 months ago
[–] bitfucker 3 points 4 months ago

Can you imagine the antimatter colliding with the glass of the ISS? Must've been a nightmare.

[–] bitfucker 3 points 4 months ago

Hey, don't kinkshame, man. Just look at those stupid sexy v-bind.

[–] bitfucker 9 points 4 months ago (1 children)

The app has offline capabilities and lets you save articles to a named list. I use it as a reference when I forget something, or to save those list-type articles as a starting point when researching software to use. Or just generally as reading material when on the go (yes, I find reading Wikipedia articles entertaining).

[–] bitfucker 4 points 4 months ago (2 children)

My man, I think I have over a hundred tabs and saved Wikipedia articles alone that I always refer to when needed. The app works great for me.

[–] bitfucker 2 points 4 months ago (1 children)

Yeah, that's my point, ma dude. Current LLMs are ill-suited for programming tasks; the only reason it works at all is sheer coincidence (alright, maybe not sheer coincidence, I know it's all statistics and so on). A better approach to an LLM for programming would be a model that can transform/"translate" the natural language humans use into an AST, the representation computers work with that is still close to human language. But the problem is that to do such a task, the LLM needs to actually have an understanding of concepts from the natural language, which is debatable at best.
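
To make that concrete, here's a toy, hard-coded sketch (purely illustrative, not a real system) of what "natural language in, AST out" means, using Python's stdlib ast module as the target representation. The hard part a model would have to learn is doing this for arbitrary sentences:

```python
# Toy illustration only: one hard-coded English sentence mapped to a Python
# AST node via the stdlib ast module. A real model would have to learn this
# mapping for arbitrary language; the hard-coded rule just shows what the
# target representation looks like.
import ast

def nl_to_ast(sentence: str) -> ast.AST:
    if sentence == "add 1 and 2":  # hypothetical rule, not a real parser
        return ast.BinOp(left=ast.Constant(1), op=ast.Add(), right=ast.Constant(2))
    raise NotImplementedError("a real model would have to generalize here")

tree = ast.Expression(body=nl_to_ast("add 1 and 2"))
ast.fix_missing_locations(tree)
print(eval(compile(tree, "<nl>", "eval")))  # prints 3
```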

[–] bitfucker 2 points 4 months ago (3 children)

Yeah, for sure, since programming is also a language. But IMHO, the best way for a machine learning model to approach code is not as natural language text tokens but as its AST/machine representation. That way the model learns not only the token patterns but also the structure, since most programming languages are well defined.
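
If you want to see the difference between the two representations, the Python stdlib can show both for the same line of code (just an illustration of the two views a model could be trained on, not a claim about any particular model):

```python
# Same line of code seen two ways, both from the Python stdlib: a flat list
# of text tokens vs. a structured AST where precedence and nesting are explicit.
import ast
import io
import tokenize

src = "total = price * qty + tax"

# Text-token view: roughly what a model trained on raw source text sees.
tokens = [t.string for t in tokenize.generate_tokens(io.StringIO(src).readline)
          if t.string.strip()]
print(tokens)  # ['total', '=', 'price', '*', 'qty', '+', 'tax']

# Structural view: the tree makes "* binds tighter than +" part of the data.
print(ast.dump(ast.parse(src), indent=2))
```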

[–] bitfucker 11 points 4 months ago (5 children)

I've always said this in many forums, yet people can't accept that the best use case for an LLM is translation. Even for languages such as Japanese. There is a limit, for sure, but the same is true of human translation unless you add a lot more text to explain the nuance. At that point you need an essay to dissect the entire meaning of something, not just a translation.

[–] bitfucker 5 points 4 months ago

So does OSM data. Everyone can download the whole earth, but serving it and providing routing/path planning at scale takes a whole other set of skills and resources. It's a good thing that they are willing to open source their model in the first place.

[–] bitfucker 2 points 4 months ago (2 children)

While I get the sentiment, I can't help but picture the complaining customer going "I PAY your wages" at that statement.

[–] bitfucker 3 points 4 months ago

Dude, his point is that if you don't implement partial rendering on a big file, the browser has to work extra hard to render that shit. Not to mention if you add any interactivity on the client side, like variable highlighting that needs to be context aware for each language... that basically turns your browser into VSCode, and at that point you might as well launch the browser-based VSCode with the . shortcut.

It's not a matter of the server side of things, but rather the client side.
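
For what "partial rendering" means in practice, here's a minimal sketch of the windowing math (all names and numbers made up for illustration): compute which slice of the file is actually in the viewport and only hand that slice to the renderer, instead of building DOM nodes for everything.

```python
# Minimal sketch of viewport windowing: only the lines that can actually be
# seen (plus a little overscan) ever get rendered. Names/numbers are made up
# for illustration; a real viewer does this with DOM nodes, not strings.
def visible_slice(lines, scroll_px, viewport_px, line_height_px=18, overscan=5):
    first = max(scroll_px // line_height_px - overscan, 0)
    last = min((scroll_px + viewport_px) // line_height_px + overscan, len(lines))
    return first, lines[first:last]

file_lines = [f"line {i}" for i in range(200_000)]  # pretend this is a huge file
start, window = visible_slice(file_lines, scroll_px=90_000, viewport_px=900)
print(start, len(window))  # 4995 60 -- a few dozen lines, not 200k
```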
