RonSijm

joined 1 year ago
[–] RonSijm 6 points 8 months ago

And the problem with startup is poor pay, with too much skill requirement.

Yea, that's the problem with startups: they're poor, so by their logic "we only have money for 1 person" - "so if we hire a full stack dev that does everything, that's the cheapest" - "what's the cheapest dev? An intern / junior" - "so what if we get a junior full stack :bigbrain:"

And then they create a vacancy for a CEO who can build their entire start-up, and label it "junior full-stack".

[–] RonSijm 9 points 8 months ago* (last edited 8 months ago) (2 children)

Almost like as if people are looking to hire a one-man army to handle the entire department.

Well yea, that's usually the point of full stack. If you want to do something like that, you probably want to work at a smaller-scale company... Like if you're in a team of 5 people, the situation arises of "Oh, a sysops thing needs to be done, who do we ask? I guess @RonSijm (backend dev) is close enough..."

So having a junior full stack is pretty counterintuitive. Otherwise the situation arises of "Oh, xyz needs to be done, who do we ask? - Well, we have a dedicated senior backend engineer, a dedicated senior front-end engineer, dedicated senior sysops... 'Oh, let me ask @[email protected], this junior full-stack'" - yea, no.

Why are you aiming to be an intern/early-career full-stack engineer? The only kind of company I can think of where something like that would make sense is one with barely any IT, where you're just the "jack of all trades" go-to guy for IT stuff - so that you can be the one-man army that does everything.

So honestly I'd focus on one area first - backend, frontend, dev/sysops - especially since you mention:

I’ve wasted most of my time worrying about the stack

Yea, that gets even worse when you have to worry about the entire stack, and work with an entire stack of components you're not really familiar with. If you're at least somewhat senior in one part - let's say backend - at least you're in a position of "Ok, I have a backend that I'm comfortable with" - "Now let's see if I can make a frontend for it" - or - "Let's see if I can manage to dockerize this and host it somewhere."

And if you know the fundamentals of one part of the stack first (data structures, design patterns, best practices), you can apply that knowledge to other areas and expand from there.

[–] RonSijm 1 points 8 months ago

Do you mean their code is already setup with some kind of output to terminal that you can use to add a unit test into as well?

I don’t even recall what I was messing with awhile back, I think it was Python, but adding a simple print test didn’t work. I have no idea how they were redirecting print(), but that was a wall I didn’t get past at the time.

Yea - probably not every language has a formal concept of unit tests, but it's basically test code.

Like if you have a calculator, there would be a test (outside of the real project) along the lines of:

If Calculator.Calculate(2 + 2) then assert outcome = 4

That way - if, let's say, the calculator only does + operations - you could still copy that test line and create a new test:

If Calculator.Calculate(5 * 5) then assert outcome = 25

Your test will fail initially, but you can just run through it in a debugger, step into the code, figure out where it's most appropriate to add a * operator function, implement it, and see your test succeed.
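
In an actual test framework that could look something like this (a minimal Python/pytest sketch - the `calculator` module and its API are made up for illustration):

```python
# test_calculator.py - minimal pytest sketch (hypothetical Calculator API)
from calculator import Calculator  # assumed module under test

def test_addition():
    # Existing behavior: + already works, so this test passes today
    assert Calculator().calculate("2 + 2") == 4

def test_multiplication():
    # New test: fails until you've implemented the * operator,
    # and proves your change works once it passes
    assert Calculator().calculate("5 * 5") == 25
```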

Another benefit is that if you submit your change as a PR, the repo maintainer doesn't have to determine whether your code actually works just by looking at it, or by actually running the calculator - your test proves you've added something useful that works (and that you didn't break the existing tests).

That stuff seems like a language on top of a language for me, and when it errors I get really lost.

If you're just programming for yourself, without the intent to submit a PR, you can just throw away the linter file. But I mentioned it's good to have in a project because, with multiple people working on it, each with their own style, the code can become a mess quite fast.
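
For reference, those files are usually tiny - e.g. a minimal ruff setup in pyproject.toml might look like this (the rule selection here is just an example, not a recommendation):

```toml
# pyproject.toml - minimal shared linter config (ruff, for a Python project)
[tool.ruff]
line-length = 100      # everyone formats to the same width

[tool.ruff.lint]
select = ["E", "F"]    # pycodestyle + pyflakes rule sets
```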

I get sick of something that annoys me and want to go in and fix the issue despite being completely unqualified, but naive enough to try.

Well, I mean, that's basically how everything works, right? You start completely unqualified, mess around for a while, and then you're the more-qualified one the next time...

With stuff like Marlin, I seem to like the hardware side of things.

Just messing around with stuff you like is a good way to learn - though in my experience, doing anything with hardware is way more difficult than plain software. If you have to interface with hardware, it's very often pretty obscure stuff, like sending the correct instructions to a driver, or even straight to "hardware pins"... So trying to modify a driver as a kind of starter project doesn't sound like something I'd recommend.

[–] RonSijm 0 points 8 months ago

You're getting downvoted, but it's not the worst idea to ask an LLM in the general sense:

Q: in which cases is denormalization of SQL tables a good idea? Especially PostgreSQL?

A:

Denormalization of SQL tables, including PostgreSQL, might be a good idea in the following cases:

  1. Read-Heavy Workloads: When your system mostly reads data from tables rather than writing or updating them, denormalization might enhance the performance of the system. It does this by minimizing the number of joins required to fetch the data.
  2. Performance Optimization: Denormalization can speed up data retrieval times by storing related data in a single table, reducing the need for complex, time-consuming joins.
  3. Complex Queries: If the database is dealing with extremely complicated queries, denormalizing the tables can make those queries simpler and clearer.
  4. Reporting and Analytics: In a reporting database or data warehouse, denormalization helps in faster data retrieval and simplifies queries. It can also make it easier to process aggregated data and generate reports.
  5. Data Redundancy: If you deliberately want redundant copies of data (which can also help with recovery), denormalization provides that.
  6. Schema Simplicity: Denormalization can also simplify the application code by reducing the number of tables that need to be addressed.

Remember, denormalization brings redundancy and potential anomalies: updates/inserts/deletes become more complex because the duplicated data has to be kept in sync, and storage costs go up. So the decision to denormalize should be made carefully, considering all these aspects. It's often a balancing act between improved read performance and drawbacks like increased storage requirements and potential data inconsistencies.
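
To make cases 1 and 2 concrete, the classic move is copying a frequently-joined column into the read-heavy table (a hedged SQL sketch - table and column names are made up):

```sql
-- Normalized: every order listing needs a join to fetch the customer name
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id;

-- Denormalized: customer_name is stored redundantly on orders, so the
-- read-heavy query skips the join (at the cost of keeping the copy in sync)
ALTER TABLE orders ADD COLUMN customer_name text;

SELECT id, customer_name FROM orders;
```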

[–] RonSijm 1 points 8 months ago (4 children)

Do you have more context? Are you actually trying to use apple_pay, or is that just an irrelevant error you're not expecting?

When you say "everything work flawlessly under Firefox" - did you actually go through the entire process and eventually receive your payment? Or do you just not get any errors there? Cause it would seem kinda weird if the Stripe dashboard says apple_pay is not enabled, but apple_pay does work...

Are you targeting the same Stripe account, and in the same mode - "Test Mode" vs "Live Mode" - in both browsers? "Test Mode" might complain less about permissions or something - just making sure.

[–] RonSijm 10 points 8 months ago* (last edited 8 months ago) (3 children)

Generally by cyclomatic complexity and related signals (there's a quick way to measure that in the sketch after this list):

  • How big are the methods overall

  • Do methods have a somewhat single responsibility

  • How is the structure - is everything interconnected and calling each other, or are there some levels of orchestration?

  • Do they have any basic unit tests, so that if I want to add anything, I can copy-paste some test with an entrypoint close to my modification to see how things are going

  • Bonus: they actually have a linter configuration in their project, and consistent, commonly-used style guidelines
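
For the complexity part, you don't have to eyeball it - for a Python codebase, something like the radon package can give a quick read (a sketch; radon is a real tool, but the file name is a placeholder):

```python
# complexity_check.py - quick cyclomatic-complexity scan with radon
# (pip install radon; "some_module.py" is a placeholder path)
from radon.complexity import cc_visit

with open("some_module.py") as f:
    source = f.read()

for block in cc_visit(source):
    # Each block is a function/method/class with a complexity score;
    # lower scores generally mean it's easier to modify safely
    print(f"{block.name}: complexity {block.complexity}")
```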

If the code structure itself is good but the formatting is bad, I can generally just run the code through a linter that fixes all the formatting. That makes it easier to use, but it's probably not a project I'd actually contribute PRs to.

How do you learn to spot these situations before diving down the rabbit hole? Or, to put it another way, what advice would you give yourself at this stage of the learning curve?

Probably some kind of metric of "If I open this code in an IDE and add my modification, how long will it take before I can find a suitable entrypoint, and how long before I can test my changes?" - if it's like half a day of debugging and diagnostics before I can even get started trying to change anything, it seems a bit tedious.

Edit: Though also, how much time is this going to save you if you do implement it? If the feature saves you weeks of work but takes a couple of days to build, I suppose it's worth going through some tedious stuff.

But then again, I'd also check: are there other similar libraries that score higher on these "changeability" metrics?

So in your specific case:

I wanted to modify Marlin 3d printer firmware

Is there any test with a mocked 3d printer to test this, or is it a case of compiling custom firmware, installing it on your actual printer, and potentially bricking it if the firmware is broken - etc. etc.

[–] RonSijm -4 points 8 months ago (4 children)

Ok, sure. So in a tech race, if energy is a bottleneck - and we'd be pouring $7tn into tech here - don't you think some of the improvements would go into power usage effectiveness (PUE), or a better compute-per-power ratio?
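
(For reference: PUE = total facility energy / energy delivered to the IT equipment - so a PUE of 1.2 means 20% overhead for cooling and power distribution, and lower is better. A better compute-per-power ratio means more useful work out of the same energy.)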

[–] RonSijm 1 points 8 months ago

What benefits to “AI supremacy” are there?

I wasn't saying there was any, I was saying there are benefits to the race towards it.

In the sense of: if you could pick any subject for world governments to be in a war about - "first to the moon", "the first nuclear bomb" or "first hydrogen bomb", "the best tank", or "the fastest stealth bomber" -

I think if you picked a "tech war" (AI in this case) - practically a race over who can build the lowest-nm fabs, the fastest hardware, and the best algorithms - at least you end up with innovations that are useful.

[–] RonSijm 2 points 8 months ago (13 children)

For all our sakes, pray he doesn’t get it

It doesn't really go into why not.

If governments are going to be pouring money into something, I'd prefer it to be in the tech industry.

Imagine a cold-war / Oppenheimer situation where all the governments are scared that America / Russia / UAE will reach AI supremacy before {{we}} do? Instead of dumping all the moneyz into Lockheed Martin or Raytheon for better pew pew machines - we dump it into better semiconductor machinery, hardware advancements, and other stuff we need for this AI craze.

In the end we might not have a useful AI, but at least we've made progress in other things that are useful.

[–] RonSijm 1 points 8 months ago (1 children)

https://github.com/awslabs/llrt/raw/main/benchmarks/llrt-ddb-put.png
https://github.com/awslabs/llrt/raw/main/benchmarks/node20-ddb-put.png

Maybe I'm just stupid, but what are these numbers?

"HTTP benchmarks measured in round trip time for a cold start"

Soo, I'm guessing it's round trip time in milliseconds?

What is p0 to p100? Are they putting 0 to 100 items? Are they putting 1 item into a dataset of size p..?

[–] RonSijm 1 points 8 months ago* (last edited 8 months ago) (1 children)

Well, @TheGrandNagus and @SSUPII - I think a lot of Firefox users are power users. And a lot of the non-power Firefox users, like my friends and family, are only using Firefox because I recommended it to them, and I installed all the appropriate extensions to optimize their browser experience.

So if Firefox alienates the power users - who is left? I'm gonna move on to Waterfox or LibreWolf, but those are even more next-level obscure browsers. My non-tech friends know about Chrome, Edge, and Firefox, so I can convince them to use one of those... but I kinda doubt I can get them to use LibreWolf. If I tell them Firefox sucks now too, they'll probably default to Chrome.
