RonSijm

joined 2 years ago
[–] RonSijm 3 points 1 year ago

I think a couple of those points come down to the tech-lead writing a "Definition of Done"

1 - This is useful for junior members or new members to know what to expect. For example, the "Definition of Done" would include that a new function should be documented, refactored into clean code according to the coding standards, tested by QA, covered by unit tests, and released to production before it can be marked as done - etc

2 - When giving managers that don't know anything about coding an estimation - whether it's by you or by someone in your team - they can take the "Definition of Done" as a reference point. If a manager asks how long something will take, you don't just consider "Oh, I guess I can build this in a couple of days". Yea, ok, sure, you can build it to meet the manager's minimal requirements for the function to kinda work, but then it's messy code and untested - so if you keep in mind that there are loads of other meta-things to do besides just building the code, you can pretty much double your initial estimation

Otherwise you just accumulate more and more technical debt, and at some point your "just build it" estimation gets inflated, because for every change you have to touch lots of 1000-line files, figure out what the changes broke, fix that, see what fails next, and so on

And it would have been better in the long run if you had spent more time on it while you were working on the function

[–] RonSijm 1 points 1 year ago (2 children)
Are you actually trying to use apple_pay, or is that just an irrelevant error you're not expecting?

No, like I said, apple_pay is disabled ( willingly ) in the stripe dashboard, so I don’t know why the error mention even apply_pay…

Well it wasn't clear whether you were trying to use apple_pay and it magically worked in Firefox but not in Chrome, or whether Chrome incorrectly thinks you're trying to use apple_pay...

Have you explicitly declared which payment methods are allowed to be used in your script? Maybe if you haven't declared anything the browser just infers it somehow, and Firefox and Chrome might differ in how they infer the defaults
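
Something along these lines is what I mean - a minimal sketch (Stripe's Python library, placeholder key/amount/currency) of pinning the allowed payment methods on the PaymentIntent server-side, instead of leaving it up to whatever the client infers:

```python
# Sketch: explicitly restrict which payment methods the PaymentIntent accepts,
# instead of letting Stripe / the browser infer them. All values are placeholders.
import stripe

stripe.api_key = "sk_test_..."  # hypothetical test key

intent = stripe.PaymentIntent.create(
    amount=1999,                    # hypothetical amount in cents
    currency="eur",                 # hypothetical currency
    payment_method_types=["card"],  # cards only - no apple_pay / wallet methods
)
print(intent.id, intent.payment_method_types)
```

If the wallet methods are never offered on the intent in the first place, Chrome shouldn't have any apple_pay configuration left to complain about - in theory, anyway.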

[–] RonSijm 6 points 1 year ago

And the problem with startup is poor pay, with too much skill requirement.

Yea, that's the problem with startups, they're poor, so by their logic "we only have money for 1 person" - "so if we hire a full stack that does everything, that's cheapest." - "What is the cheapest dev? An intern / junior." - "So what if we get a junior full stack :bigbrain: "

And then they create a vacancy for a CEO that can build their entire start-up and label it a junior-full-stack

[–] RonSijm 9 points 1 year ago* (last edited 1 year ago) (2 children)

Almost like as if people are looking to hire a one-man army to handle the entire department.

Well yea, that's usually the point of full stack. If you want to do something like that, you probably want to work at a smaller-scale company... Like if you're in a team of 5 people, the situation arises of "Oh, a sysops thing needs to be done - who to ask? I guess @RonSijm (backend dev) is close enough..."

So to have a junior full stack is pretty counterintuitive. Otherwise the situation arises of "Oh, xyz needs to be done - who to ask? Well, we have a dedicated senior backend engineer, a dedicated senior front-end engineer, dedicated senior sysops... 'Oh, let me ask @[email protected], this junior full-stack'" - yea, no.

Why are you aiming to be an intern/early-career full-stack engineer? The only kind of company I can think of where something like that would fit is one with barely any IT, where you're just the "jack of all trades" go-to guy for IT stuff - so that you can be the one-man army that does everything

So honestly I'd focus on one area first - backend, frontend, dev/sys-ops - especially since you're mentioning:

I’ve wasted most of my time worrying about the stack

Yea, that gets even worse when you have to worry about the entire stack, and work with an entire stack of components you're not really familiar with. If you're at least somewhat senior in one part - let's say backend - at least you're in a position of "Ok, I have a backend that I'm comfortable with" - "Now let's see if I can make a frontend for it" - or - "Let's see if I can manage to dockerize this and host it somewhere."

And if you know the fundamentals of one stack-part first (data-structures, design patterns, best practices) - you can apply that knowledge to other areas and expand from there

[–] RonSijm 1 points 1 year ago

Do you mean their code is already setup with some kind of output to terminal that you can use to add a unit test into as well?

I don’t even recall what I was messing with awhile back, I think it was Python, but adding a simple print test didn’t work. I have no idea how they were redirecting print(), but that was a wall I didn’t get past at the time.

Yea, probably not every language has a formal concept of unit tests, but the idea is basically: test code.

Like if you have a calculator, there would be a test (outside of the real project) of like
If Calculator.Calculate(2 + 2) then assert outcome = 4

That way - if, let's say, the calculator only does + operations - you could still copy that test line, and create a new test of
If Calculator.Calculate(5 * 5) then assert outcome = 25

Your test will fail initially, but you can just run through it in a debugger, step into the code, figure out where it's most appropriate to add a * operator function, implement it, and see your test succeed.
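
In an actual language it would look something like this - a Python sketch with a made-up calculate() function and plain asserts (pytest would pick up the test_* functions), purely to show the idea:

```python
# Toy example: a calculator that initially only supported "+".
def calculate(expression: str) -> int:
    left, operator, right = expression.split()
    if operator == "+":
        return int(left) + int(right)
    if operator == "*":  # the new feature you're adding
        return int(left) * int(right)
    raise ValueError(f"unsupported operator: {operator}")


# Existing test you can copy from...
def test_addition():
    assert calculate("2 + 2") == 4


# ...and the new test for your change. It fails until "*" is implemented,
# and you can step through it in a debugger to find where to hook in.
def test_multiplication():
    assert calculate("5 * 5") == 25
```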

Another benefit is that if you submit your change as a PR, the repo maintainer doesn't have to determine whether your code actually works just by looking at it, or by actually running the calculator - your test proves you've added something useful that works (and that you didn't break the existing tests)

That stuff seems like a language on top of a language for me, and when it errors I get really lost.

If you're just programming for yourself without the intent to submit it as a PR, you can just throw away the linter file. But I mentioned it was good to have in a project because, if there are multiple people working on it, all with their own style, the code can become a mess quite fast

I get sick of something that annoys me and want to go in and fix the issue despite being completely unqualified, but naive enough to try.

Well, I mean, that's basically how all of this works, right? You start completely unqualified, mess around for a while, and after a while you're more qualified the next time...

With stuff like Marlin, I seem to like the hardware side of things.

Just messing around with stuff you like is a good way to learn - though in my experience, doing anything with hardware is way more difficult than plain software. If you have to interface with hardware it's very often pretty obscure stuff, like sending the correct hardware instructions to a driver, or even to raw "hardware pins"... So trying to modify a driver as a kind of starter project doesn't sound like something I'd recommend

[–] RonSijm 0 points 1 year ago

You're getting downvoted, but it's not the worst idea to ask it in the general sense:

Q: In which cases is denormalization of SQL tables a good idea? Especially PostgreSQL?

A:

Denormalization of SQL tables, including PostgreSQL, might be a good idea in the following cases:

  1. Read-Heavy Workloads: When your system mostly reads data from tables rather than writing or updating them, denormalization might enhance the performance of the system. It does this by minimizing the number of joins required to fetch the data.
  2. Performance Optimization: Denormalization can speed up data retrieval times by storing related data in a single table, reducing the need for complex, time-consuming joins.
  3. Complex Queries: If the database is dealing with extremely complicated queries, then making these queries simple, clear, and less complicated can be managed by denormalizing the tables.
  4. Reporting and Analytics: In a reporting database or data warehouse, denormalization helps in faster data retrieval and simplifies queries. It can also make it easier to process aggregated data and generate reports.
  5. Data Redundancy: To increase data redundancy, you might consider denormalization. It helps in data recovery as well.
  6. Schema Simplicity: Denormalization can also simplify the application code by reducing the number of tables that need to be addressed.

Remember, with denormalization comes redundancies and anomalies, more complex updates/inserts/deletes due to redundant data, along with increased storage costs due to redundancy. So the decision to denormalize should be made carefully considering all these aspects. It's often a balancing act between improved read performance versus drawbacks like increased storage requirements and potential data inconsistencies.
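
As a toy illustration of points 1 and 2 (hypothetical tables, sqlite through Python just to keep it self-contained): instead of joining orders to customers on every read, you copy the customer name onto the order row and accept the duplication:

```python
# Toy illustration of denormalization: duplicate customer_name onto the orders
# table so read queries don't need a join. Schema and data are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        customer_name TEXT,          -- denormalized copy
        total_cents INTEGER
    );
    INSERT INTO customers VALUES (1, 'Alice');
    INSERT INTO orders VALUES (1, 1, 'Alice', 4200);
""")

# Normalized read: needs a join.
joined = db.execute(
    "SELECT o.id, c.name, o.total_cents FROM orders o "
    "JOIN customers c ON c.id = o.customer_id"
).fetchall()

# Denormalized read: single-table scan, no join - but updating a customer's
# name now means updating every matching order row as well.
flat = db.execute("SELECT id, customer_name, total_cents FROM orders").fetchall()

print(joined, flat)
```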

[–] RonSijm 1 points 1 year ago (4 children)

Do you have more context? Are you actually trying to use apple_pay, or is that just an irrelevant error you're not expecting?

When you say "everything work flawlessly under Firefox" - did you actually go through the entire process and eventually receive your payment? Or do you just not get any errors there? Cause it would seem kinda weird as well if the Stripe dashboard says apple_pay is not enabled, but apple_pay does work...

Are you targeting the same Stripe account, and both in the same mode - "Test Mode" vs "Live Mode" - in both browsers? "Test Mode" might complain less about permissions or something - just making sure

[–] RonSijm 10 points 1 year ago* (last edited 1 year ago) (3 children)

Generally mostly by cyclomatic complexity:

  • How big are the methods overall

  • Do methods have a somewhat single responsibility

  • How is the structure - is everything interconnected and calling each other, or are there some levels of orchestration? (see the sketch below)

  • Do they have any basic unit tests, so that if I want to add anything, I can copy-paste some test with an entrypoint close to my modification to see how things are going

  • Bonus: they actually have linter configuration in their project, and consistent commonly used style guidelines

If the code-structure itself is good but the formatting is bad, I can generally just run the code through a linter that fixes all the formatting. That makes it easier to use, but probably not something I'd actually contribute PRs to
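
Roughly the shape I mean by "levels of orchestration" - small single-responsibility functions, with one coordinator on top that is the only place that knows the overall flow (all names hypothetical, Python just as an example):

```python
# Sketch of "orchestration": each function does one thing, and only run()
# knows how they fit together. Names and logic are made up for illustration.
def load_config(path: str) -> dict:
    return {"output": path + ".out"}   # stub: would normally read a config file

def read_input(config: dict) -> list[str]:
    return ["some", "records"]         # stub: would normally read real input

def transform(records: list[str]) -> list[str]:
    return [r.upper() for r in records]

def write_output(records: list[str], config: dict) -> None:
    print(config["output"], records)

def run(path: str) -> None:
    """Orchestrator: the only function that knows the overall flow."""
    config = load_config(path)
    records = read_input(config)
    write_output(transform(records), config)

run("data.txt")
```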

How do you learn to spot these situations before diving down the rabbit hole? Or, to put it another way, what advice would you give yourself at this stage of the learning curve?

Probably some kind of metric of "If I open this code in an IDE and add my modification, how long will it take before I can find a suitable entrypoint, and how long before I can test my changes?" - if it's like half a day of debugging and diagnostics before I can even get started trying to change anything, it seems a bit tedious

Edit: Though also, how much time is this going to save you if you do implement it? If it saves you weeks of work once you have this feature, but it takes a couple of days, I suppose it's worth going through some tedious stuff.

But then again, I'd also check: are there other similar libraries with higher-scoring "changeability metrics"?

So in your specific case:

I wanted to modify Merlin 3d printer firmware

Is there any test with a mocked 3d printer to test this, or is this a case of compiling custom firmware, installing it on your actual printer, and potentially bricking it if the firmware is broken - etc

[–] RonSijm -4 points 1 year ago (4 children)

Ok, sure. So in a tech race, if energy is a bottleneck - and we'd be pouring $7tn into tech here - don't you think some of the improvements would be to Power Usage Effectiveness (PUE - basically how much of a datacenter's power actually reaches the IT equipment instead of cooling and overhead) - or a better compute-per-watt ratio?

[–] RonSijm 1 points 1 year ago

What benefits to “AI supremacy” are there?

I wasn't saying there was any, I was saying there are benefits to the race towards it.

In the sense of - if you could pick any subject for world governments to be in an arms race over - "the first to the moon", "the first nuclear bomb" or "first hydrogen bomb", "the best tank", or "the fastest stealth bomber"

I think if you picked a "tech war" (AI in this case) - practically a race of who can build the lowest-nm fabs, the fastest hardware, and the best algorithms - at least you end up with innovations that are useful

[–] RonSijm 2 points 1 year ago (13 children)

For all our sakes, pray he doesn’t get it

It doesn't really go into why not.

If governments are going to be pouring money into something, I'd prefer it to be in the tech industry.

Imagine a cold-war / Oppenheimer situation where all the governments are scared that America / Russia / UAE will reach AI supremacy before {{we}} do? Instead of dumping all the moneyz into Lockheed Martin or Raytheon for better pew pew machines - we dump it into better semiconductor machinery, hardware advancements, and other stuff we need for this AI craze.

In the end we might not have a useful AI, but at least we've made progress in other things that are useful
