RonSijm

joined 1 year ago
[–] RonSijm 18 points 8 months ago (1 children)

and, perhaps more critically, some Chinese GPU makers from utilizing CUDA code with translation layers.

As if that ever deterred China from violating copyrights or trademarks. Maybe if they're huge companies that want to export - but if they're just making in-country chips, especially ones useful to the Chinese government, these companies aren't going to change anything based on some license warning

[–] RonSijm 7 points 8 months ago* (last edited 8 months ago)

A scope is already implied by brackets. For example, a namespace, a class, a method, and an if block are each scopes too.

So I don't really see why you'd want an explicit scope keyword inside methods, when all other scopes are implied... That just creates an inconsistency with the other implied scopes
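
A small C# sketch of what I mean (everything here is made up) - each pair of brackets already introduces its own scope:

```csharp
namespace MyApp                         // scope: namespace
{
    class Counter                       // scope: class
    {
        void Increment(int amount)      // scope: method
        {
            if (amount > 0)             // scope: if block
            {
                int temp = amount;      // 'temp' only exists inside this block
            }
            // 'temp' is already out of scope here - no extra keyword needed
        }
    }
}
```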

[–] RonSijm 1 points 8 months ago (1 children)

It depends on the language, since you mentioned you don't want to do manual testing -

Start with a mono-repo, as in, 1 repo where you add every other repo as a git submodule

Then, every time something changes you run that repo through the build server, and validate that it at least compiles.

If it compiles, you can go a step further and build something that detects changes - for example by parsing the syntax tree of everything that changed, then checking the syntax tree of the entire project to see which other methods / objects might be affected. In dotnet you'd do this with a Roslyn Analyzer
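
A rough sketch of the syntax-tree part (assuming the Microsoft.CodeAnalysis.CSharp package; the Api class is just made-up input):

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class ChangeDetector
{
    static void Main()
    {
        // Parse one changed file into a syntax tree
        var tree = CSharpSyntaxTree.ParseText(@"
            class Api
            {
                public int Add(int a, int b) => a + b;
                public int Sub(int a, int b) => a - b;
            }");

        // Collect the declared method names - comparing this set
        // between two commits is a crude 'what changed' signal
        var methods = tree.GetRoot()
            .DescendantNodes()
            .OfType<MethodDeclarationSyntax>()
            .Select(m => m.Identifier.Text);

        foreach (var name in methods)
            Console.WriteLine(name); // Add, Sub
    }
}
```

Diffing that method list between commits is obviously crude - a real analyzer would use the semantic model to find actual call sites - but that's the general idea.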

[–] RonSijm 4 points 8 months ago

What if you do it ironically? Like calling yourself a Code Ninja Jedi 10x Rockstar 🚀?

[–] RonSijm 1 points 8 months ago (2 children)

I've never really needed to use rebase, but my workflow is probably kinda weird:

  • I just start programming from the dev branch
  • At some point once I have stuff to commit, I do git checkout -b new_feature_branch, which moves my changes to a new branch
  • I commit a bunch of stuff into that branch, usually just with commit messages of "working on new_feature"
  • Once I'm done, and I have - let's say - 10 commits in that branch:
  • I do git reset --soft HEAD~10, meaning all 10 commits are undone and their changes end up staged again
  • I now do 1 new commit of all the changes, with a decent commit message to explain the new feature
  • I git push -f the new commit back to origin (of feature branch)
  • I PR the feature branch to dev, and merge it

It works pretty well for me, but I was told it's weird and that I should rebase instead

[–] RonSijm 2 points 9 months ago
  1. He doesn’t mention performance impacts, but I suspect this would impact performance.

It looks like he's using an Analyzer to generate interceptors. How those interceptors work is explained pretty well here: https://www.youtube.com/watch?v=91xir2oUQPg

So performance-wise, this shouldn't have any more impact than manually wiring in loggers at every level - it might slow down the compile time a little bit though.

A project that does something very similar is Mediator.SourceGenerator - an alternative to MediatR - though that one generates mediators instead of logging

The "original" way of doing something like this was by using Castle DynamicProxy and creating an interceptor at runtime, which affects runtime performance more than doing it at compile time
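
For comparison, a minimal runtime-interception sketch (the Calculator / LoggingInterceptor names are made up; IInterceptor and ProxyGenerator are Castle's actual types):

```csharp
using System;
using Castle.DynamicProxy;

// Runtime interception: every virtual method call on the proxy
// passes through Intercept(), which logs around the real call
public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine($"Entering {invocation.Method.Name}");
        invocation.Proceed(); // invoke the actual method
        Console.WriteLine($"Leaving {invocation.Method.Name}");
    }
}

public class Calculator
{
    public virtual int Add(int a, int b) => a + b;
}

class Program
{
    static void Main()
    {
        var generator = new ProxyGenerator();
        var calc = generator.CreateClassProxy<Calculator>(new LoggingInterceptor());
        calc.Add(2, 3); // logged via the runtime proxy
    }
}
```

Note the method has to be virtual for a class proxy to intercept it, and every call goes through the proxy at runtime - that's the overhead the compile-time approaches avoid.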

And another alternative way of doing the same thing would be using Fody - creating logging attributes or interfaces, and then using Fody to wire in the logging. Fody does it post-compile through an IL weaver, though.

So this Roslyn source generator weaving approach is basically the best way to do this

[–] RonSijm 3 points 9 months ago

I think a couple of those points come down to the tech lead writing a "Definition of Done"

1 - This is useful for junior members or new members, so they know what to expect. For example, the "Definition of Done" would say that before a new function counts as done, it should be documented, refactored into clean code per the coding standards, tested by QA, covered by unit tests, and released to production - etc

2 - When giving an estimate to managers who don't know anything about coding - whether the estimate comes from you or from someone in your team - they can take the "Definition of Done" as a reference point. If a manager asks how long something will take, you don't just think "Oh, I guess I can build this in a couple of days". Yea, ok, sure - you can build it so it meets the manager's minimal requirements and kinda works, but it's messy code and untested. If you keep in mind that there are loads of other meta-things to do besides just writing the code, you can pretty much double your initial estimate

Otherwise you just accumulate more and more technical debt, and at some point your "just build it" estimates get inflated anyway, because for every change you have to touch lots of 1000-line files, figure out what broke, fix that, see what fails next, etc etc

And it would have been better in the long run if you had spent more time on it while you were working on the function

[–] RonSijm 1 points 9 months ago (2 children)
Are you actually trying to use apple_pay, or is that just an irrelevant error you're not expecting?

No, like I said, apple_pay is disabled ( willingly ) in the stripe dashboard, so I don't know why the error even mentions apple_pay…

Well, it wasn't clear whether you were trying to use apple_pay and it magically worked in Firefox but not in Chrome, or whether Chrome incorrectly thinks you're trying to use apple_pay...

Have you explicitly declared which payment methods are allowed in your script? Maybe if you haven't declared anything, the browser just infers it somehow, and Firefox and Chrome might differ in how they infer the default values
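
For example, something like this on the server with the Stripe.net SDK - just a sketch with made-up values, assuming StripeConfiguration.ApiKey is set elsewhere - pins the PaymentIntent to card-only, so there's nothing left for the browser to infer:

```csharp
using System.Collections.Generic;
using Stripe;

class PaymentSetup
{
    // Sketch: restrict the PaymentIntent to card payments, so wallets
    // like apple_pay are never offered, regardless of browser heuristics
    static PaymentIntent CreateIntent()
    {
        var options = new PaymentIntentCreateOptions
        {
            Amount = 1000,       // made-up amount, in cents
            Currency = "usd",    // made-up currency
            PaymentMethodTypes = new List<string> { "card" },
        };
        return new PaymentIntentService().Create(options);
    }
}
```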

[–] RonSijm 6 points 9 months ago

And the problem with startup is poor pay, with too much skill requirement.

Yea, that's the problem with startups, they're poor, so by their logic "we only have money for 1 person" - "so if we hire a full stack that does everything, that's cheapest." - "What is the cheapest dev? An intern / junior." - "So what if we get a junior full stack :bigbrain: "

And then they create a vacancy for a CEO that can build their entire start-up and label it a junior-full-stack

[–] RonSijm 9 points 9 months ago* (last edited 9 months ago) (2 children)

Almost like as if people are looking to hire a one-man army to handle the entire department.

Well yea, that's usually the point of full stack. If you want to do something like that, you probably want to work at a smaller-scale company... Like if you're in a team of 5 people, the situation arises of "Oh, a sysop thing needs to be done, who to ask? I guess @RonSijm (backend dev) is close enough..."

So to have a junior full stack is pretty counterintuitive. Otherwise the situation arises of "Oh, xyz needs to be done, who to ask? Well, we have a dedicated senior backend engineer, a dedicated senior front-end engineer, dedicated senior sysops... 'Oh, let me ask @[email protected], this junior full-stack'" - yea, no.

Why are you aiming to be an intern / early-career full-stack engineer? The only kind of company I can think of where something like that would work is one with barely any IT, where you're just the jack-of-all-trades go-to guy for IT stuff - so that you can be the one-man army that does everything

So honestly I'd focus on one area first - backend, frontend, dev/sys-ops - especially since you mention:

I’ve wasted most of my time worrying about the stack

Yea, that gets even worse when you have to worry about the entire stack, and work with an entire stack of components you're not really familiar with. If you're at least somewhat senior in one part - let's say backend - at least you're in a position of "Ok, I have a backend that I'm comfortable with" - "Now let's see if I can make a frontend for it" - or - "Let's see if I can manage to dockerize this, and host it somewhere."

And if you know the fundamentals of one part of the stack first (data structures, design patterns, best practices), you can apply that knowledge to other areas and expand from there

[–] RonSijm 1 points 9 months ago

Do you mean their code is already setup with some kind of output to terminal that you can use to add a unit test into as well?

I don’t even recall what I was messing with awhile back, I think it was Python, but adding a simple print test didn’t work. I have no idea how they were redirecting print(), but that was a wall I didn’t get past at the time.

Yea, probably not every language has a concept of unit tests, but it's basically just test code.

Like if you have a calculator, there would be a test (outside of the real project) of like
If Calculator.Calculate(2 + 2) then assert outcome = 4

That way - if, let's say, the calculator only does + operations - you could still copy that test line, and create a new test of
If Calculator.Calculate(5 * 5) then assert outcome = 25

Your test will fail initially, but you can just run through it in a debugger, step into the code, figure out where it's most appropriate to add a * operator function, implement it, and see your test succeed.
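
In C# with xUnit, that would look something like this - the whole Calculator is a made-up toy here, just to show the shape of it:

```csharp
using System;
using Xunit;

// Toy calculator that only knows '+' so far - the second test fails
// until someone implements '*', mirroring the workflow described above
public static class Calculator
{
    public static int Calculate(int left, char op, int right) =>
        op switch
        {
            '+' => left + right,
            _ => throw new NotSupportedException($"Operator '{op}' not implemented"),
        };
}

public class CalculatorTests
{
    [Fact]
    public void Addition_Works()
    {
        Assert.Equal(4, Calculator.Calculate(2, '+', 2));
    }

    [Fact]
    public void Multiplication_NotYetImplemented()
    {
        // Run this under the debugger, step into Calculate,
        // and add a '*' case to make it pass
        Assert.Equal(25, Calculator.Calculate(5, '*', 5));
    }
}
```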

Another benefit is that if you submit your change as a PR with the test, the repo maintainer doesn't have to determine whether your code actually works just by looking at it, or by running the calculator themselves: your test proves you've added something useful that works (and that you didn't break the existing tests)

That stuff seems like a language on top of a language for me, and when it errors I get really lost.

If you're just programming for yourself, without the intent to submit it as a PR, you can just throw away the linter file. But I mentioned it was good to have in a project because when multiple people work on it, each with their own style, the code can become a mess quite fast

I get sick of something that annoys me and want to go in and fix the issue despite being completely unqualified, but naive enough to try.

Well, I mean, that's basically how all of this works, right? You start completely unqualified, mess around for a while, and after a while you're more qualified the next time...

With stuff like Marlin, I seem to like the hardware side of things.

Just messing around with stuff you like is a good way to learn - though in my experience, doing anything with hardware is way more difficult than plain software. If you have to interface with hardware it's very often pretty obscure stuff, like sending the correct hardware instructions to a driver, or even to raw "hardware pins"... So trying to modify a driver as a kind of starter project doesn't sound like something I'd recommend

[–] RonSijm 0 points 9 months ago

You're getting downvoted, but it's not the worst idea to ask it in the general sense:

Q: in which cases is denormalization of sql tables a good idea? especially PostgreSQL?

A:

Denormalization of SQL tables, including PostgreSQL, might be a good idea in the following cases:

  1. Read-Heavy Workloads: When your system mostly reads data from tables rather than writing or updating them, denormalization might enhance the performance of the system. It does this by minimizing the number of joins required to fetch the data.
  2. Performance Optimization: Denormalization can speed up data retrieval times by storing related data in a single table, reducing the need for complex, time-consuming joins.
  3. Complex Queries: If the database is dealing with extremely complicated queries, denormalizing the tables can make those queries simpler and clearer.
  4. Reporting and Analytics: In a reporting database or data warehouse, denormalization helps in faster data retrieval and simplifies queries. It can also make it easier to process aggregated data and generate reports.
  5. Data Redundancy: To increase data redundancy, you might consider denormalization. It helps in data recovery as well.
  6. Schema Simplicity: Denormalization can also simplify the application code by reducing the number of tables that need to be addressed.

Remember, with denormalization comes redundancies and anomalies, more complex updates/inserts/deletes due to redundant data, along with increased storage costs due to redundancy. So the decision to denormalize should be made carefully considering all these aspects. It's often a balancing act between improved read performance versus drawbacks like increased storage requirements and potential data inconsistencies.
