douglasg14b

joined 2 years ago
[–] douglasg14b 1 points 1 year ago* (last edited 1 year ago) (2 children)

There is no context here though?

If this is a breaking change on a major upgrade path, like a major base UI lib change, then it might not be possible to break it down into pieces without tripling or quadrupling the work (which likely took a few folks all month to achieve already).

I remember migrating from Vue 1 to Vue 2 at a previous job, while also upgrading to an entirely new UI library. It required partial code freezes, and we figured it had to be done in one big push. Only 3 of us worked on it while the rest of the team kept up on maintenance & feature work.

The PR was something like 38k LOC of actual UI code, excluding package/lock files. It took the team a dedicated week and a half to review, piece by piece. We chewed through hundreds of comments during that time. It worked out really well, everyone was happy, and the timelines were even met early.

The same thing happened when migrating an ASP.NET (.NET Framework 4.x) codebase to .NET Core 3.1. We figured that bundling in major refactors during the process, to get the biggest bang for our buck, was the best move. It was something like 18k LOC, and it also worked out similarly well in the end.

Things like this happen, and not that infrequently depending on the org, and they work out just fine as long as you have a competent, well-organized team who can maintain a course for more than a few weeks.

[–] douglasg14b 1 points 1 year ago* (last edited 1 year ago) (1 children)

Just a few hundred?

That seems awfully short, no? We're talking a couple hours of good flow state; that may not even be a full feature at that point 🤔

We have folks who can push out 600-1k LOC covering multiple features/PRs in a day if they're having a great day and working somewhere they're proficient.

Never mind important refactors that might touch a thousand or a few thousand lines, might be pushed out on a daily basis, and need relatively fast turnarounds.

Essentially half the job of writing code is reviewing code; it really should be thought of that way.

(No, LOC is not a unit of performance measurement, but it can correlate.)

[–] douglasg14b 1 points 1 year ago (1 children)

Someone who shares their experiences gained from writing real world software, with introspection into the dynamics & struggles involved?

Your age (or mostly career progression, which is correlated) may actually be a reason you have no interest in this.

[–] douglasg14b 4 points 1 year ago* (last edited 1 year ago) (1 children)

Like most large conceptual practices, the pain comes when it's misused, mismanaged, and misunderstood.

DDD, like Agile, adds more to success than it detracts when applied as intended. That very success means others take it, try to use it as a panacea, and inappropriately apply their limited and misunderstood bastardization of it, to the opposite effect.

Which leads to devs incorrectly associating these concepts & processes with the pain they have, instead of recognizing a bad implementation as a bad implementation.

Personally, I've found great success by applying DDD *where necessary and as needed*, modifying it to best fit my needs (emphasis mine). I write code with fewer bugs, which is more easily understood, and which enforces patterns & separations that improve productivity, faster than I ever have before. This isn't because I "went DDD"; it's because I bought the blue book, read it, and then cherry-picked the parts that work well for my use cases.
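To make that concrete: one of the most commonly cherry-picked tactics from the book is the value object. A minimal sketch (the type and its rules are my own illustration, not from the book) of a value object that validates its own invariants, so invalid state can't leak into the rest of the domain:

```csharp
using System;

// Hypothetical value object: construction is the only way to get an
// instance, so every EmailAddress in the domain is already validated.
public readonly record struct EmailAddress
{
    public string Value { get; }

    public EmailAddress(string value)
    {
        // Deliberately simplistic check, just to show invariant enforcement
        if (string.IsNullOrWhiteSpace(value) || !value.Contains('@'))
            throw new ArgumentException("Not a valid email address.", nameof(value));
        Value = value.Trim();
    }

    public override string ToString() => Value;
}
```

Being a `record struct`, it also gets value-based equality for free, which is exactly the semantics a value object calls for.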

And that's the crux of it. Every team, every application, every job is different. And that difference requires a modified approach that takes DevX & ergonomics into consideration. There is no one-size-fits-all solution, it ALWAYS needs to be picked at and adjusted.


To answer your question

Yes, I have had lots of pain from DDD. However, following the principles of pain-driven development, when that pain arises we reflect, and then change our approach to reduce or eliminate it.

Pain is unavoidable, it's how you deal with it that matters. Do you double down and make it worse, or do you stop, reflect, fix the pain, refactor, and move on with an improved and more enlightened process?

It's literally just "agile", but for developer experience.

[–] douglasg14b 2 points 1 year ago* (last edited 1 year ago)

System.Text.Json routinely fails to be ergonomic, it's quite inconvenient overall actually.

JSON is greedy, but System.Text.Json isn't, and it falls over constantly for common use cases. I've been trying it out on new projects with every new release since .NET Core 2, and every time it burns me.
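A concrete example of the kind of default that bites people (a minimal sketch; the `User` type is mine): System.Text.Json matches property names case-sensitively by default, so the camelCase JSON most web APIs emit silently deserializes to nulls and zeros against PascalCase C# properties unless you opt in.

```csharp
using System.Text.Json;

public class User
{
    public string? Name { get; set; }
    public int Age { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // camelCase, as most web APIs emit it
        var json = "{\"name\":\"Ada\",\"age\":36}";

        // Default options are case-sensitive: no exception, just silent defaults.
        var strict = JsonSerializer.Deserialize<User>(json);
        // strict.Name == null, strict.Age == 0

        // Opting back in to the leniency Newtonsoft.Json had out of the box:
        var options = new JsonSerializerOptions
        {
            PropertyNameCaseInsensitive = true,
            ReadCommentHandling = JsonCommentHandling.Skip,
            AllowTrailingCommas = true,
        };
        var relaxed = JsonSerializer.Deserialize<User>(json, options);
        // relaxed.Name == "Ada", relaxed.Age == 36
    }
}
```

The failure mode is the silence: nothing throws, the object is just quietly empty.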

GitHub threads requesting sane defaults, greedier behavior, and better DevX/ergonomics are largely met with disdain by maintainers, indicating that the state of System.Text.Json is unlikely to change...

I really REALLY want to use the native tooling; that's what makes .NET so productive to work in. But JSON handling & manipulation is still an absolute nightmare.

Would not recommend.

[–] douglasg14b 18 points 1 year ago* (last edited 1 year ago) (4 children)

And what does it imply?

That an AI might be better at writing documentation than the average dev, who is largely inept at writing good documentation?

Understandably so, as technical writing isn't exactly a focus point or career-growing skill for most devs. If it were, we would be writing much better code as well.

I've seen my peers' work; they could use something like this. I'd welcome it.

[–] douglasg14b 11 points 2 years ago* (last edited 2 years ago) (8 children)

I do feel like C# saw C++ and said "let's do that" in a way.

One of the biggest selling points of the language is its long-term and cross-repo/product/company/etc. consistency. The language will be very recognizable regardless of where and by whom it's written, due to well-established conventions.

More and more ways to do the same thing in slightly different ways is nice for the sake of choice, but it's also making the language less consistent and portable.

Meanwhile, important language features like discriminated unions are still missing, things that other languages now provide by default. C# is incredibly "clunky" compared to, say, TypeScript, solely from a type-system perspective. The .NET ecosystem of course more than makes up for the difference, but the language itself is definitely not as enjoyable to work with.

[–] douglasg14b 8 points 2 years ago* (last edited 2 years ago) (1 children)

The great thing about languages like C# is that you really don't need to "catch up". It's incredibly stable, and what you know about C# 8 (you could really get away with C# 6 or earlier) is more than enough to get you through the vast majority of personal and enterprise programming needs for the next 5-10 years.

New language versions add features, improve existing ones, and improve the ergonomics, without necessarily breaking or changing anything that came before.

That's one of the major selling points really, stability and longevity. Without sacrificing performance, features, or innovation.

[–] douglasg14b 2 points 2 years ago (1 children)

Yessss.

C#/.Net backends are the best. The long term stability, DevX, and the "it just works" nature of all the tooling makes it a great choice. It's also fast, which makes scaling for most applications a non-issue.

I've switched from SQL Server to Postgres for the DB, have never looked back, and would recommend it.

[–] douglasg14b 3 points 2 years ago* (last edited 2 years ago) (1 children)

.Net + EF Core + Vue/TS + Postgres. Redis as needed, Kafka as needed.

I can get applications built extremely quickly, and their maintenance costs are incredibly low. The backends are stable, and can hang around for 5, 10+ years without issue or problems with ecosystem churn.

You can build a library of patterns and reusable code that you can bring to new projects to get them off the ground even faster.

Would recommend.

[–] douglasg14b 3 points 2 years ago* (last edited 2 years ago) (1 children)

No matter how low-overhead you get, it's still Node, which means it's still a full order of magnitude behind Go & ASP.NET Core (~600k RPS for raw Node vs ~7M RPS for ASP.NET Core & Go). That means ~10x the compute costs for the same outcomes.

It's not a bad thing, to be clear, but the underlying technology has issues that frameworks on top of it can't really address.

There's also the meme of "yet-another-framework", which may or may not be in some state of deprecation, abandonment, or incompatibility in 5 years.
