Cyno

joined 2 years ago
[–] Cyno 1 points 1 day ago

This is good advice, thanks. I will definitely get it written down and approved eventually, but the issue is with

It looks very open in your case. Is there no standard or precedent for what you are doing? Something you could lean on?

We're in uncharted waters, doing stuff that nobody in the company has experience with. We're getting some ideas thrown at us, but that's the reason for this topic - I don't feel knowledgeable enough yet to decide whether they're right or whether they're selling us the hammers they happen to like when we actually need something else.

On the other hand, if I just do the simplest, dumbest thing for an MVP, am I not again just being a hammer and seeing everything as a nail, when I should be learning, adapting and applying the correct tool? I kind of want to use the opportunity to do it better or learn something new.

[–] Cyno 2 points 1 day ago* (last edited 1 day ago) (2 children)

Were your project managers always so technically capable? In my experience they represent the business, and while they ultimately have to sign off before development starts, they don't come up with the architecture and design of the solution itself - that should come from the developers/engineering team. At the very least the devs propose the possible options and their costs/tradeoffs and then management picks one, but it's not like they will come down and tell you whether you should use SQL, Postgres, Mongo or whatever database.

[–] Cyno 1 points 1 day ago (4 children)

(Customer) specifications rarely describe the technical implementation down to the most basic detail, though. They also won't account for every possible technical problem that could arise - customers generally don't know or care about those.

Maybe if you're a junior in a very professional and experienced company you can expect the perfectly documented Jira ticket that could, at that point, be solved by pasting it into ChatGPT, but in most cases you will be expected to solve and anticipate the unknowns, especially if you're in a more senior position.

 

I will frame the question in terms of a specific C# objective that I am working on right now, but I imagine the question is a pretty general one, related in a way to the Dunning-Kruger effect - how do you know how to build an application when you don't know all the issues you are supposed to prevent?

There is a message hub and I am developing a consumer for it. The original plan was to just create a few background services that get initialized alongside the application, that have a loop to load new messages/events and then process them.

Some time has passed and it feels like I am knee-deep in Wolverine, Quartz, Hangfire, MassTransit, the transactional outbox and all manner of related things now. The potential issues are dealing with downtime and preventing loss of messages (by storing them in a separate table before processing them), and everyone on the planet has a different idea of how to prevent and solve them - none of which sounds simple or straightforward.

Honestly, at this point I don't know enough about which problems are going to appear down the line and whether I need to use third-party libraries, but I am guessing they exist for a reason and people aren't supposed to just hand-roll their own amateurish implementations of them instead? But how do you know where to draw the line when you don't know exactly which problems you are supposed to be solving?
What are the problems with having a plain table as the message queue instead of a whole 3rd-party library, or what's wrong with the MS BackgroundService class? How are you supposed to know this?
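For reference, the "simplest dumbest thing" version of the original plan is roughly this - a polling BackgroundService over a staging table (just a sketch; IMessageStore and its methods are made-up placeholders, not any real library):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Made-up placeholder for "load messages from a table, mark them done" - not a real library.
public interface IMessageStore
{
    Task<IReadOnlyList<string>> GetPendingAsync(CancellationToken ct);
    Task MarkProcessedAsync(string message, CancellationToken ct);
}

public class MessageConsumerService : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public MessageConsumerService(IServiceScopeFactory scopeFactory) => _scopeFactory = scopeFactory;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // New scope per iteration so scoped services (e.g. a DbContext) don't live forever.
            using var scope = _scopeFactory.CreateScope();
            var store = scope.ServiceProvider.GetRequiredService<IMessageStore>();

            foreach (var message in await store.GetPendingAsync(stoppingToken))
            {
                // ...actual processing of the event would go here...
                await store.MarkProcessedAsync(message, stoppingToken);
            }

            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}
```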

[–] Cyno 1 points 2 months ago* (last edited 2 months ago) (1 children)

You are probably right and I just misunderstood fixtures/collections and how they work. I am now trying to configure it using postgres testcontainers and just letting each test create its own, but I'm facing a bunch of other issues, so I'm not even sure how this works anymore - it seems like every tutorial has a different approach. Some just put all the code for creating containers in the setup/dispose of the test class itself instead of trying to be smart with the WebApplicationFactory fixtures, and maybe I'll just end up doing that.
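E.g. the "everything in the test class" version I keep seeing would look something like this, I think (a sketch assuming the Testcontainers.PostgreSql and Npgsql EF Core packages; AppDbContext stands in for my own context):

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Testcontainers.PostgreSql;
using Xunit;

public class MoviesApiTests : IAsyncLifetime
{
    // One throwaway postgres container per test class.
    private readonly PostgreSqlContainer _db = new PostgreSqlBuilder().Build();

    public async Task InitializeAsync()
    {
        await _db.StartAsync();
        await using var ctx = CreateContext();
        await ctx.Database.EnsureCreatedAsync(); // create the schema in the fresh container
    }

    public Task DisposeAsync() => _db.DisposeAsync().AsTask();

    // AppDbContext is assumed to have the usual DbContextOptions constructor.
    private AppDbContext CreateContext() =>
        new(new DbContextOptionsBuilder<AppDbContext>()
            .UseNpgsql(_db.GetConnectionString())
            .Options);

    [Fact]
    public async Task Container_database_is_reachable()
    {
        await using var ctx = CreateContext();
        Assert.True(await ctx.Database.CanConnectAsync());
    }
}
```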

[–] Cyno 1 points 2 months ago* (last edited 2 months ago) (3 children)

My first intent was to just have one local sqlite test db that would get reset to an empty state before the tests run (EnsureDeleted + EnsureCreated), and then have them all run concurrently on it. It sounded simple to set up and simple enough for my small CRUD app that only had a few tests.
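(i.e. literally just this at the start of the run - a fragment, where AppDbContext and the options pointing at the local sqlite file come from my app:)

```csharp
// Wipe and recreate the shared local test db so every run starts from an empty schema.
await using var ctx = new AppDbContext(options);
await ctx.Database.EnsureDeletedAsync();
await ctx.Database.EnsureCreatedAsync();
```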

My second intent was for the framework to create a new in-memory sqlite db for each test, so I could fix the problem of tests failing when I ran all of them at the same time - presumably because they all referenced the same db.

I am currently trying to complicate my life further, in the hopes that it helps with this, by using a postgres database instead; in the IntegrationTests project I'm using TestContainers to get a PostgreSqlContainer. I am currently suffering because of some change I made, so my tests aren't even being found anymore - despite being listed in the test explorer, when I run them I get "Test discovery finished: 0 Tests found" in the output. Honestly I think I'm just gonna give up on integration testing like this, it's been a complete waste of time so far.

Dunno what else I could say about my project that's relevant - it's a standard CRUD webapp with 2 controllers, and the integration test project has Facts like this. Very basic stuff, I'd say. Unit tests are a separate project and will just be simple method checks, no mocking (or at least as little as possible).

[–] Cyno 1 points 2 months ago* (last edited 2 months ago) (5 children)

Configuring a DbContextFactory in the WebAppFactory instead of a DbContext breaks my services - they can't resolve a DbContext anymore, so all the requests from my test classes fail. Either I misunderstood you or I'm misunderstanding how this works, but it makes sense that I need to properly replace the injectable DbContext so it's fixed everywhere, rather than just add a DbContextFactory for the test classes while the actual code still injects a DbContext.

Configuring the DbConnection service scope as Transient didn't change anything.

I might consider efficiency and speed later, but for now I'd be happy to just get it working on this simple CRUD app with 2 test classes. I've spent hours trying various solutions from Google and I'm a bit frustrated that there's no simple guide for something that seems like it should be so simple at this point.

 

I'm a bit confused whether I'm doing this right because every resource I google for has a different way of setting it up.

Some of them initialize the dbContext right in the test class, some do it in the WebAppFactory's ConfigureServices (or is it ConfigureTestServices?).

Some do it in IAsyncLifetime's InitializeAsync, some do it via dependency injection, and other examples just put it as a member variable in the factory.

I don't wanna code dump my project here and ask someone to solve it for me, but I'm not sure anymore what to do. My current attempt uses an sqlite database and it breaks when I try to run all the tests at the same time due to this.

Makes sense, since they are all using the same db in this case, so I tried following a guide and just letting them use the :memory: one - but that one, for some reason, doesn't seem to initialize the database at all, and the tests fail because the database doesn't have any tables.
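For what it's worth, from what I've pieced together since: every new connection to :memory: gets its own brand-new empty database, so unless a single SqliteConnection is kept open and handed to UseSqlite, EF sees an empty db every time. The guides seem to mean something like this (a sketch; Program and AppDbContext stand in for my own types, and something still has to call EnsureCreated once):

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Extensions;

public class TestWebAppFactory : WebApplicationFactory<Program>
{
    // The in-memory database only exists while this one connection stays open.
    private readonly SqliteConnection _connection = new("Data Source=:memory:");

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        _connection.Open();

        builder.ConfigureTestServices(services =>
        {
            // Swap the app's real provider for sqlite over the shared open connection.
            services.RemoveAll(typeof(DbContextOptions<AppDbContext>));
            services.AddDbContext<AppDbContext>(o => o.UseSqlite(_connection));
        });
    }

    protected override void Dispose(bool disposing)
    {
        _connection.Dispose();
        base.Dispose(disposing);
    }
}
```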

I also added a CollectionDefinition with an ICollectionFixture for each individual test class (one per controller so far), thinking this might cause each test to get its own separate database (or factory?), but that didn't really do anything.

I'm hoping someone experienced can immediately recognize what I'm missing, or at the very least point me to a solid resource I could read to figure this out, but any help is appreciated.

[–] Cyno 1 points 2 months ago* (last edited 2 months ago)

Ohh, I mixed it up with FluentValidation, you are right. I never liked unit tests depending on libraries like that either, tbf - vanilla xUnit ain't that bad.

[–] Cyno 4 points 2 months ago* (last edited 2 months ago) (2 children)

This whole thing is just a nice reminder not to go overboard and use a 3rd-party library when it's completely unnecessary. I've never had a need for something like FluentValidation when you can do pretty much the same thing by writing the validation method directly in your Dto; it's a bit more verbose, but at least it gives you more control over the whole thing. Maybe I've just never used it at a scale that justifies the library, but it seems completely superfluous, dunno.
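E.g. by "directly in your Dto" I mean just the built-in DataAnnotations route, something like this (a rough sketch, the Dto is made up):

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Made-up Dto: attribute rules for the simple stuff, IValidatableObject for custom checks.
public class CreateMovieDto : IValidatableObject
{
    [Required, MaxLength(200)]
    public string Title { get; set; } = "";

    public int? ReleaseYear { get; set; }

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (ReleaseYear is < 1888)
            yield return new ValidationResult(
                "Release year is earlier than the first film ever made.",
                new[] { nameof(ReleaseYear) });
    }
}
```

With the usual [ApiController] setup, ASP.NET's model validation picks both of those up on its own, so there's no extra wiring.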

[–] Cyno 5 points 4 months ago

So, we all agree this is obvious rage-bait and just trolling? Nobody actually believes this is true or that anyone feels like that - or, even if they do, that they need to be acknowledged and validated by addressing such a ridiculous claim?

Right...? "Don't feed the trolls" is the OG internet rule, I wish it weren't forgotten...

[–] Cyno 4 points 4 months ago* (last edited 4 months ago)

I only used Obsidian for a few weeks so I didn't get that used to it, but what you mean could be the mental switch from Obsidian's hierarchical file structure to Logseq's journaling/time-based one? You're supposed to organize data with tags rather than remembering its location and structure in folders. I spend most of my time searching for tags, not specific files, and in that way it's functional enough for me, although I don't really understand the query syntax yet, so I can't create more complex searches this way. Tbh I'm hoping the sqlite switch lets me just write direct SQL.

For a specific example, instead of having folders like Software > Programming > csharp > my projects > projectx... I will just have a page for the project that has tags #programming #csharp #myprojects etc. Then I can search for #myprojects and see all the relevant info for it, even sorted by the date when I added it, which adds some nice historical context.

[–] Cyno 5 points 5 months ago (2 children)

I switched to Logseq from Obsidian since I preferred FOSS and it's been a good experience so far. They are working on a big update to switch to an sqlite db for storage which should help with performance (and I hope improve the search experience) so that's exciting too.

[–] Cyno 15 points 5 months ago* (last edited 5 months ago)

It's no reddit in terms of quantity, but honestly I've had higher quality topics and discussions here than there. Lemmy/kbin might not have taken off in the mainstream enough to offer a variety of subjects, but when it comes to tech and software I think it's covered well enough, and people are generally nicer about it. The main problem is the lack of a (remotely) good search function - I don't think the threads are getting indexed by Google, and the on-site search is atrocious.

I don't know of any Discord programming communities. I wish forums were still a thing, but the only live one I know of is the jellyfin one, after they moved off reddit. Other than that it's here or the various subreddits.

 

I've been migrating to linux recently and the next headache on the list is my starr apps (sonarr, radarr, etc). On windows I just had them installed as background services, but I wanted to give (rootless) podman a try on linux, since everyone kept recommending it and saying how much better the experience is than on windows.

Anyway, I've set everything up and some of the services work, but specifically sonarr and radarr can't write to the main media folder with the error: Folder '/data/media/tv/' is not writable by user 'abc'

So, first of all, I didn't create user 'abc' - it's allegedly some internal docker/podman/starr user that is supposed to be mapped to my real user, which I did by providing the PUID=1000 and PGID=1000 env variables.

Second, I tried to give read and write permissions to everyone for the placeholder folders but it didn't change anything. I don't think this is the issue since other services like the one for sabnzbd or jellyfin had no problems using folders I created.

Googling for the issue brought up some topics about NFS shares but I don't know anything about this - this is not a NAS or even some external drive, it's just podman installed on fedora.

Any help is appreciated, here's a pastebin of my compose file if it's relevant https://pastebin.com/uX9Saqvj

 

I could swear that my mouse is lagging on my second monitor, but I don't know how to actually "prove" it, if there's even a way. I am dual booting windows with fedora workstation gnome, and there is a noticeable sluggishness to my mouse control whenever I switch back and forth, but only on the secondary monitor. It is slight, but it's messing with my muscle memory and constantly making me overshoot clicks and buttons. The main display seems to be fine, or at least it's less pronounced there due to the higher refresh rate.

Is there any way I can measure it objectively and find the root cause? A diagnostic tool or an app that could test if something is wrong? It's a recent fedora installation and I've gone through all the nvidia driver and media setup steps in this fedora post-install guide, but honestly I don't even know if this is a fedora, gnome, driver or wayland issue (or something else completely).

27
What exactly is GNOME? (self.linux4noobs)
submitted 6 months ago by Cyno to c/linux4noobs
 

Dumb title but I didn't know how else to put this into words, bear with me for a sec - I am not just looking for the definition.

Years ago I tried Ubuntu, which used GNOME, and assumed that its desktop layout was "the default" GNOME. I later tried PopOS, which also uses it, and it was the same, and when I eventually installed Mint I saw that it was still fundamentally the same, with some slight tweaks and different tools.

Well, a few days ago I installed Bazzite (Fedora), which is also GNOME. It doesn't look like anything I've seen before, either in terms of mindset or technical layout. I've gone from an admittedly old-fashioned - but efficient and reliable! - layout and workflow to something that reminds me more of an Apple product: it's stylish and minimalist, yet inefficient and utterly frustrating to get anything done with because of how opinionated it is.

When searching for common problems I often found comments saying stuff like "but try it out! it's in the spirit of gnome, it takes a while to get used to it but the philosophy is valid", and frankly I don't understand anymore what exactly GNOME is and what its design principles are - if there even are any, or if every distro just does whatever the f it wants and calls it "a gnome experience".

 

I need to remote desktop connect to a windows PC on a local network. This works flawlessly when done from my windows PC but I'm having issues on Linux Mint.

I'm using Remmina since it was the most common answer for a linux RDP client. I imported the RDP file from windows, but I also created a connection with manually filled-in info.

The first issue is that linux can't connect to the machine by its name - on windows, ping MYPC-321 works; on linux mint it throws an error. However, ping MYPC-321.local does work, but if I try to use that as the address in Remmina, it fails again. Is there a way to connect using just the name, since I don't want to have to re-check the IP address every day?

But let's say that's resolved for now by just using the local IP address. The second and main problem is authentication. No matter what I put into the username and domain fields of Remmina's authentication GUI, it instantly fails and Remmina reloads the screen without giving me any error. The credentials are the same as when connecting from the windows PC (although I don't have to specify the domain there), so I have no idea what the problem could be here.

Is there something else I'm missing, something fundamentally different about how this works on linux? I wasn't expecting such a simple and straightforward thing to instantly cause issues.

 

cross-posted from: https://programming.dev/post/18636248

I've always approached learning Linux by just diving into it and bashing my head against problems as they come until I either solve them or give up, the latter being the more common outcome.

I wouldn't take this approach with other pieces of software, though - I'd read guides and best practices, have someone recommend good utility tools or extensions to install, which shortcuts to use, what kind of file hierarchy to use, etc.
For example, for Python I'd always recommend "Automate the Boring Stuff with Python", I remember learning most of my Java from the "Head First Java" book back in the day, and C# has really good official guides for all concepts, libraries, patterns, etc.

So... lemme try that with Linux then! Are there any good resources - youtube videos, bloggers or other content creators, books - that explain everything important about Linux and how to get it running in an optimal and efficient way, and that are fun and interesting to read? Things like how the file hierarchy works, what /etc is, how to install new programs with proper permissions, when to use sudo, what a flatpak is and why to use it over something else, how to back up your system so you can easily reconstruct your setup in case you need to do an OS refresh, etc? All those things that people take for granted but that are actually a huge obstacle course + minefield for beginners?

And more importantly, that it's up to date with actually good advice?

 


11
submitted 8 months ago* (last edited 8 months ago) by Cyno to c/learn_programming
 

I understand the basic principle, but I have trouble determining where the hard line is between the responsibilities of a Repository and a Service. I'm mostly thinking in terms of C# .NET in the following example, but I think the design pattern is fairly universal.

Let's say I have tables "Movie" and "Genre". A movie might have multiple genres associated with it. I have a MovieController with the usual CRUD operations. The controller talks to a MovieService and calls the CreateMovie method for example.

The MovieService should do the basic business checks - verifying that the movie doesn't already exist in the database, that all the mandatory fields are properly filled in - and then create it with the given Genres associated with it. The Repository should provide the service with access to the database.
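Roughly the shape I have in mind (a simplified sketch; the types are just placeholders for my own):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public record CreateMovieDto(string Title);

public class Movie
{
    public int Id { get; set; }
    public string Title { get; set; } = "";
}

// Repository: data access only.
public interface IMovieRepository
{
    Task<bool> ExistsAsync(string title, CancellationToken ct);
    Task AddAsync(Movie movie, CancellationToken ct);
}

// Service: business checks, then delegates persistence to the repository.
public class MovieService
{
    private readonly IMovieRepository _movies;

    public MovieService(IMovieRepository movies) => _movies = movies;

    public async Task CreateMovieAsync(CreateMovieDto dto, CancellationToken ct)
    {
        if (string.IsNullOrWhiteSpace(dto.Title))
            throw new ArgumentException("Title is required.");
        if (await _movies.ExistsAsync(dto.Title, ct))
            throw new InvalidOperationException("Movie already exists.");

        await _movies.AddAsync(new Movie { Title = dto.Title }, ct);
    }
}
```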

It all sounds simple so far, but I am not sure about the following:

  • Which layer should be responsible for column filtering? If my return Dto only exposes 3 out of 10 Movie fields, should the mapping into the Dto be done in the repository or the service layer?

  • If I need to create a new Genre entity while creating a new movie, and I want it all to happen in a single transaction, how do I do that if I have to go through MovieRepository and GenreRepository instead of doing it in the MovieService, where I don't have direct access to the dbcontext (and therefore can't start a transaction)?

  • Let's say I want to filter entries to the currently logged-in user (every user makes their own movie and genre lists) - should I filter by user ID in the MovieService, or should I implement this condition in the repository itself?

  • Is the EF DbContext already a repository, and maybe I shouldn't be making wrappers around it in the first place?

Any help is appreciated. I know I can get it working one way or another but I'd like to improve my understanding of modern coding practices and use these patterns properly and efficiently rather than feeling like I'm just creating arbitrary abstraction layers for no purpose.

Alternatively, if you can point me to a good open source project that's easy to read and has examples of a complex app with these layers well organized, I can take a look at that too.

 

Let's say I am making an app that has table Category and table User. Each user has their own set of categories they created for themselves. Category has its own Id identity that is auto-incremented in an sqlite db.

Now I was thinking: since this is the ID that users will be seeing in their URL when editing a category, for example, shouldn't it be an ID specific only to them? If a user makes 5 categories, they should see IDs from 1 to 5, not start at 14223 or whatever the next internal ID in the database happens to be. After all, when querying the data I will only be showing them their own categories, so I will always be filtering on UserId anyway.

So let's say I add a new column called "UserSpecificCategoryId" or something like that - how do I make sure it is auto-generated in a safe way and stays unique per user? Do I have to do it manually in the code (which sounds annoying), use some sort of db trigger (we hate triggers, right?), or is this something I shouldn't even be bothering with in the first place?
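For context, the "manually in the code" option I'm imagining is something like this - take MAX + 1 for that user and back it up with a unique index on (UserId, UserSpecificCategoryId) so concurrent inserts fail instead of duplicating (a sketch; AppDbContext and Category stand in for my own types):

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public static class CategoryCreator
{
    public static async Task<Category> CreateAsync(AppDbContext db, int userId, string name)
    {
        // Highest per-user id so far (0 if the user has no categories yet).
        var currentMax = await db.Categories
            .Where(c => c.UserId == userId)
            .Select(c => (int?)c.UserSpecificCategoryId)
            .MaxAsync() ?? 0;

        var category = new Category
        {
            UserId = userId,
            Name = name,
            UserSpecificCategoryId = currentMax + 1
        };

        db.Categories.Add(category);
        // With the unique index in place, a racing insert throws DbUpdateException
        // instead of silently handing two categories the same per-user id.
        await db.SaveChangesAsync();
        return category;
    }
}
```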

 

Let's say I have a method that I want to make generic; so far it has had a big switch over types.

For a simplified example:

```csharp
switch (field.GetType())
{
    case var t when t == typeof(int):  Method((int)x); break;
    case var t when t == typeof(int?): Method((int?)x); break;
    case var t when t == typeof(long): Method((long)x); break;
    // ...one case per supported type
}
```

I'd like to be able to just call GenericMethod(field) instead, and I'm wondering if this is possible and how I would go about doing it:

```csharp
GenericMethod(field);

public void GenericMethod<T>(T field) { ... }
```

Can I use reflection to get the Type and then pass it into the generic method somehow - is it possible to turn a runtime Type into the generic type parameter T?

Can I have a method on the field object that will somehow give me a type for use in my generic method?

Sorry for the confusing question, I'm not really sure how to phrase it correctly, but basically I want to get rid of the switch cases and a lot of manual coding when all I need is just the type (but that type can't be passed in as a generic parameter from the parent class).
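In other words, I'm wondering if the answer is something along these lines - building the closed generic via reflection at runtime (a sketch):

```csharp
using System;
using System.Reflection;

public static class Caller
{
    public static void GenericMethod<T>(T field) =>
        Console.WriteLine($"Called with T = {typeof(T)}, value = {field}");

    // Closes GenericMethod<T> over the runtime type of `field` and invokes it,
    // instead of the big switch over every supported type.
    public static void CallGeneric(object field)
    {
        MethodInfo open = typeof(Caller).GetMethod(nameof(GenericMethod))!;
        MethodInfo closed = open.MakeGenericMethod(field.GetType());
        closed.Invoke(null, new[] { field });

        // Caveat: a boxed int? reports typeof(int) from GetType(), so the
        // "NullInt" case can't be told apart from a plain int this way.
    }
}

// Usage:
// Caller.CallGeneric(42);   // dispatches to GenericMethod<int>
// Caller.CallGeneric(42L);  // dispatches to GenericMethod<long>
```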

 

To clarify, I mean writing scripts that generate or modify classes for you instead of writing them manually every time - for example, if you want to replace reflection with a ton of verbose, repetitive code for performance reasons, I guess?

My only experience with this is plain old manual text generation with something like Python, and maintaining legacy T4/.tt VS files, but those are kind of a nightmare.

What's a good modern way of accomplishing this, have there been any improvements in this area?
