this post was submitted on 24 Aug 2024
610 points (99.4% liked)

Programmer Humor

[–] [email protected] 47 points 3 months ago (3 children)

That's why you write your protocol as a sync library, then implement the async IO separately and map the data over the protocol modules.

[–] [email protected] 37 points 3 months ago (1 children)

I... Don't know what this means

[–] [email protected] 44 points 3 months ago (2 children)

So basically your typical network protocol is something that converts an async stream of bytes into things like Postgres Row objects. What you do then is you write a synchronous library that does the byte conversion, then you write an asynchronous library that talks with the database with async functions, but most of the business logic is sync for converting the data coming from the async pipe.
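The core of that pattern might look something like this in Rust. This is a minimal, hypothetical sketch: the frame format and names are made up for illustration, not Postgres's actual wire protocol.

```rust
// Hypothetical sans-IO sketch: the parser is pure sync code over a byte
// buffer and never touches a socket. The async layer just feeds it bytes.
#[derive(Debug, PartialEq)]
enum Parsed {
    /// A complete frame was decoded.
    Frame(Vec<u8>),
    /// Not enough bytes yet; the caller should read more from its (a)sync pipe.
    NeedMoreData,
}

/// Sync protocol core: try to decode one frame of the (made-up) form
/// [u32 big-endian length][payload] from `buf`, consuming what it uses.
fn decode_frame(buf: &mut Vec<u8>) -> Parsed {
    if buf.len() < 4 {
        return Parsed::NeedMoreData;
    }
    let len = u32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]]) as usize;
    if buf.len() < 4 + len {
        return Parsed::NeedMoreData;
    }
    let payload = buf[4..4 + len].to_vec();
    buf.drain(..4 + len);
    Parsed::Frame(payload)
}

fn main() {
    let mut buf = Vec::new();
    // Bytes arrive in arbitrary chunks from the (a)sync IO layer.
    buf.extend_from_slice(&[0, 0, 0, 3]);
    assert_eq!(decode_frame(&mut buf), Parsed::NeedMoreData);
    buf.extend_from_slice(b"abc");
    assert_eq!(decode_frame(&mut buf), Parsed::Frame(b"abc".to_vec()));
}
```

Note that `decode_frame` has no async anywhere: whether the bytes came from a blocking `TcpStream` or an awaited read, it behaves the same.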

Now, this can also be done in a higher-level application. In 2024 you write a server that is async by nature. Write the server part async, then implement a sync set of mapping functions that take an incoming request and return a response. If you need a database, one sync function maps the request to a database query, your async code calls the database with that query, and another set of sync functions maps the database result into an HTTP response. No need to color everything async.
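Sketched out, that split might look like this in Rust. The types and the `handle` driver are hypothetical; the driver is shown blocking here to stay dependency-free, but in a real server it would be the one `async fn` while both mappers stay uncolored.

```rust
// Hypothetical sketch: request→query and row→response are plain sync
// functions; only the outer driver (async in a real server) touches IO.
struct Request { user_id: u32 }
struct Response { body: String }

/// Sync: map an incoming request to a database query string.
fn request_to_query(req: &Request) -> String {
    format!("SELECT name FROM users WHERE id = {}", req.user_id)
}

/// Sync: map a database row to an HTTP-ish response.
fn row_to_response(row: &str) -> Response {
    Response { body: format!("200 OK: {}", row) }
}

/// The IO shell. In a real server this would be `async fn` and the
/// database call would be awaited; the two mappers don't change at all.
fn handle(req: Request, db_call: impl Fn(&str) -> String) -> Response {
    let query = request_to_query(&req); // sync
    let row = db_call(&query);          // the only IO-ish step
    row_to_response(&row)               // sync
}

fn main() {
    // The db_call closure stands in for real (a)sync database IO.
    let resp = handle(Request { user_id: 7 }, |q| format!("row for [{}]", q));
    println!("{}", resp.body);
}
```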

The good part with this approach is that if you want to make a completely sync version of this library or application, you just rewrite the async IO parts and can reuse all the protocol business logic. And you can provide sync and async versions of your library too!

[–] [email protected] 10 points 3 months ago (1 children)

This approach is so much nicer than the threading/queuing approaches we used to have. Once async showed up, a ton of the work got pulled out of protocol handling and distributed-subsystem sync efforts.

Long lived the multi-threaded C++ server buffer! Today, async begins to rule the roost.

[–] [email protected] 4 points 3 months ago (1 children)

It kind of fails with certain protocols. I once wrote an async MSSQL client for Rust, and some data doesn't declare its size in the headers. So this kind of forced the business logic to be async too.
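For some of those cases a sync state machine can still cope by scanning for a terminator instead of a length field, and handing "need more bytes" back to the async shell. A hypothetical sketch (NUL-terminated payloads; this is not the real MSSQL/TDS wire format):

```rust
// Hypothetical sketch: when the payload length isn't in a header, scan
// for a terminator; `None` means "NeedMoreData", keep the buffer and retry.
fn decode_until_nul(buf: &mut Vec<u8>) -> Option<Vec<u8>> {
    let pos = buf.iter().position(|&b| b == 0)?; // no terminator yet
    let payload = buf[..pos].to_vec();
    buf.drain(..=pos); // consume payload plus the terminator byte
    Some(payload)
}

fn main() {
    let mut buf = b"hel".to_vec();
    assert_eq!(decode_until_nul(&mut buf), None); // incomplete
    buf.extend_from_slice(b"lo\0rest");
    assert_eq!(decode_until_nul(&mut buf), Some(b"hello".to_vec()));
    assert_eq!(buf, b"rest"); // trailing bytes stay for the next frame
}
```

It doesn't save you when the framing genuinely depends on earlier async results, though, which sounds like what bit you here.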

[–] [email protected] 3 points 3 months ago

Yeah, those durn data size fields. At first you're like "why would you do this? It's specified in the spec, right?" Then you start consuming the data stream and go "oh yeah, I need this".

I was doing some driver work for a real time location tracking board. The serial stream protocol was very well documented and designed. Plenty of byte length count fields, though.

[–] [email protected] 10 points 3 months ago (1 children)

What is computer science degree?

[–] [email protected] 14 points 3 months ago (1 children)

Never had one, just partied in the uni and dropped out :D

[–] [email protected] 7 points 3 months ago
[–] bitfucker 6 points 3 months ago

Long live sans io!

[–] [email protected] 6 points 3 months ago

Or just make a bunch of static helpers >:)