this post was submitted on 20 Oct 2024
3 points (56.5% liked)


Hi friends, as promised, I'm back with my second post. I'll be hanging around in the comments for any questions!

In this post, I take a look at a typical deployment process, how long each part of it takes, and then I present a simple alternative that I use which is much faster and perfect for hobbit software.

all 11 comments
[–] Kissaki 14 points 1 month ago (1 children)

So it really is that simple: a small bash script, building locally, rsync'ing the changes, and restarting the service. It's just the bare essentials of a deployment. That's how I deploy in 10 seconds.
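
For context, a minimal sketch of what such a script could look like (the build command, host, paths, and service name below are placeholders, not the article's actual script):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Build locally (placeholder build command)
npm run build

# Sync only the changed files to the server (placeholder host and paths)
rsync -az --delete ./dist/ deploy@example.com:/srv/myapp/

# Restart the service so it picks up the new files (placeholder service name)
ssh deploy@example.com 'sudo systemctl restart myapp'
```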

I'm strongly opposed to local builds on any semi-important or semi-complex production product or system.

Tagged CI release builds give you a lot of important guarantees involved in release concerns.

I'll take the fresh checkout and release build time cost for those consistency and versioned source state guarantees.

[–] [email protected] 2 points 1 month ago (1 children)

I would imagine you could run into an issue like this when building on an M1 or newer Mac and deploying to a Linux-based env. We've had to make an adjustment to our Docker image builds, where we need to set the buildarch or else the deploy fails.
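
For example, a cross-platform build along these lines pins the target architecture explicitly (image name and tag are placeholders):

```bash
# Build a linux/amd64 image from an arm64 (Apple Silicon) machine
docker buildx build --platform linux/amd64 -t registry.example.com/myapp:latest .
```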

Our build times aren't blazingly fast, typically around 4 minutes for the npm/yarn build of our frontend apps, loading the data into the image, and any other extras like composer installs. The best time saving for us was a base image for all the dependency junk, which we rebuild nightly.
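
Roughly, that split looks something like this (image names and file names are made up for illustration):

```bash
# Nightly job: bake the slow dependency layers into a base image
docker build -f Dockerfile.base -t registry.example.com/myapp-base:nightly .
docker push registry.example.com/myapp-base:nightly

# Per-deploy build: the app's Dockerfile starts FROM that base image,
# so only the fast application layers get rebuilt
docker build -t registry.example.com/myapp:latest .
```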

[–] [email protected] 2 points 1 month ago

This was exactly the problem in my last environment. I was the second dev and two more were onboarded after me, but everyone had issues replicating the original dev’s local environment in order to deploy.

First thing I did was set up a basic gitops pipeline. Worked like a charm.

[–] [email protected] 8 points 1 month ago (1 children)

Your proposed solution to overly complex systems seems to be to ignore the requirements that make them complex in the first place. If that works for you, this is a perfectly fine approach. But most companies with actual signed SLAs won't accept "we'll just have a few seconds of downtime/high latency every time a developer deploys something to production #yolo".

[–] [email protected] 6 points 1 month ago (1 children)

Also, series F but they're only deploying on one server? Try scaling that to a real deployment (200+ servers) with millions of requests going through and see how well that goes.

Also, there's no way their process passes ISO/SOC 2/PCI certifications. CI/CD isn't just "make do things"; it's also the process, the logs, all the checks done, the mandatory peer reviews. You can't just deploy without an audit log of who pushed what, when, and who approved it.

[–] BatmanAoD 6 points 1 month ago (1 children)

You're not wrong, but not everything needs to scale to 200+ servers (...arguably almost nothing does), and I've actually seen middle managers assume that a product needs that kind of scale when in fact the product was fundamentally not targeting a large enough market for that.

Similarly, not everything needs certifications, but of course if you do need them there's absolutely no getting around it.

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago)

For sure, in PCI environments this doesn’t work. And in the Series F company we don’t use this approach for that very reason. But there’s tons of companies that don’t have or need external certifications, and it works for that much more common scenario. For the small web (i.e. most of the web), it’s ideal.

The important takeaway isn’t “wow, doing production builds on your PC isn’t secure.” Do it on a dedicated box in production, then. The important takeaway is that there’s a mountain of slow things (GitHub workers, Docker caching, etc.) that slow developer velocity, and we should design systems and processes that eliminate those pains.

[–] ericjmorey 6 points 1 month ago

I'm not sure I understand the trade-offs you're choosing by deploying this way. The benefits of simplicity and speed of deployment seem clear from your write-up. But are those the most important considerations? Why or why not?

[–] [email protected] 2 points 1 month ago (1 children)

This has the same logic as saying "npm install takes a while, so just don’t use libraries."

[–] [email protected] 1 points 1 month ago