Balinares

joined 1 year ago
[–] [email protected] 10 points 3 months ago (5 children)

Firefox's stance on privacy, like Apple's, is to some extent branding. Arguably it always was. You should still use Firefox (or any other third-party browser) if it works for you. Ecosystem diversity matters.

[–] [email protected] 13 points 3 months ago (7 children)

They didn't drop the "don't be evil" thing. It's still right there in the code of conduct where it always was; they just moved it to the conclusion of the document, so it's the last thing that stays with you. See for yourself: https://abc.xyz/investor/google-code-of-conduct/

The supposed removal is a perfect example of the outrage-bait headlines I'm discussing in another comment.

[–] [email protected] 5 points 3 months ago

It's not the company it once was, but there are also a lot of outrage-bait headlines about it that don't hold up well to scrutiny.

For instance, there have been a lot of Lemmy posts about Chrome supposedly removing the APIs used by ad blockers. I figured I'd validate that on my own by switching to the version of uBlock that's built on the new Manifest V3 API. Well, as it turns out, it works fine. It's also faster.

Mind you, figuring out the actual facts behind each post gets exhausting, and people just shutting down and avoiding the problem space entirely makes some sort of sense. That, and it's healthy for an ecosystem to have alternatives, so I'd keep encouraging the use of Firefox and such on that basis alone.

[–] [email protected] 13 points 3 months ago

This is actually an excellent question.

And for all the discussions on the topic in the last 24h, the answer is: until a postmortem is published, we don't actually know.

There are a lot of possible explanations for the observed events. Of course, one simple and very easy-to-believe explanation would be that the software quality processes and reliability engineering at CrowdStrike are simply below industry standards -- if we're going to be speculating for entertainment purposes, you can in fact imagine them to be as comically bad as you please; no one can stop you.

But as a general rule of thumb, I'd be leery of simple and easy-to-believe explanations. Of all the (non-CrowdStrike!) headline-making Internet infrastructure outages I've been personally privy to, and that were speculated about in places like Reddit or Lemmy, not one of the commenters' speculations came close to the actual, often fantastically complex, chain of events involved in the outage. (Which, for mysterious reasons, did not seem to keep the commenters from speaking with unwavering confidence.)

Regarding testing: testing buys you a certain necessary degree of confidence in the robustness of the software. But this degree of confidence will never be 100%, because in all sufficiently complex systems there will be unknown unknowns. Even if your test coverage is 100% -- every single instruction of the code is exercised by at least one test -- you can't be certain that every test accurately models the production environments that the software will be encountering. Furthermore, even exercising every single instruction is not sufficient protection on its own: the code might for instance fail in rare circumstances not covered by the test's inputs.
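To make that last point concrete, here's a toy illustration (made-up names, nothing to do with any real product's code): the single test below exercises every line of the function, so coverage reports 100%, and yet the function still blows up the moment a production config ships a threshold of "0".

```python
def parse_threshold(config: dict) -> float:
    # Both lines are exercised by the test below, so line coverage is 100%.
    raw = config.get("threshold", "0.5")
    return 1.0 / float(raw)  # ...but this divides by zero if a config ever ships "0"

def test_parse_threshold():
    # The only test input never exercises the divide-by-zero path.
    assert parse_threshold({"threshold": "2"}) == 0.5
```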

For these reasons, one common best practice is to assume that the software will sooner or later ship with an undetected fault, and to therefore only deploy updates -- both of software and of configuration data -- in a staggered manner. The process looks something like this: a small subset of endpoints is selected for the update, the update is left to run on those endpoints for a certain amount of time, and the selected endpoints' metrics are then assessed for unexpected behavior. Then you repeat the process with a larger subset of endpoints, and so on until the update has been deployed globally. The early subsets are sometimes called "canaries", as in the expression "canary in a coal mine".
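To give you a feel for the shape of that process, here's a minimal sketch, with invented stage sizes, a toy soak time, and placeholder deploy / health-check / rollback functions -- an illustration of the general pattern, not anyone's actual pipeline:

```python
import random
import time

STAGES = [0.01, 0.05, 0.25, 1.00]  # fraction of the fleet updated at each wave
SOAK_TIME_S = 5                    # toy value; real soak periods are hours or days

def push_update(endpoints):
    """Placeholder: deliver the update to these endpoints."""
    print(f"updating {len(endpoints)} endpoints")

def metrics_look_healthy(endpoints):
    """Placeholder: in reality, query crash rates, error budgets, etc."""
    return True

def rollback(endpoints):
    """Placeholder: revert the update on the affected endpoints."""
    print(f"rolling back {len(endpoints)} endpoints")

def staggered_rollout(fleet):
    random.shuffle(fleet)              # pick canaries at random
    already_updated = 0
    for fraction in STAGES:
        target = int(len(fleet) * fraction)
        push_update(fleet[already_updated:target])
        time.sleep(SOAK_TIME_S)        # let the wave soak before widening
        if not metrics_look_healthy(fleet[:target]):
            rollback(fleet[:target])   # halt and revert: the blast radius stays small
            raise RuntimeError(f"rollout halted at {fraction:.0%}")
        already_updated = target

staggered_rollout([f"host-{i}" for i in range(10_000)])
```

The whole point of the pattern is that a bad update can only ever take out the current wave, and the rollout stops well before it reaches the entire fleet.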

Why a staggered deployment does not appear to have occurred in the CrowdStrike outage is the unanswered question I'm most curious about. But, to give you an idea of the sort of stuff that can happen in general, here is a selection of plausible scenarios, some of which have been known to occur in the wild in some shape or form:

  • The update is considered low-risk (for instance, it's a minor configuration change without any code change) and there's a pressing reason to expedite the deployment, for instance because it addresses a zero-day vulnerability under active exploitation by adversaries.
  • The update activates a feature that an important customer wants now, the customer phoned a VP to say so, and the VP then asked the engineers, arbitrarily loudly, to expedite the deployment.
  • The staggered deployment did in fact occur, but the issue takes the form of what is colloquially called a time bomb, where it only triggers later on, in response to a change in the state of the production environments -- typically, the simple passage of time. Time bomb issues are the nightmare of reliability engineers and are difficult to defend against. They are also, thankfully, fairly rare.
  • A chain of events resulting in a misconfiguration where all the endpoints, instead of only those selected as canaries, pull the update.
  • Reliability engineering not being up to industry standards.

Of course, not all of the above fit the currently known (or, really, believed-to-be-known) details of the CrowdStrike outage. It is, in fact, unlikely that the actual chain of events behind the CrowdStrike outage will turn up in a random comment on Reddit or Lemmy. But hopefully this sheds a small amount of light on your excellent question.

[–] [email protected] 1 points 4 months ago

One funny thing about humans is that they aren't just gloriously fallible: they also get quite upset when that's pointed out. :)

Unfortunately, that's also how you end up with blameful company cultures that actively make reliability worse: your humans then make just as many mistakes, but they hide them -- and you never get the chance to evolve your systems with the safeguards that would have prevented those mistakes.

[–] [email protected] 7 points 4 months ago (2 children)

I had no idea DF had macros, but it makes so much sense.

[–] [email protected] 48 points 4 months ago

For serious. I wish they hired remote.

[–] [email protected] 7 points 4 months ago (1 children)

Multiple cursors are fantastic for certain use cases, but will not help you when each line needs a different input -- if you're swapping arguments in function calls, if you're replacing one bracket type with another around contents of arbitrary length, etc.

Mind you, if your objective here is to not learn a new thing, then you can just go ahead and do that, you don't need an excuse.

[–] [email protected] 33 points 4 months ago (9 children)

Where editors usually have editing shortcuts, vim has an editing grammar.

So you can copy (or select, or replace, or delete, or any other editing verb) N arguments or blocks or lines or functions or any entity for which vim has an editing noun, or around or inside any of these, and you only need to remember a handful of such editing verbs, nouns, and adjectives to immediately become much more effective. For example, ci( changes the text inside the nearest parentheses, daw deletes a word along with the surrounding whitespace, and yap yanks (copies) a whole paragraph.

It's so effective that switching back to a regular editor feels annoyingly clunky. (I guess that's why many offer vim plugins these days.)

Better: you can record entire editing sentences and replay them. Ever had to make the same change on dozens of lines? Record it once with qa ... q, replay it thirty times with 30@a, and you're done in seconds.

Now of course, replaying a sentence, or several sentences, is also a sentence of its own that you can replay in another file if you want.

It's neat. :)

[–] [email protected] 2 points 4 months ago (1 children)

I know, right? Some countries are much better about it, though. In Ireland, Varadkar and Martin recently shared the Taoiseach (prime minister) role when neither of their parties won enough seats to form a government. There wasn't much fuss about it; it was just a reasonable compromise, so they went and did it.
