kersplort

joined 2 years ago
[–] kersplort 9 points 9 months ago* (last edited 9 months ago)

If you want to level up your game, find a new job, or grow into a new role, by all means take a course or training on your own time. All of the concerns that you listed are probably worth spending dedicated time to upskill on.

If you stay in this field for much longer, you're going to run into a lot of cases where the thing you've been doing is replaced with the New Thing. The New Thing will have a couple new ideas, but will also fundamentally handle the same concerns as your Old Thing, often in similar ways. Don't spend your free time chasing the New Thing without getting something out of it - getting paid, making a project that you wanted to make anyways, contributing to New Thing open source projects.

If you sink work into the New Thing without anyone paying for it, that's fine, but accept that you might never find someone who will. Most companies are more than willing to hire experienced Old Thing devs on New Thing jobs, and will give you some time to skill up.

[–] kersplort 2 points 1 year ago (2 children)

It's not a fully controlled environment; that's the point of smokes.

[–] kersplort 1 points 1 year ago

Polling is certainly useful, but at some point the machinery you add to make a check reliable starts to degrade its effectiveness. I certainly want to know if the app is unreachable over the open internet, and I absolutely need to know if a partner's API is down.

[–] kersplort 1 points 1 year ago

Wherever possible, this is a good idea. The campsite rule - tests don't touch data they didn't bring with them - helps as well.

However, many end to end tests run as a pipeline, especially for entities that are core to the business function of the app. Cramming all of that sequentiality into a single test gives you all the problems described, just inside one giant test that you then have to fish the failing step out of.
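
As a sketch of what I mean by the campsite rule, here's roughly how it looks as a Playwright fixture - the order entity, endpoints, and fields are invented for illustration:

```typescript
import { test as base, expect } from '@playwright/test';

// Hypothetical order entity: each test creates its own record over the API
// and deletes it afterwards, so no test touches data it didn't bring.
const test = base.extend<{ order: { id: string } }>({
  order: async ({ request }, use) => {
    // Create the test's own data (endpoint and payload are illustrative).
    const created = await request.post('/api/orders', {
      data: { sku: 'TEST-SKU', quantity: 1 },
    });
    const order = await created.json();

    await use(order);

    // Clean up after yourself, even if the test failed.
    await request.delete(`/api/orders/${order.id}`);
  },
});

test('order shows up in the dashboard', async ({ page, order }) => {
  await page.goto(`/orders/${order.id}`);
  await expect(page.getByText('TEST-SKU')).toBeVisible();
});
```

The teardown after `use()` runs whether the test passed or not, which is most of the point.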

[–] kersplort 2 points 1 year ago (6 children)

My experience with E2E testing is that the tools and methods needed to test a complex app are flaky. Waits, checks for text or selectors, and custom form-field navigation all need careful balancing to make the tests effective. On top of this, E2E tests frequently run in sequence, which multiplies the failure rate: you're at the mercy of not just the worst test, but the product of every test in the chain.

I agree that the tests cause less flakiness in the app itself, but I have found smokes inherently flaky in a way that unit and integration tests are not.
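
To make the balancing act concrete, a typical smoke step in Playwright looks something like this (the page, selectors, and timeout are made up, not from a real suite):

```typescript
import { test, expect } from '@playwright/test';

test('checkout smoke', async ({ page }) => {
  await page.goto('/checkout');

  // Web-first assertions retry until they pass, so there's no fixed sleep here,
  // but the selector still has to survive copy changes.
  await expect(page.getByRole('heading', { name: 'Checkout' })).toBeVisible();

  // Custom form widgets often need explicit clicks that a plain fill() can't do.
  await page.getByLabel('Country').click();
  await page.getByRole('option', { name: 'Canada' }).click();

  await page.getByRole('button', { name: 'Place order' }).click();

  // A targeted timeout bump on the one slow step, rather than raising it globally.
  await expect(page.getByText('Order confirmed')).toBeVisible({ timeout: 15_000 });
});
```

The assertions retry on their own, which kills most hard-coded sleeps, but every selector and that one bumped timeout are still judgment calls that drift as the app changes.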

[–] kersplort 6 points 1 year ago (4 children)

My team has just decided to make working smokes a mandatory part of merging a PR. If the smokes don't pass on your branch, it doesn't merge to main. I'm somewhat conflicted - on one hand, we had frequent breaks in the smokes that developers didn't fix, including ones that represented real production issues. On the other, smokes can fail for no reason and are time-consuming to run.

We use Playwright, running on GitHub Actions. The default free-tier runner has been awful, and we're moving to larger runners on the platform. We have a retry policy on any smokes that need to run in a step-by-step order, and we aggressively prune smokes that fail frequently or don't test for real issues.
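
The retry policy itself is just Playwright config, something like this shape - the numbers are a judgment call rather than our literal settings:

```typescript
// playwright.config.ts - a sketch, not literal settings
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Retry flaky smokes on CI only, so a real break still shows up locally.
  retries: process.env.CI ? 2 : 0,
  // Stop a clearly broken run early instead of burning runner minutes.
  maxFailures: process.env.CI ? 5 : undefined,
  use: {
    // Keep a trace from the first retry to judge whether the failure was real.
    trace: 'on-first-retry',
  },
});
```

For the step-by-step smokes, `test.describe.configure({ mode: 'serial' })` in the spec file keeps the steps in order and skips the rest of the file after the first failure.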

34
submitted 1 year ago* (last edited 1 year ago) by kersplort to c/experienced_devs
 

End to end and smoke tests give a really valuable angle on what the app is doing and can warn you about failures before they happen. However, because they're working with a live app and a live database over a live network, they can introduce a lot of flakiness. Beyond just changes to the app, different data in the environment or other issues can cause a smoke test failure.

How do you handle the inherent flakiness of testing against a live app?

When do you run smokes? On every feature branch? Pre-prod? Prod only?

Who fixes the issues that the smokes find?

[–] kersplort 2 points 1 year ago* (last edited 1 year ago)

Get good at the three point turn.

  • Add the new code path/behavior. Release - this can be a minor version in semver.
  • Mark the old code path or behavior as deprecated. Release - this can be another minor version.
    • In between here, clean up any dependencies or give your users time to clean up.
  • Remove the old code path or behavior. Release. If you're using semver, this is the major version change.

This is a stable way to make changes on any system that has a dependency on another platform, repository, or system. It's good practice for anything on the web, as users may have logged-in or long-running sessions, and it works for systems that call each other and get released on different cadences.
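
Sketched on a single library function - the names, the User type, and the version numbers here are invented for illustration:

```typescript
// The three releases squeezed into one file for the sake of the example.
type User = { id: string; name: string };

// v1.3.0 - minor release: add the new code path alongside the old one.
export async function fetchUserById(id: string): Promise<User> {
  return { id, name: 'example' }; // stand-in for the real lookup
}

// v1.4.0 - minor release: the old path still works, but it's marked deprecated
// and now just delegates to the new one.
/** @deprecated Use fetchUserById instead; this will be removed in the next major version. */
export async function getUser(id: string): Promise<User> {
  return fetchUserById(id);
}

// v2.0.0 - major release: delete getUser entirely. Anyone still calling it gets
// a loud break at the version boundary instead of a silent behavior change.
```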

[–] kersplort 1 points 1 year ago

I know your point is that people should use real judgement, but that's a great line to draw for people who need it.

Is naming consistency important enough to break compatibility? No, absolutely not.

[–] kersplort 2 points 1 year ago

We use a little bit of property testing to test invariants with fuzzed data. Mutation testing seems like a neat inverse.
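
As a sketch of the kind of invariant I mean, using fast-check - the round-trip invariant and the record shape are just illustrative:

```typescript
import fc from 'fast-check';

// Invariant: serializing and then parsing a record gives the same record back,
// for any data the fuzzer generates.
fc.assert(
  fc.property(
    fc.record({ id: fc.uuid(), qty: fc.nat(), note: fc.string() }),
    (order) => {
      const roundTripped = JSON.parse(JSON.stringify(order));
      return JSON.stringify(roundTripped) === JSON.stringify(order);
    },
  ),
);
```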

[–] kersplort 2 points 1 year ago

I think the best thing to do with TDD is pair with or convince devs to try it for a feature. Coming at things test first can be novel and interesting, and it does train you to test and use tests better. Once people have tried it, I think it broadens your use of tests pretty well.

However, TDD can be a bit of a cult, and most smart and independent people (like people willing to work at a <20 person company) will notice that TDD isn't the silver bullet its proponents make it out to be.

[–] kersplort 1 points 1 year ago

YAML works better with git than JSON, but so much config work is copy-and-paste, and YAML is horrible at that.

Something where changing one line doesn't turn into a three-line diff, but that you could also copy off a website and paste in without breaking, would be great.

43
submitted 1 year ago* (last edited 1 year ago) by kersplort to c/experienced_devs
 

I'm like a test unitarian. Unit tests? Great. Integration tests? Awesome. End to end tests? If you're into that kind of thing, go for it. Coverage of lines of code doesn't matter. Coverage of critical business functions does. I think TDD can be a cult, but writing software that way for a little bit is a good training exercise.

I'm a senior engineer at a small startup. We need to move fast, ship new stuff fast, and get things moving. We've got CICD running mocked unit tests, integration tests, and end to end tests, with patterns and tooling for each.

I have support from the CTO for getting more testing in, I'm able to use tests to cover bugs and regressions, and there's solid coverage on a few critical user-path features. However, I get resistance from the team on adding enough testing to prevent regressions going forward.

The resistance is usually along lines like:

  • You shouldn't have to refactor to test something
  • We shouldn't use mocks, only integration testing works.
    • Repeat for test types N and M
  • We can't test yet, we're going to make changes soon.

How can I convince the team that the tools available to them will help - that they'll improve productivity and cut down on time spent firefighting?

29
What's your favorite CICD tool? (self.experienced_devs)
 

What's something you've gotten into your CICD pipeline recently that you like?

I recently automated a little bot for our GitHub CICD. It runs a few tests that we care about but don't want to block deployment, and posts the results on the PR. It uses gh pr comment --edit-last so it isn't spamming the channel. It's been pretty helpful in automating some of the more annoying parts of code review.

27
Site Stability (self.meta)
submitted 1 year ago by kersplort to c/meta
 

The site's been down in the morning for the last couple days. Running a new server that gets attention is tough - do the admins for this site need anything from this community? Volunteer time? Money?
