Is there a formal way (or ways) of quantifying potential flaws or risk, and ensuring there's a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure, or a risk assessment of some sort?

Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.

But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?

[email protected] · 2 points · 1 year ago

I use coverage tools like nyc/c8, but I can easily get 100% coverage on buggy, exploitable, and unstable code. You can have two projects, both with 100% coverage, where one is a shit show and the other is rock solid. So I was wondering if there's a way to measure the quality of the tests themselves, or to identify code that really needs extra attention despite already sitting at 100%. Mutation testing has been suggested and that's really interesting - I'm going to give it a go tomorrow and see what it throws up!
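
To make that concrete, here's a minimal sketch (the function and values are made up) of the kind of suite nyc/c8 will happily report as 100% covered even though the code is wrong:

```js
// isAdult.js - boundary bug: should be age >= 18, not age > 18
function isAdult(age) {
  return age > 18;
}

// Both calls below execute every line of isAdult, so a c8/nyc report
// shows 100% coverage - but age === 18 is never asserted, so the
// off-by-one bug goes completely unnoticed.
const assert = require("assert");
assert.strictEqual(isAdult(30), true);
assert.strictEqual(isAdult(10), false);
console.log("tests passed");
```

A mutation testing tool (e.g. StrykerJS for Node) would mutate that comparison - say `>` to `>=` - and the suite above would still pass. That surviving mutant points straight at the missing boundary test, which is exactly the test-quality signal raw coverage can't give you.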