While I largely agree with the net result, I think this mantra kind of misses the point and could easily lead to dogmatic programming.
Mocking external tools is bad because it indicates poor code and testing structure. We want our tests to be predictable, flagging issues only when something directly related to the code under test changes behavior. If a bug is introduced, ideally only one or a small handful of tests fail. If the network fails, ideally no tests fail. It's all about limiting the ways a test can fail to just the thing it's testing.
For example, if that client library starts expecting an extra argument (say, an auth token), every test that exercises a function making that call could fail if it isn't mocked, whereas with mocking, only the one or two tests that hit the API for real would fail.
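Here's a minimal sketch of that idea with `unittest.mock` (`ApiClient`, `get_username`, and `fetch_user` are made-up names for illustration):

```python
import unittest
from unittest import mock

# Hypothetical stand-in for a third-party client library we don't own.
class ApiClient:
    def fetch_user(self, user_id):
        raise RuntimeError("would hit the network")

client = ApiClient()

def get_username(user_id):
    # Code under test: a thin wrapper around the external client.
    return client.fetch_user(user_id)["name"]

class GetUsernameTest(unittest.TestCase):
    def test_returns_name_field(self):
        # Stub out the external call so this test fails only if
        # get_username itself changes behavior.
        with mock.patch.object(client, "fetch_user",
                               return_value={"name": "alice"}):
            self.assertEqual(get_username(42), "alice")

if __name__ == "__main__":
    unittest.main()
```

If `fetch_user` later grows a required auth-token parameter, only the handful of integration tests that call it for real break, not every test that happens to pass through `get_username`.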
This leaves some code untested, but that's fine because the gaps will be covered by integration testing. Unit testing tests units, and we want those units to be as small and independent as possible.
So it's not about whether you own the code; it's about being very clear about what it is you're testing. Let's say I have the following setup:
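```
a() -> b() -> c() -> external_library()
```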
In this case, I'd mock b when testing a, mock c when testing b, and mock external_library when testing c. It has nothing to do with whether I own it, only with whether I'm testing it. If I'm mocking external_library everywhere, that's a code smell that my code is too tightly coupled to that external library, and the same applies to b or c. The question isn't whether mocking an external thing is good or bad, but whether I want that coupling in my code.
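As a hedged sketch of that layering (the function bodies here are placeholders; only the call structure matters):

```python
import unittest
from unittest import mock

# Tiny stand-ins for the chain above (all names are illustrative).
def external_library_call():
    raise RuntimeError("would do real work")

def c():
    return external_library_call()

def b():
    return c()

def a():
    return b()

class ATest(unittest.TestCase):
    def test_a_returns_bs_result(self):
        # Testing a: stub out b, so changes to b, c, or the external
        # library can't break this test.
        with mock.patch(__name__ + ".b", return_value="stubbed") as mock_b:
            self.assertEqual(a(), "stubbed")
            mock_b.assert_called_once()

if __name__ == "__main__":
    unittest.main()
```

A test for b would do the same one level down, patching c; only the tests for c ever need to know the external library exists.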
Well-structured code doesn't need this "rule," and poorly structured code should be caught before getting to the point of writing tests (unless you're doing TDD, but that's a dev strategy, not a testing strategy per se).