Holy shit, switching to PyArrow is going to make me seem like a mystical wizard when I merge in the morning. I’ve easily halved the execution time of a horrible but unavoidable job (yay, crappy vendor “API” that returns a huge CSV).
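For anyone curious, the core of the change is tiny. A minimal sketch, assuming a plain local CSV (the filename and the pandas conversion are illustrative, not the actual job):

import pyarrow.csv as pv

# pyarrow's CSV reader is a multithreaded C++ implementation,
# typically several times faster than pandas.read_csv on big files.
table = pv.read_csv("vendor_dump.csv")  # hypothetical filename

# Only convert to pandas if you need DataFrame semantics downstream.
df = table.to_pandas()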
You and me both. I've been parsing CSVs with around 10-100 million rows lately and... this will hopefully help.
LOL just use fscanf() you silly goose
Okay, so would it be faster to convert it to something better and then do the work on that better format?
Edit: I guess looking at the numbers, they're already pretty low there. Idk how much faster it'd really be and whether or not it'd be worth doing.
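For reference, the one-time conversion is only a few lines with pyarrow. A rough sketch, with made-up paths:

import pyarrow.csv as pv
import pyarrow.parquet as pq

# Parse the CSV once, write it back out as Parquet.
table = pv.read_csv("huge_dump.csv")        # hypothetical path
pq.write_table(table, "huge_dump.parquet")

# Every later read hits the Parquet file instead of re-parsing the CSV.
table = pq.read_table("huge_dump.parquet")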
What’s even the “gold standard” for logging stuff I guess?
structlog. Or just Structured Logging in general.
Don't do:
logging.info(f"{something} happened!")
But do (with a structlog logger, since the stdlib logging.info doesn't accept arbitrary keyword arguments):
log = structlog.get_logger()
log.info("thing-happened", thing=something)
Why? Your event becomes a category, which makes it easily searchable/findable. You can output either human-readable lines (the typical {date} {loglevel} {event}) or straight-up JSONL (one JSON object/dict per line). If you have JSON logs you can use jq to query/filter/manipulate them, and if you have something like ELK, you can ship your logs there and build dashboards.
It's amazing - though it may break your brain initially.
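Getting the JSONL output takes only a little configuration. A minimal sketch (this processor chain is one reasonable setup, not the only one):

import structlog

# Render every event as one JSON object per line (JSONL),
# ready for jq, ELK, or anything else that speaks JSON.
structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer(),
    ]
)

log = structlog.get_logger()
log.info("thing-happened", thing="backup")
# prints roughly: {"thing": "backup", "event": "thing-happened", "level": "info", "timestamp": "..."}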
That really depends on how much of it you're doing. If you're just handling a few at a time, the difference between 0.1s and 3s isn't that big of a deal. If you're handling thousands or even millions in a day, making it more efficient can be an order-of-magnitude cost saving.
We use CSVs at work, but it's not a common thing, so we just use the built-in csv library. If we did more with it, pandas would be the way to go (or maybe we'd rewrite that service in Rust).
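For the occasional file, the stdlib reader really is plenty. A minimal sketch (the filename is hypothetical):

import csv

# The stdlib reader is fine for occasional, modest-sized files.
with open("report.csv", newline="") as f:  # hypothetical filename
    for row in csv.DictReader(f):          # each row is a dict keyed by the header
        print(row)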
Also, regarding better formats: Parquet is relatively nice. Smaller files, though not human-readable. Use Parquet if you read the data often or have I/O issues (the file is "too large" as CSV).
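Part of why Parquet reads are cheap: it's columnar, so you can load just the columns you need instead of scanning the whole file. A sketch with pyarrow (file and column names invented):

import pyarrow.parquet as pq

# Only these two columns are read; the rest of the file is never touched.
table = pq.read_table("data.parquet", columns=["order_id", "total"])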