DrNeurohax
I've seen nothing about neurodivergent people being harassed here.
Harassing assholes? Sure. Perhaps you have trouble separating the two?
Even better, this must be fantastic when you're training AI models with millions of images. The compression level AND performance should be a game changer.
Yeah, that looks more reasonable. The original graph makes it look like there have been ~5x the number of deaths in the last few years compared to ~10 years ago. Adjusted for population growth, it's ~2-3x.
That's still really concerning and makes the point the article was making, while being much more accurate and defensible when scrutinized. Thanks for that!
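Just to put numbers on that adjustment (all figures below are made up, purely to show the arithmetic, not the actual AZ data):

```python
# Hypothetical counts ~10 years apart, just to illustrate the adjustment.
deaths_then, deaths_now = 100, 500            # raw counts: a "5x increase"
pop_then, pop_now = 3_000_000, 5_500_000      # population at each point

raw_ratio = deaths_now / deaths_then                   # 5.0x
rate_then = deaths_then / pop_then * 100_000           # deaths per 100k
rate_now  = deaths_now  / pop_now  * 100_000
adjusted_ratio = rate_now / rate_then                  # ~2.7x

print(f"raw: {raw_ratio:.1f}x, per capita: {adjusted_ratio:.1f}x")
```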
Thank you so much for this! It reminded me to revisit my library's general resources and look specifically for which archive collections they had available. I'm 1 state over, so I figured there was a good chance we would have Newspapers.com Library Edition access here.
The main/default collection my library sent me to was no help, but they had a Newspapers.com Library Edition portal listed further down. Final-fucking-ly got it. I really, really appreciate the help.
I'm sure it's a fine service if you want to use it regularly, but I just wanted 1 tiny thing. If they had a $1-per-obit or per-page deal, sure. Instead, there's this whole microcosm of bullshit where some papers are archived, others available, some omitted from public collections, some on different 3rd-party sites, etc.
The family paid for an obit. It wasn't in the 1800s. The paper has been digitized. I should be able to go to the paper with the name, exact date, and city and find it. They literally say it doesn't exist. Not that it's on our archive site or our partner site, just nothing.
I would have thrown a couple bucks at any of the sites for access, but no: I'd need to sign up for a subscription, hand over all my details, and get spam calls for the next 100 years. Just no. Super frustrating.
Exactly. I stumbled across this report from the AZ Dept of Health, which breaks it down to deaths per 100k people, and the data still supports the author's point. The report then goes on to divide up the population by age, residents vs. visitors, county, etc.
Hell, the FT author could have just included a plot of the population growth, which was pretty linear. Not great, but better than nothing.
Grinds my gears.
Just thought I'd add this report from the AZ health department. It breaks down the factors MUCH better and comes to a similar, though not quite as extreme, conclusion. Only part of it is normalized for population, but it gives an idea of how to scale the numbers.
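If anyone wants to do the scaling themselves, it's basically one line once you have a population series. Sketch below with placeholder numbers (NOT the report's data):

```python
import numpy as np

# Placeholder series: raw yearly deaths plus a roughly linear population
# curve like the one mentioned above. Swap in the report's real numbers.
deaths = np.array([80, 90, 95, 110, 140, 180, 250, 320, 400, 430])
population = np.linspace(6.4e6, 7.3e6, deaths.size)   # made-up endpoints

rate_per_100k = deaths / population * 100_000
print(np.round(rate_per_100k, 1))
```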
Yes. Hot air is thinner, so there's less lift on aircraft wings. There's actually a conversion they're supposed to use that basically says, "At this temp, treat the plane as if it's actually at this other, much higher, altitude."
Here's one of the recent videos I've seen mentioning it (around 5 min in they mention the "density altitude"). I'm not a pilot and just find the stuff interesting.
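For the curious, the rule of thumb I've seen quoted is roughly 120 ft of density altitude per degree C above standard temperature. Again, I'm not a pilot, so treat this as a back-of-the-envelope sketch, not the exact formula:

```python
def density_altitude_ft(pressure_alt_ft: float, oat_c: float) -> float:
    """Rough density altitude estimate, NOT the exact E6B computation."""
    # Standard (ISA) temp: 15 degC at sea level, dropping ~2 degC per 1000 ft.
    isa_temp_c = 15.0 - 2.0 * pressure_alt_ft / 1000.0
    # Each degC above standard adds roughly 120 ft of "effective" altitude.
    return pressure_alt_ft + 120.0 * (oat_c - isa_temp_c)

# Phoenix-ish example: a field near 1,100 ft on a 45 degC day.
print(density_altitude_ft(1100, 45))  # ~4,960 ft -- the wings "think" they're way up there
```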
I'm not advocating for better or worse. In the end, the data shows what it shows. I'm just saying that there was essentially no "analysis", making any interpretation inappropriate.
Hey, more people should survive, thanks to newer medical treatments and a greater concentration of population around cities.
On the flip side, there's a larger portion of the population that's older and from out of state.
In between, there's the chance that the threat of heat-related health problems is much diminished thanks to widespread access to air conditioning. But that also means more people haven't had firsthand experience with heat exhaustion/stroke and don't realize how quickly things can go from kinda bad to dead.
Yeah, it can be as simple as the death certificates requiring only a primary cause of death.
Old man collapses from a heart attack while trying to change a tire on a hot desert road? Cause of death: heart attack. If more details are requested, they could probably get away with just claiming age-related health issues. The guy is dead, no foul play, the case is closed.
Oh, I've just been toying around with Stable Diffusion and some general ML tidbits. I was just thinking from a practical point of view. From what I read, it sounds like the files are smaller at the same quality, require the same or less processor load (maybe), are tuned for parallel I/O, can be encoded and decoded faster (with less difference in performance between the two), and support progressive loading. I'm kinda waiting for the catch, but haven't seen any major downsides, besides less optimal performance for very low resolution images.
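If you want to sanity-check the size claim yourself, libjxl ships a reference encoder, cjxl. Quick sketch (the -q value is my guess at a "visually similar" setting, and "sample.png" is a stand-in for your own file; benchmark your own data):

```python
import os
import subprocess

# Assumes libjxl's cjxl tool is on PATH.
subprocess.run(["cjxl", "sample.png", "sample.jxl", "-q", "90"], check=True)

png_kb = os.path.getsize("sample.png") / 1024
jxl_kb = os.path.getsize("sample.jxl") / 1024
print(f"PNG: {png_kb:.0f} KB -> JXL: {jxl_kb:.0f} KB")
```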
I don't know how they ingest the image data, but I would assume they'd be constantly building sets, rather than keeping lots of subsets, if only for the space savings of de-duplication.
(I kinda ramble below, but you'll get the idea.)
Mixing and matching the speed/efficiency and storage improvements could mean a whole bunch of wins. I/O is always an annoyance in any large-set analysis. With JPEG XL, there's less storage needed (duh), more images in RAM at once, faster transfer to and from disk, fewer cycles wasted waiting on I/O in general, the ability to store more intermediate datasets and more descriptive models, easier archiving of the raw photo sets (which might be a big deal with all the legal issues popping up), etc. You want to cram a lot of data into memory, since the GPU will be performing lots of operations in parallel. Accessing the I/O bus must be one of the larger time sinks, and CPU load becomes a concern just for moving data around.
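As a concrete (hypothetical) picture of that pipeline: one de-duplicated .jxl store, decoded on the fly by parallel workers. I'm assuming the imagecodecs binding for decoding here, and I haven't profiled any of this:

```python
from pathlib import Path

import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset
from imagecodecs import jpegxl_decode  # libjxl binding; swap in whatever you use

class JxlFolder(Dataset):
    """One de-duplicated store of .jxl files, decoded on the fly."""

    def __init__(self, root: str):
        self.paths = sorted(Path(root).glob("*.jxl"))

    def __len__(self) -> int:
        return len(self.paths)

    def __getitem__(self, i: int) -> torch.Tensor:
        # Smaller files = less disk/bus traffic per sample; the bet is that
        # decode cost stays cheap enough to keep the GPU fed.
        arr = jpegxl_decode(self.paths[i].read_bytes())  # HWC uint8 array
        return torch.from_numpy(np.ascontiguousarray(arr))

# Parallel workers overlap decode with GPU compute.
# (Default batching assumes all images are the same size.)
loader = DataLoader(JxlFolder("images/"), batch_size=64, num_workers=8)
```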
I also wonder if the support for progressive loading might be useful for more efficient, low resolution variants of high resolution models. Just store one set of high res images and load them in progressive steps to make smaller data sets. Like, say you have a bunch of 8k images, but you only want to make a website banner based on the model from those images. I wonder if it's possible to use the progressive loading support to halt reading in the images at 1k. Lower resolution = less model data = smaller datasets to store or transfer. Basically skipping the downsampling.
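Back-of-the-envelope on that idea (pure speculation on my part about how the passes are laid out in the stream):

```python
# Speculative: if a progressive stream is ordered coarse-to-fine, an 8x
# downsampled pass (8k -> 1k) covers ~1/64 of the pixels, so ideally you'd
# stop reading after a small slice of the file. Real JXL pass sizes differ.
full_res, target_res = 8192, 1024
pixel_fraction = (target_res / full_res) ** 2
print(f"~{pixel_fraction:.1%} of the pixels")  # ~1.6%
```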
Any time I see a big feature jump, like better file size, I assume the trade-off in another feature negates at least half the benefit. It's pretty rare, from what I've seen, to have improvements on all fronts.