I do this, too, and I've been wishing there were a setting I could set where kbin would just auto-hide content submitted by accounts that have been blocked by at least X other accounts.
ignirtoq
If you're on a desktop (or other large screen), click on the user name to go to their user page and there's a block button in the sidebar on the right. If you're on a mobile device (or other small screen), go to their user page and the block button should be prominent, to the right of the "follow" button.
I think a critical detail getting overlooked in the broader discussion of the changes brought by LLM AI is quantity, not quality. What I mean is, sure, AI isn't going to replace any one complete worker. There are vanishingly few jobs AI can 100% take over. But it can do 80% of a few jobs, 50% of more jobs, and 20% of a lot of jobs.
So at the company level, where you had to hire 100 workers to do something, now you only need 80, or 50, or 20. That's still individual people who are out of their entire job because AI did some or most of it, and their bosses consolidated the rest of the responsibilities onto the remaining workers.
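The arithmetic in that paragraph can be written out as a trivial sketch (the 100-worker company and the automation fractions are just the illustrative numbers from above):

```python
def remaining_workers(total, fraction_automated):
    """Workers still needed when AI takes over a fraction of the total work."""
    return round(total * (1 - fraction_automated))

# A 100-person team where AI does 20%, 50%, or 80% of the work:
for frac in (0.2, 0.5, 0.8):
    print(remaining_workers(100, frac))  # 80, 50, 20
```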
"Calls" and "puts" are types of contracts about buying/selling stocks (they aren't the stock themselves but are centered around a given stock and its trading price, so they are called "derivatives" as they are "derived" from the stock).
A put is a contract that allows the buyer of the contract to sell stock at an agreed-upon price to the seller of the contract, regardless of the current trading price. They are used for a variety of reasons. In one usage, someone buying the stock at the current trading price may also buy a "put" on it at a slightly lower price. This way, they spend a little more money up front, but if the trading price plummets, they can still sell at that slightly lower "put" price and not lose too much money.
In this case, the idea would be to buy a "put" (without buying the stock at the same time) when the buyer thinks the stock's trading price is overvalued. Then, when the price falls below the put's agreed-upon value, buy the stock at the lower market price and immediately invoke the contract to sell it at the put's higher price.
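That payoff can be sketched with a quick calculation. All the numbers here (strike price, premium, share count) are made up for illustration; real contracts have more moving parts:

```python
def put_profit(strike, premium, market_price, shares=100):
    """Profit from a put: buy shares at the market price, exercise the
    contract to sell them at the strike, minus the premium paid up front."""
    exercise_gain = max(strike - market_price, 0) * shares
    return exercise_gain - premium

# Hypothetical: a put with a $90 strike, bought for a $200 premium.
# The stock falls to $70, so buy 100 shares at $70 and sell them at $90:
print(put_profit(strike=90, premium=200, market_price=70))  # 1800

# If the stock never falls below the strike, the put expires worthless
# and the buyer is out only the premium:
print(put_profit(strike=90, premium=200, market_price=95))  # -200
```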
Whether or not data was openly accessible doesn’t really matter [...] ChatGPT also isn’t just reading the data at its source, it’s copying it into its training dataset, and that copying is unlicensed.
Actually, the act of copying a work covered by copyright is not itself illegal. If I check out a book from a library and copy a passage (or the whole book!) for rereading or some other use strictly limited to myself, that's actually legal. If I turn around and share that passage with a friend in a way that's not covered under fair use, that's illegal. It's the act of distributing the copy that's illegal.
That's why whether the AI model is publicly accessible does matter. A company is considered a "person" under copyright law. So OpenAI can scrape all the copyrighted works off the internet it wants, as long as it didn't break laws to gain access to them. (In other words, articles freely available on CNN's website are free to be copied (but not distributed), but if you circumvent the New York Times' paywall to get articles you didn't pay for, then that's not legal access.) OpenAI then encodes those copyrighted works in its models' weights. If it provides open access to those models, and people execute these attacks to recover pristine copies of copyrighted works, that's illegal distribution. If it keeps access only for employees, and they execute attacks that recover pristine copies of copyrighted works, that's keeping the copies within the use of the "person" (company), so it is not illegal. If they let their employees take the copyrighted works home for non-work use (or to use the AI model for non-work use and recover the pristine copies), that's illegal distribution.
It doesn't have to have a copy of all copyrighted works it trained from in order to violate copyright law, just a single one.
However, this does bring up a very interesting question that I'm not sure the law (either textual or common law) is established enough to answer: how easily accessible does a copy of a copyrighted work have to be from an otherwise openly accessible data store in order to violate copyright?
In this case, you can view the weights of a neural network model as that data store. As the network trains on a data set, some human-inscrutable portion of that data is encoded in those weights. The argument has been that, because it's only a "portion" of the data covered by copyright being encoded in the weights, and because the weights are some irreversible combination of all such "portions" from all of the training data, you cannot use the trained model to recreate a pristine chunk of the copyrighted training data of sufficient size to be protected under copyright law. Attacks like this show that not to be the case.
However, attacks like this seem only able to recover random chunks of training data. So someone can't take a body of training data, insert a specific copyrighted work in the training data, train the model, distribute the trained model (or access to the model through some interface), and expect someone to be able to craft an attack to get that specific work back out. In other words, it's really hard to orchestrate a way to violate someone's copyright on a specific work using LLMs in this way. So the courts will need to decide if that difficulty has any bearing, or if even just a non-zero possibility of it happening is enough to restrict someone's distribution of a pre-trained model or access to a pre-trained model.
Here I thought they meant women in domestic abuse situations who are still trying to get out. Like discussed here.
I think you are right that optimising engineering cost is the goal of these practices, but I believe it is a bad thing.
In the end the only people that benefit from this are the owners of the product [...]
Yes, that's exactly how the for-profit software industry (and really any for-profit industry) is run. The owners maximize their benefit. If you want to change that, that's a much different problem on a much larger scale, but you will not see a for-profit company do anything but that.
I thought the point of "clean code" was to make a software source code base comprehensible and maintainable by the people who are in charge of working with and deploying the code. When you optimize for people reading the code rather than for some performance metric, I would expect performance improvements when you switch to performance optimization. The trade-off is code that's more performant but more difficult to read, with interdependence and convolution between use cases. That makes it harder to update, which means changes are slower and more costly (in engineering resources).
In a lot of modern software, you don't need extreme performance. In the fields that do, you'll find guidelines and other resources that explain which paradigms to avoid and which are outright forbidden. For example, I have heard of C++ exceptions and object-oriented features being forbidden in aircraft control software, for many of the reasons outlined in this article. But not everyone writes aircraft control code, so rather than saying clean code is "good" or clean code is "bad," like a lot of things this should be "it depends on your needs."
That sounds more like a modern reinterpretation of "protecting religion from the state." The context of the origin of the separation of church and state in the late 18th century was more about religious adherence being closely tied to political power, so you could deal your political opponents harm by branding them an adherent of a socially outcast religion, or you could use political power to (legally) persecute the followers of a non-state religion. Yes, it was about protecting religion from the state, but in the more concrete terms of protecting the followers of non-state-backed religions, rather than preventing some kind of philosophical corruption of the moral foundations of the religion.
The way posts are shared between instances is by user subscription. For example, if there's a community on Lemmy and a Kbin user subscribes to it, Kbin will then receive new posts from that Lemmy instance for that community.
So if no one on an instance is subscribed to that community, new posts won't flow to that instance. And then if you do subscribe to it, the instance will only automatically receive new posts. Federation will not back-fill older content.
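A toy model of that subscription-based flow might look like this. This is an illustrative sketch, not the actual ActivityPub implementation, and the class and community names are made up:

```python
class Instance:
    """Toy model of a fediverse instance that hosts communities."""
    def __init__(self, name):
        self.name = name
        self.timeline = []     # (community, post) pairs this instance has
        self.subscribers = {}  # community -> set of remote instances

    def subscribe(self, community, remote):
        """A user on `remote` subscribes to `community` hosted here.
        Note: no back-fill -- `remote` receives nothing until a new post."""
        self.subscribers.setdefault(community, set()).add(remote)

    def new_post(self, community, post):
        """A new post is created; push it only to subscribed instances."""
        self.timeline.append((community, post))
        for remote in self.subscribers.get(community, set()):
            remote.timeline.append((community, post))

lemmy = Instance("lemmy.example")
kbin = Instance("kbin.example")

lemmy.new_post("news", "old post")   # kbin isn't subscribed yet: not delivered
lemmy.subscribe("news", kbin)
lemmy.new_post("news", "new post")   # delivered to kbin

print(kbin.timeline)  # [('news', 'new post')] -- the old post never arrives
```

The key behavior the sketch captures is the last line: only posts created after the subscription show up, matching the "no back-fill" point above.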
Yeah, and the article is wrong, though only slightly. They seem to be confusing watts (power: energy per unit time) with joules (energy: power times a duration of time). They give a passable definition in the beginning ("energy transfer"), but they seem to misunderstand what the "transfer" part means exactly.
If you find-replace all instances of "watt" with "watt-hour" after that starting definition, it would be more accurate. That's why I say it's only slightly wrong.
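The distinction is easy to make concrete with a quick calculation (the 60 W device here is just a made-up example):

```python
def energy_wh(power_watts, hours):
    """Energy (watt-hours) = power (watts) * time (hours)."""
    return power_watts * hours

# A 60 W device describes a *rate* of energy transfer. Run it for 2 hours
# and you've transferred 120 Wh of energy:
print(energy_wh(60, 2))         # 120 Wh

# 1 Wh = 3600 joules, so that's 432,000 J of energy:
print(energy_wh(60, 2) * 3600)  # 432000 J
```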