Firefox
The latest news and developments on Firefox and Mozilla, a global non-profit that strives to promote openness, innovation and opportunity on the web.
You can subscribe to this community from any Kbin or Lemmy instance.
Related
- Firefox Customs: [email protected]
- Thunderbird: [email protected]
Rules
While we are not an official Mozilla community, we have adopted the Mozilla Community Participation Guidelines as far as they can be applied to a kbin community.
- Always be civil and respectful: Don't be toxic, hostile, or a troll, especially towards Mozilla employees. This includes gratuitous use of profanity.
- Don't be a bigot: No form of bigotry will be tolerated.
- Don't post security-compromising suggestions: If you do, include an obvious and clear warning.
- Don't post conspiracy theories: Especially ones about nefarious intentions or funding. If you're concerned, ask; please don't fuel conspiracy thinking here. Don't try to spread FUD, especially against reliable privacy-enhancing software. Extraordinary claims require extraordinary evidence, so show credible sources.
- Don't accuse others of shilling: Send honest concerns to the moderators and/or admins, and we will investigate.
- Do not remove your help posts after they receive replies: Half the point of asking questions in a public sub is so that everyone can benefit from the answers, which is impossible if you delete everything once you've gotten yours.
After reading this article, I predict a ChatGPT-style AI feature will be built into Firefox in the future. This could be very useful if done right. I just hope there is an OFF switch for anyone who doesn't want to use it.
@[email protected] @[email protected] local LLM execution times can be very fast on recent consumer hardware. There's no need to send anything anywhere; just like their translation feature, it can all be done on-device.
As an example, with no optimization or GPU support, my @[email protected] (AMD) generates around 5 characters/sec from a 4 gigabyte pre-quantized model.
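For anyone curious how a characters-per-second figure like that is typically measured, here is a minimal Python sketch. The `generate` function is a hypothetical stand-in for a call into a local runtime (e.g. a quantized on-device model); the timing logic around it is the actual measurement:

```python
import time

def generate(prompt: str) -> str:
    # Hypothetical stand-in for an on-device LLM call; a real setup
    # would invoke a local runtime loaded with a quantized model.
    time.sleep(0.05)  # simulate a small amount of inference latency
    return "The quick brown fox jumps over the lazy dog. " * 20

start = time.perf_counter()
output = generate("Explain on-device LLM inference.")
elapsed = time.perf_counter() - start

# Throughput in characters per second, the unit used in the comment above.
chars_per_sec = len(output) / elapsed
print(f"{len(output)} chars in {elapsed:.2f}s -> {chars_per_sec:.0f} chars/sec")
```

With a real model the same loop applies: time the full generation, divide output length by elapsed seconds.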