otter

joined 1 year ago
[–] [email protected] 0 points 12 hours ago (1 children)

Wow nice, got the year perfectly on a few of those

[–] [email protected] 0 points 13 hours ago (1 children)

Now that they mention it, I would like step-by-step instructions on how a Fierce Burrito is made

Emovi #838 (emovi.teuteuf.fr)
[–] [email protected] 0 points 15 hours ago
🌎 Oct 31, 2024 🌍
🔥 1 | Avg. Guesses: 7.5
🟨🟧🟧🟥🟩 = 5

https://globle-game.com
#globle
[–] [email protected] 0 points 15 hours ago
I got Hexcodle #448 in 4! Score: 70%

🔽⏬✅⏬🔽🔼
🔽🔽✅⏬🔽🔼
✅🔽✅🔼✅✅
✅✅✅✅✅✅

https://hexcodle.com
[–] [email protected] 0 points 15 hours ago
🙂 Daily Quordle 1011
8️⃣7️⃣
4️⃣5️⃣
m-w.com/games/quordle/
⬜⬜⬜⬜⬜ ⬜⬜⬜⬜⬜
⬜⬜⬜🟨🟨 ⬜🟨⬜🟩⬜
⬜⬜🟨🟩⬜ ⬜⬜⬜⬜🟩
⬜🟨⬜⬜⬜ ⬜⬜⬜⬜⬜
⬜⬜🟨🟩⬜ ⬜⬜⬜⬜🟩
⬜⬜⬜🟨⬜ 🟩⬜⬜🟩🟩
⬜🟨⬜⬜⬜ 🟩🟩🟩🟩🟩
🟩🟩🟩🟩🟩 ⬛⬛⬛⬛⬛

🟨🟨⬜⬜⬜ ⬜⬜🟨⬜⬜
⬜⬜⬜⬜⬜ ⬜⬜⬜⬜⬜
🟩🟨🟨⬜⬜ ⬜⬜🟩🟩🟩
🟩🟩🟩🟩🟩 ⬜🟨⬜⬜⬜
⬛⬛⬛⬛⬛ 🟩🟩🟩🟩🟩

🙂 Daily Quordle 1011 (www.merriam-webster.com)
[–] [email protected] 0 points 15 hours ago
Connections
Puzzle #508
🟩🟩🟩🟨
🟩🟩🟩🟩
🟨🟨🟨🟨
🟦🟦🟪🟦
🟦🟦🟦🟦
🟪🟪🟪🟪
[–] [email protected] 1 point 16 hours ago (1 children)

Does anyone have more info on that? It has been a staple on my phones for almost a decade…

I've had a few apps disappear, and I assume either the app is very outdated / incompatible, or the developer pulled it. If you remember the developer, you could follow up and see what they say?

[–] [email protected] 2 points 19 hours ago (3 children)

I think it's just for comments?

I've been wanting the same for posts for some time now

[–] [email protected] 18 points 1 day ago

Viruses affect other things too, including bacteria! Bacteriophages are the first to come to mind

https://m.youtube.com/watch?v=SbvAaDN1bpE

Sorry to link to a video, but this recent Kurzgesagt video covered your question pretty closely :)

[–] [email protected] 4 points 1 day ago* (last edited 1 day ago)

French:

Disclosure statement

Raphaël Fischler is a member emeritus of the Ordre des urbanistes du Québec and a Fellow of the Canadian Institute of Planners. He received SSHRC and FRQSC funding in the past for his research in the history of urban planning.

[–] [email protected] 0 points 1 day ago
Strands #241
“How sweet!”
🔵🔵🔵🟡
🔵🔵

No hints today!

[–] [email protected] 0 points 1 day ago
Wordle 1,229 4/6

⬛⬛🟨⬛🟨
🟨🟩⬛🟩⬛
⬛🟩🟩🟩⬛
🟩🟩🟩🟩🟩
 

cross-posted from: https://lemmy.ca/post/31947651

definition: https://opensource.org/ai/open-source-ai-definition

endorsements: https://opensource.org/ai/endorsements

In particular, which tools meet the requirements and which ones don't:

As part of our validation and testing of the OSAID, the volunteers checked whether the Definition could be used to evaluate if AI systems provided the freedoms expected.

  • The models that passed the Validation phase are: Pythia (Eleuther AI), OLMo (AI2), Amber and CrystalCoder (LLM360) and T5 (Google).
  • There are a couple of others that were analyzed and would probably pass if they changed their licenses/legal terms: BLOOM (BigScience), Starcoder2 (BigCode), Falcon (TII).
  • Those that have been analyzed and don't pass because they lack required components and/or their legal agreements are incompatible with the Open Source principles: Llama2 (Meta), Grok (X/Twitter), Phi-2 (Microsoft), Mixtral (Mistral).

These results should be seen as part of the definitional process, a learning moment; they're not certifications of any kind. OSI will continue to validate only legal documents, and will not validate or review individual AI systems, just as it does not validate or review software projects.

 


cross-posted from: https://lemmy.ca/post/31913012

My thoughts are summarized by this line:

Casey Fiesler, Associate Professor of Information Science at University of Colorado Boulder, told me in a call that while it’s good for physicians to be discouraged from putting patient data into the open-web version of ChatGPT, how the Northwell network implements privacy safeguards is important—as is education for users. “I would hope that if hospital staff is being encouraged to use these tools, that there is some significant education about how they work and how it's appropriate and not appropriate,” she said. “I would be uncomfortable with medical providers using this technology without understanding the limitations and risks.”

It's good to have an AI model running on the internal network to help with emails and such. A model like Perplexity could be good for parsing research articles, as long as the user follows the links to check the sources.

It's not good to use it for tasks that traditional "AI" was already handling, because traditional AI doesn't hallucinate and doesn't require as much processing power.

It absolutely should not be used for diagnosis or insurance claims.

 
