Hopps

joined 1 year ago
[–] [email protected] 5 points 1 year ago

On Android, the Firefox app supports extensions, including all the adblock options you would find on PC. It works great, including for YouTube.

[–] [email protected] 3 points 1 year ago

Poltergeist

[–] [email protected] 18 points 1 year ago

According to the lore, demons are fallen angels, so you can keep this narrative going.

[–] [email protected] 8 points 1 year ago

I had Lyme disease and it sucked; I felt like a zombie walking through a sick, foggy dream world until I got antibiotics. Luckily I noticed pretty quickly that it was from a deer tick. I hope your pupper gets treated and has a full recovery 😁

[–] [email protected] 2 points 1 year ago (1 children)

Haha, I can hear the sad music now while slaying away at those Elder Willows. There was something almost meditative about collecting those DBs.

At one point I set up a bot to collect them. He grinded away so long on those that I swear he reached somewhere in the 90s for his level.

[–] [email protected] 2 points 1 year ago (3 children)

Your username gave me some major nostalgia. I think I played iRO nearly 20 years ago! Even rolled a Super Novice lol

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago)

omg this got me off hard*

FTFY

[–] [email protected] 12 points 1 year ago (1 children)

You're correct, the entire system is already in place. The only thing currently missing is adding up all of someone's 'karma' from their posts and showing it on their profile. Some of the apps already have this implemented since it's easy to incorporate.
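
The aggregation itself is just a sum over fetched scores. Here is a toy sketch, where fetch_user_posts() is a hypothetical stand-in for an instance API call, not a real Lemmy endpoint:

```python
# Toy sketch: profile "karma" as the sum of a user's post scores.
def fetch_user_posts(username: str) -> list[dict]:
    # Hypothetical stand-in for a call to the instance's API;
    # returns canned sample data here so the sketch runs as-is.
    return [{"score": 5}, {"score": 18}, {"score": 12}]

def profile_karma(username: str) -> int:
    return sum(post["score"] for post in fetch_user_posts(username))

print(profile_karma("Hopps"))  # -> 35
```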

[–] [email protected] 3 points 1 year ago

This is a great summary of the fall of Elon Musk. I admired him too, years ago, until he went off the rails. RIP to the better decisions this man could have made and the benefits they would have brought.

[–] [email protected] 22 points 1 year ago

I hope there's an option to see upvotes and downvotes separately rather than just the total. That's one thing I've liked about Jerboa so far that other apps aren't doing.

[–] [email protected] 5 points 1 year ago

Do you also have a bird named Kazooie?

[–] [email protected] 19 points 1 year ago (1 children)

Always has bean

 

cross-posted from: https://lemmy.world/post/937094

  • There are no circulating copies of the album online and it cannot be commercially exploited until 2103, but it can be played at listening parties.

  • It took about six years to record, with features from the entire Wu-Tang Clan, Redman, Cher, and even FC Barcelona soccer players and a Game of Thrones actress.

  • The album is unique with only one physical copy in existence, making it the most expensive work of music ever sold.

  • The album was bought for $2 million by Martin Shkreli, then CEO of Turing Pharmaceuticals, who later lost it when his assets were seized following his conviction for securities fraud.

  • In 2021, it was bought by PleasrDAO, a non-fungible token (NFT) collectors' group, for $4 million to cover Shkreli's debts. PleasrDAO hopes to make it more accessible within the confines of 'listening parties'.

 

TLDR Summary:

  • MIT researchers developed a 350-million-parameter self-training entailment model to enhance smaller language models' capabilities, outperforming larger models with 137 to 175 billion parameters without human-generated labels.

  • The researchers enhanced the model's performance using 'self-training,' where it learns from its own predictions, reducing human supervision and outperforming models like Google's LaMDA, FLAN, and GPT models.

  • They developed an algorithm called 'SimPLE' to review and correct noisy or incorrect labels generated during self-training, improving the quality of self-generated labels and model robustness.

  • This approach addresses inefficiency and privacy issues of larger AI models while retaining high performance. They used 'textual entailment' to train these models, improving their adaptability to different tasks without additional training.

  • By reformulating natural language understanding tasks like sentiment analysis and news classification as entailment tasks, the model's range of applications was expanded (a minimal sketch of this reformulation follows the list).

  • While the model showed limitations in multi-class classification tasks, the research still presents an efficient method for training large language models, potentially reshaping AI and machine learning.
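
For intuition, here is a minimal sketch of the entailment reformulation. It uses the off-the-shelf roberta-large-mnli checkpoint as a stand-in, since the MIT model itself isn't linked here; treat it as an illustration of the idea, not the authors' code.

```python
# Minimal sketch: sentiment analysis recast as textual entailment.
# Stand-in NLI model; NOT the article's 350M-parameter MIT model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # assumption: any off-the-shelf NLI checkpoint works
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that the premise entails the hypothesis."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    return probs[2].item()  # roberta-large-mnli order: contradiction, neutral, entailment

def classify_sentiment(text: str) -> str:
    # The reformulation: each candidate label becomes a hypothesis,
    # and the label whose hypothesis is most strongly entailed wins.
    hypotheses = {
        "positive": "This text expresses a positive sentiment.",
        "negative": "This text expresses a negative sentiment.",
    }
    return max(hypotheses, key=lambda label: entailment_prob(text, hypotheses[label]))

print(classify_sentiment("The battery died after two days. Avoid."))  # -> negative
```

The same trick extends to news classification or any labeled task: one hypothesis per label, no task-specific retraining needed.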

 

 

TLDR summary:

  1. Researchers at MIT and Tufts University have developed an AI model called ConPLex that can screen over 100 million drug compounds in a day to predict their interactions with target proteins. This is much faster than existing computational methods and could significantly speed up the drug discovery process.

  2. Most existing computational drug screening methods calculate the 3D structures of proteins and drug molecules, which is very time-consuming. The new ConPLex model uses a language model to analyze amino acid sequences and drug compounds and predict their interactions without needing to calculate 3D structures.

  3. The ConPLex model was trained on a database of over 20,000 proteins to learn associations between amino acid sequences and structures. It represents proteins and drug molecules as numerical representations that capture their important features. It can then determine if a drug molecule will bind to a protein based on these numerical representations alone.

  4. The researchers enhanced the model using a technique called contrastive learning, training it to distinguish real drug-protein interactions from decoys that look similar but do not actually interact. This makes the model less likely to predict false interactions (a toy sketch of this setup follows the list).

  5. The researchers tested the model by screening 4,700 drug candidates against 51 protein kinases. Experiments confirmed that 12 of the 19 top hits had strong binding, including 4 with extremely strong binding. The model could be useful for screening drug toxicity and other applications.

  6. The new model could significantly reduce drug failure rates and the cost of drug development. It represents a breakthrough in predicting drug-target interactions and could be further improved by incorporating more data and molecular generation methods.

  7. The model and data used in this research have been made publicly available for other scientists to use.
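
For intuition, here is a hypothetical PyTorch sketch of the co-embedding and decoy-contrast ideas from points 3 and 4. The featurizers are stubbed with random tensors, and all names (CoEmbedder, EMB_DIM, the dimensions) are invented for illustration; the actual ConPLex code is the authors' public release, not this.

```python
# Hypothetical sketch of a ConPLex-style co-embedding screen.
# Real featurizers (protein language model, drug fingerprints) are stubbed out.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM = 256  # assumption: shared embedding size

class CoEmbedder(nn.Module):
    """Project drug and protein features into one shared space."""
    def __init__(self, drug_dim: int, prot_dim: int):
        super().__init__()
        self.drug_proj = nn.Linear(drug_dim, EMB_DIM)
        self.prot_proj = nn.Linear(prot_dim, EMB_DIM)

    def forward(self, drug_feats, prot_feats):
        d = F.normalize(self.drug_proj(drug_feats), dim=-1)
        p = F.normalize(self.prot_proj(prot_feats), dim=-1)
        return d, p

def binding_score(d, p):
    # Cosine similarity in the shared space stands in for "will it bind?",
    # so scoring millions of pairs needs no 3D structure calculation.
    return (d * p).sum(dim=-1)

def contrastive_loss(d, p_real, p_decoy, margin: float = 0.3):
    # Push real drug-target pairs closer than look-alike decoys by a margin.
    return F.relu(margin - binding_score(d, p_real) + binding_score(d, p_decoy)).mean()

# Toy usage with random stand-in features (2048-bit fingerprint, 1024-d protein embedding).
model = CoEmbedder(drug_dim=2048, prot_dim=1024)
drug = torch.randn(8, 2048)
target, decoy = torch.randn(8, 1024), torch.randn(8, 1024)
d, p = model(drug, target)
_, q = model(drug, decoy)
print(contrastive_loss(d, p, q).item())
```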

 


AI Translates 5000-Year-Old Cuneiform (www-timesofisrael-com.cdn.ampproject.org)
 

A team from Israel has developed an AI model that translates cuneiform, a 5,000-year-old writing system, into English within seconds. The model, developed at Tel Aviv University, uses neural machine translation (NMT) and achieves fairly good accuracy. Despite the age and complexity of the language, the AI was trained successfully and can now help uncover the mysteries of the past. You can try an early demo of the model on The Babylon Engine; its source code is available on GitHub (Akkademia) and on Colaboratory.
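
For a feel of the pipeline, here is a generic seq2seq NMT sketch for transliterated Akkadian using the Hugging Face transformers API. The checkpoint path is a placeholder, not the actual Akkademia release; see the project's GitHub for the real model.

```python
# Generic seq2seq NMT sketch: transliterated Akkadian -> English.
# The checkpoint path is a placeholder, NOT the actual Akkademia release.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL = "path/to/akkadian-english-nmt"  # hypothetical fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

transliteration = "szum-ma a-wi-lum ..."  # Akkadian transliteration, truncated
inputs = tokenizer(transliteration, return_tensors="pt")
output_ids = model.generate(**inputs, num_beams=4, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```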
