sisyphean

joined 1 year ago
MODERATOR OF
[–] sisyphean 3 points 1 year ago (1 children)

Thank you! I posted it from my phone and I didn't see. I updated the post to use the direct link.

[–] sisyphean 1 points 1 year ago

Nice try /u/spez

[–] sisyphean 2 points 1 year ago

Wow, an actually good summary of what the problem is with Reddit

[–] sisyphean 1 points 1 year ago

There’s at least one in almost every paragraph of the introduction. He seems to be satisfied with himself a bit more than what is tasteful, but it’s hard to argue that the product is amazing.

[–] sisyphean 2 points 1 year ago (2 children)

It wasn’t easy, but finally I could reverse-engineer its algorithm:

function ask(question) {
    return "…";
}
[–] sisyphean 2 points 1 year ago

Maybe the next bot I write should be GoodBotBadBotBot

[–] sisyphean 4 points 1 year ago (1 children)

I think the incentives are a bit different here. If we can keep the threadiverse nonprofit, and contribute to the maintenance costs of the servers, it might stay a much friendlier place than Reddit.

[–] sisyphean 6 points 1 year ago (1 children)

We should do an AmA with her!

[–] sisyphean 2 points 1 year ago

Lemmy actually has a really good API. Moderation tools are pretty simple though.

[–] sisyphean 2 points 1 year ago

And it works in the other direction too:)

[–] sisyphean 3 points 1 year ago (2 children)

The repetition of the chorus emphasizes the singer’s unwavering dedication to the relationship.

I’m dying 🤣

[–] sisyphean 3 points 1 year ago (4 children)

@AutoTLDR please 😂

 
 
 

Here is the link to the example epubs:

https://github.com/mshumer/gpt-author/tree/main/example_novel_outputs

I’m not sure how I feel about this project.

 

TL;DR (by GPT-4 🤖):

The article titled "It’s infuriatingly hard to understand how closed models train on their input" discusses the concerns and lack of transparency surrounding the training data used by large language models like GPT-3, GPT-4, Google's PaLM, and Anthropic's Claude. The author expresses frustration over the inability to definitively state that private data passed to these models isn't being used to train future versions due to the lack of transparency from the vendors. The article also highlights OpenAI's policy that data submitted by API users is not used to train their models or improve their services. However, the author points out that the policy is relatively new and data submitted before March 2023 may have been used if the customer hadn't opted out. The article also brings up potential security risks with AI vendors logging inputs and the possibility of data breaches. The author suggests that openly licensed models that can be run on personal hardware may be a solution to these concerns.

 

cross-posted from: https://programming.dev/post/177822

It's coming along nicely, I hope I'll be able to release it in the next few days.

Screenshot:

How It Works:

I am a bot that generates summaries of Lemmy comments and posts.

  • Just mention me in a comment or post, and I will generate a summary for you.
  • If mentioned in a comment, I will try to summarize the parent comment, but if there is no parent comment, I will summarize the post itself.
  • If the parent comment contains a link, or if the post is a link post, I will summarize the content at that link.
  • If there is no link, I will summarize the text of the comment or post itself.

Extra Info in Comments:

Prompt Injection:

Of course it's really easy (but mostly harmless) to break it using prompt injection:

It will only be available in communities that explicitly allow it. I hope it will be useful, I'm generally very satisfied with the quality of the summaries.

 

cross-posted from: https://programming.dev/post/177822

It's coming along nicely, I hope I'll be able to release it in the next few days.

Screenshot:

How It Works:

I am a bot that generates summaries of Lemmy comments and posts.

  • Just mention me in a comment or post, and I will generate a summary for you.
  • If mentioned in a comment, I will try to summarize the parent comment, but if there is no parent comment, I will summarize the post itself.
  • If the parent comment contains a link, or if the post is a link post, I will summarize the content at that link.
  • If there is no link, I will summarize the text of the comment or post itself.

Extra Info in Comments:

Prompt Injection:

Of course it's really easy (but mostly harmless) to break it using prompt injection:

It will only be available in communities that explicitly allow it. I hope it will be useful, I'm generally very satisfied with the quality of the summaries.

49
submitted 1 year ago* (last edited 1 year ago) by sisyphean to c/auai
 

It's coming along nicely, I hope I'll be able to release it in the next few days.

Screenshot:

How It Works:

I am a bot that generates summaries of Lemmy comments and posts.

  • Just mention me in a comment or post, and I will generate a summary for you.
  • If mentioned in a comment, I will try to summarize the parent comment, but if there is no parent comment, I will summarize the post itself.
  • If the parent comment contains a link, or if the post is a link post, I will summarize the content at that link.
  • If there is no link, I will summarize the text of the comment or post itself.

Extra Info in Comments:

Prompt Injection:

Of course it's really easy (but mostly harmless) to break it using prompt injection:

It will only be available in communities that explicitly allow it. I hope it will be useful, I'm generally very satisfied with the quality of the summaries.

 

Link to original tweet:

https://twitter.com/sayashk/status/1671576723580936193?s=46&t=OEG0fcSTxko2ppiL47BW1Q

Screenshot:

Transcript:

I'd heard that GPT-4's image analysis feature wasn't available to the public because it could be used to break Captcha.

Turns out it's true: The new Bing can break captcha, despite saying it won't: (image)

 

cross-posted from: https://programming.dev/post/158037

This is a fascinating discussion of the relationship between goals and intelligence from an AI safety perspective.

I asked my trusty friend GPT-4 to summarize the video (I downloaded the subtitles and fed them into ChatGPT), but I highly recommend just watching the entire thing if you have the time.

Summary by GPT-4:

Introduction:

  • The video aims to respond to some misconceptions about the Orthogonality Thesis in Artificial General Intelligence (AGI) safety.
  • This arises from a thought experiment where an AGI has a simple goal of collecting stamps, which could cause problems due to unintended consequences.

Understanding 'Is' and 'Ought' Statements (Hume's Guillotine):

  • The video describes the concept of 'Is' and 'Ought' statements. 'Is' statements are about how the world is or will be, while 'Ought' statements are about how the world should be or what we want.
  • Hume's Guillotine suggests that you can never derive an 'Ought' statement using only 'Is' statements. To derive an 'Ought' statement, you need at least one other 'Ought' statement.

Defining Intelligence:

  • Intelligence in AGI systems refers to the ability to take actions in the world to achieve their goals or maximize their utility functions.
  • This involves having or building an accurate model of reality, using it to make predictions, and choosing the best possible actions.
  • These actions are determined by the system's goals, which are 'Ought' statements.

Are Goals Stupid?

  • Some commenters suggested that single-mindedly pursuing one goal (like stamp collecting) is unintelligent.
  • However, this only seems unintelligent from a human perspective with different goals.
  • Intelligence is separate from goals; it is the ability to reason about the world to achieve these goals, whatever they may be.

Can AGIs Choose Their Own Goals?

  • The video suggests that while AGIs can choose their own instrumental goals, changing terminal goals is rare and generally undesirable.
  • Terminal goals can't be considered "stupid", as they can't be judged against anything. They're simply the goals the system has.

Can AGIs Reason About Morality?

  • While a superintelligent AGI could understand human morality, it doesn't mean it would act according to it.
  • Its actions are determined by its terminal goals, not its understanding of human ethics.

The Orthogonality Thesis:

  • The Orthogonality Thesis suggests that any level of intelligence is compatible with any set of goals.
  • The level of intelligence is about effectiveness at answering 'Is' questions, and goals are about 'Ought' questions.
  • Therefore, it's possible to create a powerful intelligence that will pursue any specified goal.
  • The level of an agent's intelligence doesn't determine its goals and vice versa.
 

This is a fascinating discussion of the relationship between goals and intelligence from an AI safety perspective.

I asked my trusty friend GPT-4 to summarize the video (I downloaded the subtitles and fed them into ChatGPT), but I highly recommend just watching the entire thing if you have the time.

Summary by GPT-4:

Introduction:

  • The video aims to respond to some misconceptions about the Orthogonality Thesis in Artificial General Intelligence (AGI) safety.
  • This arises from a thought experiment where an AGI has a simple goal of collecting stamps, which could cause problems due to unintended consequences.

Understanding 'Is' and 'Ought' Statements (Hume's Guillotine):

  • The video describes the concept of 'Is' and 'Ought' statements. 'Is' statements are about how the world is or will be, while 'Ought' statements are about how the world should be or what we want.
  • Hume's Guillotine suggests that you can never derive an 'Ought' statement using only 'Is' statements. To derive an 'Ought' statement, you need at least one other 'Ought' statement.

Defining Intelligence:

  • Intelligence in AGI systems refers to the ability to take actions in the world to achieve their goals or maximize their utility functions.
  • This involves having or building an accurate model of reality, using it to make predictions, and choosing the best possible actions.
  • These actions are determined by the system's goals, which are 'Ought' statements.

Are Goals Stupid?

  • Some commenters suggested that single-mindedly pursuing one goal (like stamp collecting) is unintelligent.
  • However, this only seems unintelligent from a human perspective with different goals.
  • Intelligence is separate from goals; it is the ability to reason about the world to achieve these goals, whatever they may be.

Can AGIs Choose Their Own Goals?

  • The video suggests that while AGIs can choose their own instrumental goals, changing terminal goals is rare and generally undesirable.
  • Terminal goals can't be considered "stupid", as they can't be judged against anything. They're simply the goals the system has.

Can AGIs Reason About Morality?

  • While a superintelligent AGI could understand human morality, it doesn't mean it would act according to it.
  • Its actions are determined by its terminal goals, not its understanding of human ethics.

The Orthogonality Thesis:

  • The Orthogonality Thesis suggests that any level of intelligence is compatible with any set of goals.
  • The level of intelligence is about effectiveness at answering 'Is' questions, and goals are about 'Ought' questions.
  • Therefore, it's possible to create a powerful intelligence that will pursue any specified goal.
  • The level of an agent's intelligence doesn't determine its goals and vice versa.
 

This video shows a really nice and clear example of refactoring an anemic domain model into a rich one.

view more: ‹ prev next ›