this post was submitted on 09 Jul 2023

Actually Useful AI


Welcome! ๐Ÿค–

Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, "actually useful" for developers and enthusiasts alike.

Be an active member! ๐Ÿ””

We highly value participation in our community. Whether it's asking questions, sharing insights, or sparking new discussions, your engagement helps us all grow.

What can I post? ๐Ÿ“

In general, anything related to AI is acceptable. However, we encourage you to strive for high-quality content.

What is not allowed? ๐Ÿšซ

General Rules ๐Ÿ“œ

Members are expected to engage in on-topic discussions, and exhibit mature, respectful behavior. Those who fail to uphold these standards may find their posts or comments removed, with repeat offenders potentially facing a permanent ban.

While we appreciate focus, a little humor and off-topic banter, when tasteful and relevant, can also add flavor to our discussions.

Related Communities ๐ŸŒ

General

Chat

Image

Open Source

Please message @[email protected] if you would like us to add a community to this list.

Icon base by Lord Berandas under CC BY 3.0 with modifications to add a gradient

varsock · 4 points · 1 year ago

I used this yesterday without realizing it is unable to use the comment I am replying to as context, and I looked real stupid in front of all my new Lemmy friends 😭 lmao

ruffsl · 2 points · 1 year ago

It would be kind of cool for the bot to consider the entire chain of comments up to and including the reply that pings the bot. Perhaps that feature could be gated behind a command argument, like when commanding bot accounts from GitHub comments for CI/CD. That could guard against unintentional prompt injection or dilution of context in longer reply chains. Thoughts @[email protected] ?
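A rough sketch of what that opt-in gating could look like, assuming the bot has the ancestor comments as a list of strings (the flag name, function, and budget are all hypothetical, not anything the bot actually implements):

```python
def build_prompt(chain, summon_comment, max_context_chars=4000):
    """Build the model prompt for a bot mention (hypothetical sketch).

    chain: ancestor comments, oldest first.
    The reply chain is only included when the user opts in with a
    '--with-context' flag, and newer ancestors are kept first so older
    comments are dropped when the rough character budget runs out.
    """
    include_context = "--with-context" in summon_comment
    parts = [summon_comment.replace("--with-context", "").strip()]
    if include_context:
        budget = max_context_chars - len(parts[0])
        context = []
        for comment in reversed(chain):  # walk from newest ancestor back
            if len(comment) > budget:
                break  # budget exhausted; drop this and anything older
            context.append(comment)
            budget -= len(comment)
        parts = list(reversed(context)) + parts
    return "\n\n".join(parts)
```

With the flag absent the bot would behave exactly as it does today, so existing replies stay cheap and unaffected.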

[email protected] · 3 points · 1 year ago

I'd like that as well, but my problem with that is that it can rack up costs real quick, because you're billed for every token in the conversation. I once managed to incur a cost of $5 in a single conversation of about 37 messages.
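The cost grows so fast because each request re-sends the whole history, so total billed tokens scale roughly quadratically with the number of messages. A back-of-the-envelope sketch (the message size and price are made-up assumptions, not the bot's actual numbers):

```python
def conversation_cost(n_messages, tokens_per_message=500, price_per_1k=0.002):
    """Estimate API cost when every request re-sends all prior messages.

    Request i processes roughly i * tokens_per_message tokens, so the
    total is tokens_per_message * (1 + 2 + ... + n), i.e. O(n^2) in the
    message count. Illustrative numbers only.
    """
    total_tokens = tokens_per_message * n_messages * (n_messages + 1) // 2
    return total_tokens * price_per_1k / 1000
```

Under these assumed numbers, a 37-message conversation costs more than ten times a 10-message one, which is why including whole reply chains by default would be risky.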

I'm planning for the bot to support custom API keys, meaning every user could potentially provide their own key and pay for their own responses; in that case, using the whole chain would be possible. But as long as I'm paying for everyone's responses on Lemmy, that's unlikely to happen.
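One way that per-user key scheme could be wired up (entirely hypothetical names and logic, not the bot's actual code): users who register their own key pay for themselves, and everyone else falls back to the operator's key, which is refused once a spending cap is hit.

```python
def resolve_api_key(username, user_keys, operator_key, spent, limit):
    """Pick the API key to bill for a response (hypothetical scheme).

    user_keys: mapping of username -> their registered key, if any.
    Users with their own key pay for their own usage; others share the
    operator's key, which stops working once the spending cap is reached.
    """
    key = user_keys.get(username)
    if key is not None:
        return key  # user pays for their own responses
    if spent >= limit:
        raise RuntimeError("operator spending cap reached")
    return operator_key
```

The cap check is one simple way to implement the kind of spending safeguard mentioned below; a real bot would also want per-user rate limits.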

varsock · 1 point · 1 year ago

I didn't know you paid for it 😯. Thank you 🥹 If I were in a position to contribute, I would. For now I'll just refrain from using it, so I don't drive up your costs.

[email protected] · 3 points · 1 year ago

Feel free to use it, that's why I made it! I have measures in place to make sure I don't spend more than I'm willing to.