this post was submitted on 16 Jun 2023

Actually Useful AI

OpenAI announced these API updates 3 days ago:

  • new function calling capability in the Chat Completions API
  • updated and more steerable versions of gpt-4 and gpt-3.5-turbo
  • new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
  • 75% cost reduction on our state-of-the-art embeddings model
  • 25% cost reduction on input tokens for gpt-3.5-turbo
  • announcing the deprecation timeline for the gpt-3.5-turbo-0301 and gpt-4-0314 models
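The updates above all surface through the existing Chat Completions API. As a rough sketch (assuming the `openai` Python package as it existed in mid-2023, with its `openai.ChatCompletion.create` interface), a request against the new 16k model just swaps in the new model name; the helper and prompt below are illustrative, not from the announcement:

```python
# Sketch of a Chat Completions request using the new 16k-context model.
# Assumes the mid-2023 openai Python package (v0.27-style API).
# The helper name and prompt are illustrative placeholders.

def build_16k_request(document: str) -> dict:
    """Build keyword arguments for openai.ChatCompletion.create()."""
    return {
        "model": "gpt-3.5-turbo-16k",  # new 16k-context variant
        "messages": [
            {"role": "system", "content": "Summarize the user's document."},
            {"role": "user", "content": document},
        ],
        "temperature": 0.2,
    }

# The actual call requires an API key and network access:
# import openai, os
# openai.api_key = os.environ["OPENAI_API_KEY"]
# response = openai.ChatCompletion.create(**build_16k_request(long_text))
# print(response["choices"][0]["message"]["content"])
```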
top 6 comments
[โ€“] sisyphean 2 points 1 year ago

gpt-3.5-turbo with the 16k context can now fit about 20 printed pages in its context. This is a game changer for summarization and documentation-based question answering applications. I tried it in the API playground and it works really well!
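A quick back-of-envelope check roughly supports the "about 20 printed pages" figure, assuming the common rules of thumb of ~0.75 English words per token and ~500 words per printed page (both are approximations, not exact figures):

```python
# Back-of-envelope check of the "about 20 printed pages" claim.
# Assumptions (rules of thumb, not exact figures):
#   ~0.75 English words per token, ~500 words per printed page.
context_tokens = 16_384
words_per_token = 0.75
words_per_page = 500

approx_words = context_tokens * words_per_token  # ~12,300 words
approx_pages = approx_words / words_per_page     # ~25 pages
print(f"~{approx_pages:.0f} pages fit in the 16k context")
```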

Function calling also seems very useful for tool-using apps. No more crossing fingers and hoping the LLM will return a syntactically valid call!
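The announced function calling flow works by sending the model a JSON-Schema description of your tools; when the model decides to use one, it returns a `function_call` object with the name and a JSON string of arguments, which your code executes locally. A minimal sketch of the client side (the weather tool and the faked assistant message below are hypothetical examples, not part of the API):

```python
import json

# Hypothetical local tool the model can "call" (stub for a real weather API).
def get_current_weather(location, unit="celsius"):
    return {"location": location, "temperature": 21, "unit": unit}

# JSON-Schema description sent in the `functions` field of the request.
WEATHER_FN = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

def dispatch(message):
    """If an assistant message contains a function_call, run it locally."""
    call = message.get("function_call")
    if call is None:
        return None  # plain text reply, nothing to execute
    args = json.loads(call["arguments"])  # arguments arrive as a JSON string
    if call["name"] == "get_current_weather":
        return get_current_weather(**args)
    raise ValueError(f"unknown function: {call['name']}")

# A message shaped like what the API returns when it decides to call the tool:
fake_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Berlin", "unit": "celsius"}',
    },
}
result = dispatch(fake_message)
```

In a real app you would pass `functions=[WEATHER_FN]` in the Chat Completions request, then send the tool's result back to the model in a follow-up message with `role="function"` so it can compose the final answer.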

[โ€“] [email protected] 0 points 1 year ago (1 children)

The only thing is, haven't we learned our lesson with Reddit? These proprietary APIs are not to be trusted. I don't think I would ever build anything, even at a hobby or experimental level, that relied on this.

[โ€“] Denaton 2 points 1 year ago* (last edited 1 year ago) (1 children)

Until someone trains a model (and it will happen) that matches or outperforms GPT-4 and that I can run locally, I will use this to experiment and prototype random stuff that I find interesting ^^

Edit: a big difference here too is that Reddit just fetches data from a database, so its API costs about as much to serve as the main site and app, while GPT is generative and uses quite a lot of RAM and VRAM per request.

[โ€“] [email protected] 1 points 1 year ago (1 children)

There are open source LLMs. I am not saying it is wrong to consume an LLM as a service; the issue is that OpenAI seems intent on not being very open.

[โ€“] Denaton 2 points 1 year ago* (last edited 1 year ago) (1 children)

Ah, I think there is a misunderstanding of their name: it's not open as in open source, it's open as in open research. They publish all their research for others to replicate. And yes, there are other models out there, but none as good as GPT-4. Unless you have a computer with 640 GB of RAM, you can't run it. So yeah, comparing fetching data from a database, which could be done on a Raspberry Pi, with generating data, which requires a monster computer, I understand that they wanna put a price on the API.

[โ€“] [email protected] 1 points 1 year ago

Well yeah, I do get that they conduct open research, but I still think it is disingenuous for the company not to release their source code. Or at least their LLM weights, if we are being somewhat charitable and allow that their specific tooling and API infrastructure should be proprietary so that they can maintain a business. There is no guarantee that the code running on the other end of their API adheres to any of the research that they have published!

I'm not too worried though, because other LLMs and parameter sets have gone open source, so the cat is already out of the bag. I also don't really believe in the commercial viability of LLMs, because there is no way to automatically verify that they are generating correct content, so whatever.