this post was submitted on 05 Jul 2023
23 points (96.0% liked)


I'm looking for an open-source alternative to ChatGPT which is community-driven. I have seen some open-source large language models, but they're usually still made by some organizations and published after the fact. Instead, I'm looking for one where anyone can participate: discuss ideas on how to improve the model, write code, or donate computational resources to build it. Is there such a project?

top 11 comments
[–] [email protected] 4 points 1 year ago

Have a look at this paper from MS research -> https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/

“Recent research has focused on enhancing the capability of smaller models through imitation learning, drawing on the outputs generated by large foundation models (LFMs). A number of issues impact the quality of these models, ranging from limited imitation signals from shallow LFM outputs; small scale homogeneous training data; and most notably a lack of rigorous evaluation resulting in overestimating the small model's capability as they tend to learn to imitate the style, but not the reasoning process of LFMs. To address these challenges, we develop Orca, a 13-billion parameter model that learns to imitate the reasoning process of LFMs. Orca learns from rich signals from GPT-4 including explanation traces; step-by-step thought processes; and other complex instructions, guided by teacher assistance from ChatGPT. To promote this progressive learning, we tap into large-scale and diverse imitation data with judicious sampling and selection. Orca surpasses conventional state-of-the-art instruction-tuned models such as Vicuna-13B by more than 100% in complex zero-shot reasoning benchmarks like Big-Bench Hard (BBH) and 42% on AGIEval. Moreover, Orca reaches parity with ChatGPT on the BBH benchmark and shows competitive performance (4 pts gap with optimized system message) in professional and academic examinations like the SAT, LSAT, GRE, and GMAT, both in zero-shot settings without CoT; while trailing behind GPT-4. Our research indicates that learning from step-by-step explanations, whether these are generated by humans or more advanced AI models, is a promising direction to improve model capabilities and skills.”

[–] [email protected] 1 points 1 year ago

The LMSYS group does some interesting benchmarks of a variety of LLMs: https://lmsys.org/blog/

[–] [email protected] 1 points 1 year ago

Something under a copyleft (reciprocal) license would be good. Does anybody know if one exists?

[–] Pizzarules668 1 points 1 year ago

I don't know if this is exactly what you're looking for, but Falcon LLM looks promising. I've never used it, but it may work.

[–] [email protected] 1 points 1 year ago

At this point, I'd like to see better regulation of how user data is used for training before the FOSS community takes this on. Ideally we'd see a regulatory bloodbath where AI training data is concerned (using other people's data or creations without explicit consent, and ultimately regurgitating that data, as LLMs do).

*I don't think we'll ever see sufficient regulation at all, but we should. The way the data is used, and the quantities needed, clearly call for it, in my view.

[–] [email protected] 1 points 1 year ago (2 children)

@lily33 Yes. I believe there are a number hosted on Hugging Face that fit these criteria. Bloom is the first one that comes to mind.

Ironically, I asked ChatGPT this question and it responded that I should check out EleutherAI. I don't know anything about that group, but it looks like they may have helped work on Bloom, so maybe they are worth considering. Anyway, here is Bloom:

https://huggingface.co/blog/bloom-megatron-deepspeed
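
If you want to try it yourself, here is a minimal sketch using Hugging Face's `transformers` library. The `bigscience/bloom-560m` checkpoint and the prompt are just my picks for illustration; it's one of the smaller public BLOOM checkpoints (the full 176B-parameter model needs far heavier hardware):

```python
# Minimal sketch: text generation with a small public BLOOM checkpoint.
# Assumes `pip install transformers torch`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # small checkpoint; the full model is 176B
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Open-source language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```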

[–] [email protected] 1 points 1 year ago

To add to this, here is another model that seems to aim to be a poor man's ChatGPT: https://huggingface.co/togethercomputer/GPT-NeoXT-Chat-Base-20B
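
If you want to poke at it, the sketch below follows the same `transformers` pattern. The `<human>:`/`<bot>:` turn format is my reading of the model card's convention, so treat it as an assumption, and note that the 20B checkpoint needs a large GPU (or offloading via `accelerate`) to run:

```python
# Sketch: chat-style prompting of GPT-NeoXT-Chat-Base-20B.
# Assumes `pip install transformers torch accelerate`; device_map="auto"
# spreads the 20B weights across whatever devices are available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "togethercomputer/GPT-NeoXT-Chat-Base-20B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Assumed turn format: "<human>: ... <bot>:" (per the model card, as I read it)
prompt = "<human>: What is a copyleft license?\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```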

[–] [email protected] 1 points 1 year ago (1 children)

HuggingFace looks to me like it's a corporation. For instance, when I click on "about > join us", I'm sent to their job listings page.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

@lily33 Like GitHub, HuggingFace is a private company that can host public models. I'm pretty sure this one is fully public. But you're right that it does look like someone from HF started it, so perhaps it does not meet your criteria after all. My apologies if so.

I was (am?) under the impression, though, that Bloom is being researched openly such that it can be reproduced locally (and contributed to on HF).