Akisamb

[–] Akisamb 3 points 11 months ago

Yes to your question, but that's not what I was saying.

Here is one of the most popular training datasets : https://pile.eleuther.ai/

If you look at the PDF describing the dataset, you'll find that the documents are fairly short: the mean length is under 20 kB (20,000 characters) for most of them.

You are asking the model to retain a memory for the whole duration of a discussion, which can be very long. If I chat for one hour I'll type approximately 8,400 words, or around 42 kB (at roughly five characters per word), which is longer than most documents in the training set. If I chat for 20 hours, it'll be longer than almost all the documents in the training set. The model needs to learn how to extract information from a long context, and it can't do that well if the documents it was trained on are short.

You are also right that the text is cut off during training. A value I often see is 2k to 8k tokens. This is arbitrary; some models are trained with a cutoff of 200k tokens. You can run models on context lengths longer than what they were trained on (with some caveats), but performance falls off badly.
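To make that concrete, here is a minimal sketch (my own illustration, not any particular training pipeline) of how a long document gets cut into fixed-size windows during training; the 2048-token window and the fake token IDs are assumptions for the example.

```python
# Minimal sketch of context-window truncation during training (illustrative only).
# Assumes token IDs are already available; real pipelines use a tokenizer such as BPE.

def chunk_into_windows(token_ids: list[int], context_length: int = 2048) -> list[list[int]]:
    """Split a tokenized document into fixed-size training windows.

    Anything shorter than the context length fits in one window;
    anything longer is cut into independent pieces, so the model
    never sees dependencies spanning more than `context_length` tokens.
    """
    return [
        token_ids[i : i + context_length]
        for i in range(0, len(token_ids), context_length)
    ]

# A 20 kB document is roughly 5k tokens: it fits in three 2k windows.
document = list(range(5000))  # stand-in for real token IDs
windows = chunk_into_windows(document, context_length=2048)
print([len(w) for w in windows])  # [2048, 2048, 904]
```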

[–] Akisamb 2 points 11 months ago (2 children)

There are two issues with large prompts. One is linked to current language-model technology, where computation time and memory usage scale quadratically with prompt length in standard attention. This is being addressed by projects such as RWKV or Mamba, but these remain unproven at large sizes (more than 100 billion parameters); somebody will have to spend some millions to train one.
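As a rough sketch of that scaling problem (my own toy numpy example, not any model's actual code): every token attends to every other token, so the attention score matrix alone grows with the square of the prompt length.

```python
# Naive self-attention cost (illustrative sketch only).
import numpy as np

def attention_scores(q: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Raw attention scores: one entry per (query token, key token) pair."""
    return q @ k.T  # shape (seq_len, seq_len): the quadratic term

# Small concrete example: 8 tokens, 4-dimensional embeddings.
seq_len, d_model = 8, 4
q = np.random.randn(seq_len, d_model)
k = np.random.randn(seq_len, d_model)
print(attention_scores(q, k).shape)  # (8, 8)

# The score matrix needs seq_len ** 2 entries (per head, per layer):
for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens -> {n * n:>15,} score entries")
# 100k tokens is 10 billion entries, i.e. roughly 40 GB in float32,
# before counting any other activations.
```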

The other issue will probably be harder to solve: there is far less high-quality long-context training data, since most datasets were built for small-context models.

[–] Akisamb 4 points 11 months ago* (last edited 11 months ago)

For folks who aren't sure how to interpret this, what we're looking at here is early work establishing an upper bound on the complexity of a problem that a model can handle based on its size. Research like this is absolutely essential for determining whether these absurdly large models are actually going to achieve the results people have already ascribed to them on any sort of consistent basis. Previous work on monosemanticity and superposition is relevant here, particularly with regard to unpacking where and when these errors will occur.

I'm not sure this work accomplishes that. Sure, it builds on previous work showing that a transformer can be simulated by a TC^0^ circuit family. However, the limits of this fact are not clear. The paper even admits as much:

Our result on the limitations of T-LLMs as general learners comes from Proposition 1 and Theorem 2. On the one hand, T-LLMs are within the TC^0^ complexity family; on the other hand, general learners require at least as hard as P/poly-complete. In the field of circuit theory, it is known that TC^0^ is a subset of P/poly and commonly believed that TC^0^ is a strict subset of P/poly, though the strictness is still an open problem to be proved.

I believe this is one of the weakest points of the paper, as it bases all of its reasoning on an unproven conjecture. And you can implement many things with TC^0^ circuits: addition, multiplication, basic logic; heck, you can even simulate transformers.
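To spell out the gap the quoted passage is pointing at (just my restatement, not a new result):

```latex
% Known inclusion versus the open separation the paper's argument needs.
\begin{align*}
  \mathsf{TC}^0 &\subseteq \mathsf{P/poly} && \text{(known)} \\
  \mathsf{TC}^0 &\stackrel{?}{\subsetneq} \mathsf{P/poly} && \text{(open; the separation the paper relies on)}
\end{align*}
```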

There is still something that bothers me. Why did it define general learning as requiring at least a universal circuit for the set of all circuits of polynomial size? Why this restriction? I tried googling "general learner" and "universal circuit" and only came up with this paper.

While searching, I found that this paper was rejected; you can find the reviews here: https://openreview.net/forum?id=e5lR6tySR7

If you are looking for a paper on the limits of T-LLMs, "What Algorithms can Transformers Learn? A Study in Length Generalization" may prove more informative: https://arxiv.org/pdf/2310.16028.pdf It explains why transformers are so bad at addition.

Here is the key part of their abstract:

Specifically, we leverage RASP (Weiss et al., 2021)— a programming language designed for the computational model of a Transformer— and introduce the RASP-Generalization Conjecture: Transformers tend to length generalize on a task if the task can be solved by a short RASP program which works for all input lengths.
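As a toy illustration of that conjecture (my own example, not taken from the paper): least-significant-digit-first addition is a short, length-independent program with a single carry bit, whereas emitting the most significant digit first means resolving every carry before producing the first output, which is the part transformers struggle to length-generalize on.

```python
# Toy illustration (not from the paper): why digit order matters for addition.
# Least-significant-digit first, each output digit needs only the current digit
# pair and one carry bit, a short program that works for any input length.
def add_lsd_first(a_digits: list[int], b_digits: list[int]) -> list[int]:
    """Add two numbers given as equal-length digit lists, least significant digit first."""
    carry, out = 0, []
    for da, db in zip(a_digits, b_digits):
        total = da + db + carry
        out.append(total % 10)
        carry = total // 10
    if carry:
        out.append(carry)
    return out

# 487 + 925 = 1412; digits are stored least significant first.
print(add_lsd_first([7, 8, 4], [5, 2, 9]))  # [2, 1, 4, 1]

# Emitting the most significant digit first instead would require resolving
# every carry before producing the first output token.
```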

[–] Akisamb 13 points 11 months ago

Didier Raoult, for a large part. He was the one who published the paper that really started this whole mess. His shoddy research practices and disregard for patients did plenty of harm.

Good thing that they've forced his retirement.

[–] Akisamb 6 points 11 months ago (1 children)

Hard to say from the article alone, but if it is like the status quo in the EU and USA, then only the obtaining of the training data can be illegal. If I have an AI that can recite the script of the Bee Movie verbatim, I will be sued.

Google Books had a similar issue. They scanned pretty much every book in existence and indexed them. Small issue: they did not obtain the consent of the copyright holders before doing this. They were sued and won. You can use copyrighted data as long as you do not provide access to it.

[–] Akisamb 10 points 11 months ago

To prevent people from becoming homeless?

[–] Akisamb 4 points 1 year ago (2 children)

Anti-corruption associations are mainly interested in the government, which is quite normal. I think the best thing is for you to judge their work yourself:

(source for their accreditation; I thought I had read the name of a third association, but I may be misremembering)

Incidentally, Transparency France is in favor of renewing Anticor's accreditation. I have not found Sherpa's position.

[–] Akisamb 7 points 1 year ago (4 children)

Three anti-corruption associations still hold this accreditation. What Anticor is criticized for is its political side: it is required to go after people from every party and is not allowed to align itself with any political organization.

On top of that, some aspects of its organization are a bit questionable, such as the anonymous donation of 80,000 euros.

That said, from what I have seen, the association seems to have made efforts to increase its transparency. Apparently the decision will be made by an administrative court, so we will get the final word on the story.

Faced with this refusal by the government, Anticor will in turn take the matter before the administrative judge. "In a way, we are relieved to finally be able to demonstrate that we do meet all the criteria for accreditation," the president says.

[–] Akisamb 6 points 1 year ago (1 children)
[–] Akisamb 2 points 1 year ago (1 children)

We discovered (…) in the press a number of extremely troubling things, notably that gendarmerie barracks, police stations and who knows what else were allegedly promised and negotiated in exchange for votes in support of this bill

Yuck, patronage politics. In the end it's the constituencies with the most annoying and least cooperative elected officials that end up with all the public services.

[–] Akisamb 7 points 1 year ago

I think it's healthy to have clear boundaries with coworkers; they are not the same thing as friends.

That said, I spend 41 hours a week working; there's no way I'm not going to socialise with my coworkers. If I haven't made any friends after several years of working at a place, I feel I've done something wrong.

[–] Akisamb 2 points 1 year ago (1 children)

Yes, though I'm not sure the majority would accept.
