sisyphean

joined 1 year ago

Starting today, all paying API customers have access to GPT-4. In March, we introduced the ChatGPT API, and earlier this month we released our first updates to the chat-based models. We envision a future where chat-based models can support any use case. Today we’re announcing a deprecation plan for older models of the Completions API, and recommend that users adopt the Chat Completions API.

[–] sisyphean 2 points 1 year ago

If you are interested in AI safety - whether you agree with the recent emphasis on it or not - I recommend watching at least a couple of videos by Robert Miles:

https://www.youtube.com/@RobertMilesAI

His videos are very enjoyable and interesting, and he presents a compelling argument for taking AI safety seriously.

Unfortunately, I haven't found such a high-quality source presenting arguments for the opposing view. If anyone knows of one, I encourage them to share it.

6
submitted 1 year ago by sisyphean to c/auai
 

Some interesting quotes:

  1. LLMs do both of the things that their promoters and detractors say they do.
  2. They do both of these at the same time on the same prompt.
  3. It is very difficult from the outside to tell which they are doing.
  4. Both of them are useful.

When a search engine is able to do this, it is able to compensate for a limited index size with intelligence. By making reasonable inferences about what page text is likely to satisfy what query text, it can satisfy more intents with fewer documents.

LLMs are not like this. The reasoning that they do is inscrutable and massive. They do not explain their reasoning in a way that we can trust is actually their reasoning, and not simply a textual description of what such reasoning might hypothetically be.
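The query-document matching the excerpt describes can be made concrete with a toy ranker. A bag-of-words cosine score is a fully transparent, inspectable stand-in: you can see exactly why a document matched. This is a deliberately crude sketch — real engines use learned models whose inferences generalize far beyond shared words, which is exactly the inscrutability the quote is about:

```python
from collections import Counter
from math import sqrt

def bag(text):
    """Bag-of-words representation: word counts, case-folded."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = bag("fix a flat bicycle tire")
docs = {
    "bikes": bag("how to repair a flat tire on your bicycle"),
    "pasta": bag("how to cook pasta al dente"),
}
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # bikes
```

Every scoring decision here can be traced to specific shared words — the opposite of the "inscrutable and massive" reasoning inside an LLM.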

@AutoTLDR

 

If you are like me, and you didn't immediately understand why people rave about Copilot, these simple examples by Simon Willison may be useful to you:

 

We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we’ve secured to date to this effort. We’re looking for excellent ML researchers and engineers to join us.

@[email protected]

[–] sisyphean 1 points 1 year ago

Thanks, I hate it

Seriously, in what way is this jumbled mess at the top of the page better than the table?

 
 
[–] sisyphean 2 points 1 year ago

This looks great!

[–] sisyphean 1 points 1 year ago

LLMs can do a surprisingly good job even if the text extracted from the PDF isn't in the right reading order.

Another thing I've noticed is that figures are usually explained thoroughly in the text, so the model doesn't need to see them to generate a good summary. Human communication is very redundant, and we don't even realize it.
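The reading-order point can be sketched with a toy example. PDF extractors (pdfminer, for instance) typically expose text blocks with page coordinates; a simple top-to-bottom, left-to-right sort recovers an approximate reading order for single-column layouts. The `(x, y, text)` block format here is an assumption for illustration:

```python
# Toy sketch: recover an approximate reading order from positioned text
# blocks, as a PDF extractor might return them. y grows downward.

def reading_order(blocks, line_tolerance=5):
    """Sort (x, y, text) blocks top-to-bottom, then left-to-right.

    Blocks whose y coordinates fall in the same line_tolerance band
    are treated as one line and ordered by x.
    """
    ordered = sorted(blocks, key=lambda b: (round(b[1] / line_tolerance), b[0]))
    return " ".join(text for _, _, text in ordered)

blocks = [
    (10, 100, "second line."),   # further down the page
    (200, 10, "world, and a"),   # first line, right of "Hello"
    (10, 10, "Hello"),           # first line, leftmost
]
print(reading_order(blocks))  # Hello world, and a second line.
```

Multi-column layouts break this simple heuristic — which is where the observation above kicks in: the LLM often copes even when the extracted order is wrong.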

[–] sisyphean 3 points 1 year ago

Oh finally. Sorry everyone for this train wreck of a thread.

[–] sisyphean 1 points 1 year ago (2 children)
[–] sisyphean 1 points 1 year ago* (last edited 1 year ago) (4 children)

Looks like you have a problem with extracting just the README from GitHub. Let's see if you can read the raw link: https://raw.githubusercontent.com/0xpayne/gpt-migrate/main/README.md
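The URL rewrite above is mechanical, so here is a minimal sketch of it — the branch name and filename are assumptions (repos may use `master` or a different README name):

```python
def raw_readme_url(repo_url, branch="main", filename="README.md"):
    """Turn a github.com repo URL into the raw.githubusercontent.com
    URL for its README. Branch and filename are assumed defaults."""
    owner_repo = repo_url.rstrip("/").removeprefix("https://github.com/")
    return f"https://raw.githubusercontent.com/{owner_repo}/{branch}/{filename}"

print(raw_readme_url("https://github.com/0xpayne/gpt-migrate"))
# → https://raw.githubusercontent.com/0xpayne/gpt-migrate/main/README.md
```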

[–] sisyphean 2 points 1 year ago (6 children)
 

I haven't tried this yet, but I have a feeling that it would fail for anything nontrivial. Nevertheless, the concept is very interesting, and as soon as I get API access to GPT-4, I will try it.

I've recently ported a library from TypeScript to Python with the help of ChatGPT (GPT-4), and it took me about a day. It would be interesting to run this tool on the same codebase and compare the results.

If anyone has GPT-4 API access, I would really appreciate it if they tried running this tool on something simple and wrote about the result in the comments.

[–] sisyphean 2 points 1 year ago

Seems like it isn’t:

the same technology under the hood of Google Translate

[–] sisyphean 3 points 1 year ago

This is incredible, thanks for sharing it!

[–] sisyphean 4 points 1 year ago (1 children)

If I remember correctly, the properties the API returns are comment_score and post_score.

 

@AutoTLDR

[–] sisyphean 6 points 1 year ago (3 children)

Lemmy does have karma: it is stored in the DB, and the API returns it. It just isn't displayed in the UI.
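Summing the karma is then a one-liner over the counts the API returns. The field names `post_score` and `comment_score` come from the comment above; the surrounding response shape (`person_view.counts`) is an assumption based on Lemmy's v3 user endpoint:

```python
import json

# Hypothetical excerpt of a user-details API response; only the
# post_score / comment_score fields are taken from the discussion above.
response = json.loads("""
{
  "person_view": {
    "counts": {"post_score": 42, "comment_score": 17}
  }
}
""")

counts = response["person_view"]["counts"]
karma = counts["post_score"] + counts["comment_score"]
print(karma)  # 59
```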

 

As of July 3, 2023, we’ve disabled the Browse with Bing beta feature out of an abundance of caution while we fix this in order to do right by content owners. We are working to bring the beta back as quickly as possible, and appreciate your understanding!

 

I looked it up (on Google of course) and it seems like this is one of Google's recruitment channels.

You get access to a terminal and a text editor, along with a list of commands you can execute.

You have a week to complete each challenge. I've done two of them so far and requested the third; they have been very enjoyable, and I've already learnt a lot from them.

I'm pretty sure I have literally zero chance of being hired by Google (and I'm not even sure I would want to work for them, even if they made the mistake of wanting to hire me), but this has been super interesting so far. And yeah, it's also a huge time waster: I've spent all day thinking about making my solution to the third challenge more elegant and performant instead of doing my actual job.

 

Some interesting quotes:

Computers were very rigid and I grew up with a certain feeling about what computers can or cannot do. And I thought that artificial intelligence, when I heard about it, was a very fascinating goal, which is to make rigid systems act fluid. But to me, that was a very long, remote goal. It seemed infinitely far away. It felt as if artificial intelligence was the art of trying to make very rigid systems behave as if they were fluid. And I felt that would take enormous amounts of time. I felt it would be hundreds of years before anything even remotely like a human mind would be asymptotically approaching the level of the human mind, but from beneath.

But one thing that has completely surprised me is that these LLMs and other systems like them are all feed-forward. It's like the firing of the neurons is going only in one direction. And I would never have thought that deep thinking could come out of a network that only goes in one direction, out of firing neurons in only one direction. And that doesn't make sense to me, but that just shows that I'm naive.

It also makes me feel that maybe the human mind is not so mysterious and complex and impenetrably complex as I imagined it was when I was writing Gödel, Escher, Bach and writing I Am a Strange Loop. I felt at those times, quite a number of years ago, that as I say, we were very far away from reaching anything computational that could possibly rival us. It was getting more fluid, but I didn't think it was going to happen, you know, within a very short time.
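The "firing only in one direction" the quote marvels at is just function composition: at inference time, each layer transforms the previous layer's output and nothing flows backward. A minimal pure-Python sketch with made-up weights (real models add attention, normalization, and far more layers):

```python
import math

def layer(weights, biases, inputs):
    """One feed-forward layer: weighted sum, bias, then a nonlinearity."""
    return [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

def forward(network, inputs):
    """Activations flow strictly in one direction through the layers."""
    for weights, biases in network:
        inputs = layer(weights, biases, inputs)
    return inputs

# Tiny two-layer network; the weights here are arbitrary.
network = [
    ([[0.5, -0.2], [0.1, 0.9]], [0.0, 0.1]),  # layer 1: 2 inputs -> 2 units
    ([[1.0, -1.0]], [0.0]),                   # layer 2: 2 inputs -> 1 unit
]
print(forward(network, [1.0, 0.5]))
```

There is no feedback loop anywhere in `forward` — the surprise Hofstadter describes is that stacking enough of these one-way transformations produces behavior that looks like deliberation.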

 

Interesting discussion on HN.
