sisyphean

joined 1 year ago
[–] sisyphean 3 points 1 year ago

It doesn't work yet; the screenshots are from a private test instance.

[–] sisyphean 4 points 1 year ago* (last edited 1 year ago) (4 children)

It is definitely possible, at least for videos that have a transcript. There are tools to download the transcript which can be fed into an LLM to be summarized.
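As a minimal sketch of the idea: once a tool has downloaded the transcript (typically as a list of timed text segments), building the summarization prompt is straightforward. The `build_summary_prompt` helper and the segment format below are illustrative assumptions, not any specific tool's API.

```python
def build_summary_prompt(segments, max_chars=12000):
    """Join transcript segments (dicts with a 'text' key, the shape most
    transcript-download tools return) into a single summarization prompt."""
    transcript = " ".join(seg["text"] for seg in segments)
    # Crude length cap so the prompt fits the model's context window;
    # a real bot would count tokens instead of characters.
    transcript = transcript[:max_chars]
    return ("Summarize the following video transcript as a TL;DR "
            "with a few bullet points:\n\n" + transcript)

segments = [{"text": "Welcome to the video."},
            {"text": "Today we cover LLMs."}]
prompt = build_summary_prompt(segments)
```

The resulting string is then sent to the LLM as a single user message.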

I tried it here with excellent results: https://programming.dev/post/158037 - see the post description!

See also the conversation: https://chat.openai.com/share/b7d6ac4f-0756-4944-802e-7c63fbd7493f

I used GPT-4 for this post, which is miles ahead of GPT-3.5, but it would be prohibitively expensive (for me) to use it for a publicly available bot. I also asked it to generate a longer summary with subheadings instead of a TLDR.

The real question is whether it is legal to programmatically download video transcripts this way. But technically it is entirely possible, even easy.

[–] sisyphean 4 points 1 year ago (5 children)

It does unfortunately, see here:

https://openai.com/pricing

I limited it to 100 summaries / day, which adds up to about $20 (USD) per month if the input is 3000 tokens long and the answer is 1000.
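The arithmetic behind that estimate can be sketched like this. The per-token prices below are assumptions (roughly the mid-2023 gpt-3.5-turbo rates); always check the pricing page for current numbers.

```python
# Assumed prices, USD per token (gpt-3.5-turbo, mid-2023 rates - verify!)
PRICE_IN = 0.0015 / 1000
PRICE_OUT = 0.002 / 1000

def monthly_cost(summaries_per_day, in_tokens, out_tokens, days=30):
    """Estimated monthly API cost for a summarizer bot."""
    per_call = in_tokens * PRICE_IN + out_tokens * PRICE_OUT
    return summaries_per_day * days * per_call

# 100 summaries/day, 3000 input tokens + 1000 output tokens each
cost = monthly_cost(100, 3000, 1000)  # roughly $20/month
```

With GPT-4's much higher per-token prices, the same workload would cost an order of magnitude more, which is why the bot uses the cheaper model.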

Using it for personal things (I built a personal assistant chatbot for myself) is very cheap. But if you use it in anything public, it can get expensive quickly.

[–] sisyphean 3 points 1 year ago* (last edited 1 year ago) (2 children)

It doesn't work yet; the screenshots are from a test Lemmy instance.

[–] sisyphean 12 points 1 year ago* (last edited 1 year ago)

I’m glad that this post has reached 7 upvotes. I’m looking forward to participating in the this_is_an_example community!

[–] sisyphean 5 points 1 year ago* (last edited 1 year ago) (2 children)

Or when it tells you that it can do something it actually can't, and it hallucinates like crazy. In the early days of ChatGPT I asked it to summarize an article at a link, and it gave me a very believable but completely false summary based on the words in the URL.

This was the first time I saw wild hallucination. It was astounding.

[–] sisyphean 4 points 1 year ago

Your job is to do your tasks in the most efficient way possible. You actually harm the company by doing unnecessary busywork instead of using the best tools available.

[–] sisyphean 1 points 1 year ago (2 children)

That’s a problem for sure. And if someone has a display name, someone else can create a user with the same avatar and display name on another instance, and pretend to be them.

[–] sisyphean 2 points 1 year ago

“Only” 58, all of them topics of hyperfocus forgotten after a couple of weeks.

[–] sisyphean 2 points 1 year ago (1 children)

> I feel like most of what he talks about is common knowledge now.

You would be surprised how uncommon this knowledge is, and how many developers I introduced to domain modeling by sending them this video :)

> What we do requires continuous attention to detail. We sometimes get tired or lose focus. And that may result in poor quality code.

This is definitely true. I think maintaining and adding features to existing software is a lot like gardening. There are always tiny chores to do, you need to be constantly reorganizing small parts of the garden, there are always new opportunities for small improvements, and if you neglect doing them for a while, the problems add up, and the entire thing ends up looking messy and terrible to work with.

[–] sisyphean 3 points 1 year ago

Thank you, it's really cool, I like it!

 

OpenAI announced these API updates 3 days ago:

  • new function calling capability in the Chat Completions API
  • updated and more steerable versions of gpt-4 and gpt-3.5-turbo
  • new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
  • 75% cost reduction on our state-of-the-art embeddings model
  • 25% cost reduction on input tokens for gpt-3.5-turbo
  • announcing the deprecation timeline for the gpt-3.5-turbo-0301 and gpt-4-0314 models
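To illustrate the new function calling capability: the request gains a `functions` list of JSON-schema descriptions, and the model can respond with a structured call instead of plain text. The `get_weather` function below is a made-up example; the snippet only builds the request payload and makes no API call.

```python
import json

# Hypothetical function schema (the "get_weather" name and its fields
# are invented for illustration) described as JSON Schema.
functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

# Shape of a Chat Completions request using function calling.
payload = {
    "model": "gpt-3.5-turbo-0613",
    "messages": [{"role": "user",
                  "content": "What's the weather in Paris?"}],
    "functions": functions,
    "function_call": "auto",  # let the model decide whether to call one
}

body = json.dumps(payload)  # what would be POSTed to /v1/chat/completions
```

If the model decides to call the function, the response message carries a `function_call` with the name and JSON-encoded arguments instead of normal content.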
5
Unspeakable tokens (www.lesswrong.com)
submitted 1 year ago by sisyphean to c/auai
 

A deep dive into the inner workings of ChatGPT, and why it stops responding or gives weird or creepy replies to seemingly simple requests.

 

Prompt injection is a serious and currently unresolved security vulnerability in tool-using LLM systems. This article convinced me that this is indeed a serious issue that needs to be addressed before letting an LLM loose on your emails, calendar or file system.

 

An excellent video series by Andrej Karpathy (founding member of OpenAI, then head of AI at Tesla). He teaches how GPTs work from the ground up, using Python. I learned a lot from this course.

 

This is an older article of mine I wrote when C# was still my main language.

I don’t know if posting my own content is allowed here - if not, feel free to remove it, no hard feelings.

122
Programming and Humility (self.programming)
submitted 1 year ago* (last edited 1 year ago) by sisyphean to c/programming
 

This is something I’ve been wondering about for a long time. Programming is an activity that makes you face your own fallibility all the time. You write some code, compile it or run it, and then 80% of the time, it doesn’t work exactly the way you imagined. There’s an error message, or it just behaves incorrectly. Then you need to iterate on it and fix the issues until you get the desired result, and even then it’s subtly wrong, and causes an outage at 3am on Sunday.

I thought this experience would teach programmers to be the humblest people in the world.

I can’t believe how wrong I was. Programmers can be the most arrogant dickheads you will ever meet. Why is that?

 

While not strictly related to programming, this is very surprising and harmful behavior that demonstrates how important thinking about edge cases is.
