this post was submitted on 15 Sep 2024
133 points (100.0% liked)

He specifically cited poor battery life on the ROG Ally and Lenovo Legion Go, saying that getting only one hour of battery life isn't enough. The Steam Deck (especially the OLED model) does a lot better battery-wise, but improving power efficiency should really help with any games that max out the Deck's power.

[–] [email protected] 8 points 2 months ago (6 children)

Here is my view and a small timeline:

  • FSR 1 (Jun 2021): Post-processing. Can be used with any game and any graphics card on any system. Quality is not very good, but it does not need developer support to be usable.
  • FSR 2 (Mar 2022): Analytical and game specific. Analyzes the game's per-frame data to produce better output than FSR 1, so it can only be used with games that have integrated support for it; the sketch after this list shows roughly what that integration has to supply. Still system and graphics card agnostic.
  • FSR 3 (Sep 2023): Improved version of FSR 2, so the previous point applies here too, but it has a few more features and should produce better quality. It arrived late and was controversial at launch.
  • FSR 4 (maybe 2025): AI and hardware dependent. Not much is known, but we can expect it to require some form of AI hardware on the GPU. We don't know whether it will be usable on other GPUs that have such hardware or restricted to AMD cards. As this is also analytical, it requires games to support it, so it's game specific as well. It's expected to have superior quality to FSR 3, maybe rivaling XeSS or even DLSS. But it seems the focus is on low-powered, weaker hardware, where it would benefit the most.
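
To make the "game specific" point a bit more concrete, here is a rough sketch of the per-frame data a temporal upscaler like FSR 2/3 has to be fed by the engine, versus FSR 1, which only ever sees the finished frame. The names are made up for illustration and are not the real FidelityFX API.

```python
from dataclasses import dataclass
from typing import Any, Tuple

# Hypothetical input structures, only to illustrate why FSR 2/3 need
# per-game integration while FSR 1 can be bolted on afterwards.
# These are NOT the real FidelityFX types.

@dataclass
class Fsr1Input:
    # Spatial post-processing: works purely on the finished low-res frame,
    # so a driver, launcher or mod can apply it without the game knowing.
    color: Any                   # rendered low-resolution frame

@dataclass
class Fsr2Input:
    # Temporal upscaling: needs data only the engine can provide each frame,
    # which is why every game has to integrate it explicitly.
    color: Any                   # rendered low-resolution frame
    depth: Any                   # per-pixel depth buffer
    motion_vectors: Any          # per-pixel motion relative to the previous frame
    jitter: Tuple[float, float]  # sub-pixel camera jitter applied this frame
    frame_time_ms: float         # used for history accumulation
```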
[–] [email protected] 2 points 2 months ago (3 children)

I am curious as to why they would offload any AI tasks to another chip. I just did a super quick search for upscaling models on GitHub (https://github.com/marcan/cl-waifu2x/tree/master/models) and they are tiny as far as AI models go.

It's the rendering bit that takes all the complex maths, and if that is reduced, it would leave plenty of room for running a baby AI. Granted, the method I linked to was only doing 29k pixels per second, but they said it wasn't GPU optimized. (FSR4 is going to be fully GPU optimized, I am sure of it.)

If the rendered image covers only 85% of a 4K image's pixels, that's ~1.2 million pixels left to compute, and it still seems plausible to keep everything on the GPU.
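
Just to put numbers on that (assuming "4K" means 3840x2160 and the 85% refers to the share of pixels already rendered, not a per-axis scale):

```python
# Back-of-the-envelope version of the comment above.
total_4k_pixels = 3840 * 2160             # ~8.29 million pixels
rendered = int(total_4k_pixels * 0.85)    # pixels rendered normally
remaining = total_4k_pixels - rendered    # ~1.24 million pixels to reconstruct

# At the unoptimized cl-waifu2x rate quoted above (~29k pixels/second),
# filling that gap would take on the order of 43 seconds per frame,
# which is why any real-time version has to run on the GPU itself.
seconds_per_frame = remaining / 29_000
print(remaining, round(seconds_per_frame, 1))   # 1244160 42.9
```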

With all of that blurted out, is FSR4's AI going to be offloaded to something else? It seems like there would be significant technical challenges in creating another data bus that would also have to sync with memory and the GPU to offload AI compute at speeds that don't risk adding extra lag. (I am just hypothesizing, btw.)

[–] [email protected] 4 points 2 months ago* (last edited 2 months ago)

The thing with “AI”, or better still ML cores, is that they're very specialized. Apple hasn't been putting ML cores in all of their CPUs since the iPhone 8 because they are super powerful; it's because they can do some things (that the hardware would have no problem doing anyway) while sipping power. You don't have to think of AI in terms of huge LLMs like ChatGPT that need data centers; think of it like a hardware video decoder: that thing can easily play 1080p video! Or, going with raw CPU power rather than hardware decoding, maybe 480p. It's why you can watch hours of video on your phone, but try doing anything that hits the CPU and the battery melts.

Edit: my example has been bothering me for days now. To avoid any possible misunderstanding, I want to clarify that hardware video decoding has nothing to do with AI; it's just another very specialized chip.
