this post was submitted on 19 Dec 2023
1597 points (98.0% liked)
you are viewing a single comment's thread
Voice assistants are money-losing products. If they can do something like processing the wake words on the device before choosing to send to a server, they will. These companies are far too stingy to continuously stream audio to their servers.
Back in the day, when everything had to be processed server-side, sure.
Now we have purpose-built hardware helping work this shit out. The devices are basically capable of handling native language resolution locally; they no longer need to farm the data out. I still don't think they're doing this, since we would see it in the open-source operating systems, but if they wanted to, any late-model cell phone would be absolutely fine parsing out your interests from your conversations. Hell, I'm sure the contents of this dictation I'm making now are being reduced and added to my social graph at Google.
I think this should be fairly easy to test yourself. Just disconnect from the WAN, say the wake word, and see if the device responds.
He means internet, people. He means disconnect from the internet
Someone can correct me if I'm wrong, but Home Assistant is currently struggling with this and processes everything on your local box because it can't do wake words on the device.
I think they're choosing to do it that way. Raspberry Pis easily have the capability to do wake-word recognition on device (I think they are also working on that). ESPs, on the other hand, can only stream audio to the server and not much more. Since ESPs are far cheaper than installing a Raspberry Pi in each room, they are focusing on doing wake-word detection on the server, not on the device.
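The pattern being argued about above can be sketched in a few lines: audio stays on the device until a local wake-word check fires, and only then do frames go out over the network. This is a minimal illustration, not any vendor's actual pipeline; the energy-threshold "detector" is a toy stand-in for a real on-device model (e.g. openWakeWord or Porcupine), and all names here are hypothetical.

```python
# Sketch of local wake-word gating: nothing leaves the device until the
# (toy) detector fires; after that, frames are forwarded to the server.
import math

FRAME_SIZE = 160       # samples per frame (10 ms at 16 kHz) -- illustrative
WAKE_THRESHOLD = 0.5   # toy energy threshold standing in for a model score

def frame_energy(frame):
    """Root-mean-square energy of one audio frame (floats in [-1, 1])."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def wake_word_detected(frame):
    """Stand-in for a real on-device wake-word model."""
    return frame_energy(frame) > WAKE_THRESHOLD

def process_stream(frames, send_to_server):
    """Stay fully local until the wake word fires, then forward audio."""
    streaming = False
    sent = 0
    for frame in frames:
        if not streaming:
            streaming = wake_word_detected(frame)  # purely local decision
        if streaming:
            send_to_server(frame)                  # network used only now
            sent += 1
    return sent

# Usage: silent frames never leave the device; loud frames trigger streaming.
silence = [[0.0] * FRAME_SIZE] * 5
loud = [[0.9] * FRAME_SIZE] * 3
uploaded = []
process_stream(silence + loud, uploaded.append)
print(len(uploaded))  # 3: only frames at/after the wake event are sent
```

A cheap ESP-based satellite skips the `wake_word_detected` step and ships every frame upstream, which is exactly the cost/privacy trade-off described above.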
Yeah, what possible use could this company, whose business model relies on surveillance, have for surveilling you?
Exactly. If it is practical and money can be made doing it, then continuous, ambient sound parsing will be the norm. Currently it seems like it’s not a valuable business. When it is valuable to them, they will add a checkbox somewhere in your account to disable it, and most people will not be bothered enough to look for it.
Are they though?
My experiences are much, MUCH different. The amount of compute waste is through the roof, and we shrug at +$50k/mo provisioning. You don't even need approvals for that, and you can leave it idle and you MIGHT get a ping from gloudgov after a few months.