this post was submitted on 16 Feb 2024
184 points (96.0% liked)

As a medical doctor I extensively use digital voice recorders to document my work. My secretary does the transcription. As a cost-saving measure, the process is soon to be replaced by AI-powered transcription, trained on each doctor's voice. As I understand it, the model created is not stored locally and I have no control over it whatsoever.

I see many dangers, as the model is trained on biometric data and could possibly be used to recreate my voice. Of course I understand that there are probably other recordings of me on the Internet, enough to recreate my voice, but that's beside the point. Also, the question is about educating them, not a legal one.

How do I present my case? I'm not willing to use a non-local AI to transcribe my voice. I don't want to be perceived as a paranoid nutcase. Preferably I want my bosses and colleagues to understand the privacy concerns and dangers of using a "cloud solution". Unfortunately they are totally ignorant of the field of technology, and the explanation/examples need to translate to the layperson.

top 50 comments
[–] [email protected] 49 points 9 months ago* (last edited 9 months ago) (2 children)

I don't know where you live, but almost all US big tech cloud services are problematic (read: illegal to use) for storing or processing personal information under the GDPR if you're based in the EU. I don't know about HIPAA and other non-EU legislation, but almost all cloud services use US big tech as a subprocessor under the hood, which means that the use of AI and cloud is most likely not GDPR-compliant. You could mention this to the right people and hope they listen.

Edit: It's illegal to use for the processing of the patients' PII, because of transfer to insecure third countries and because big tech uses the data for their own purposes without any legal basis.

Edit 2: The same is the case with your and your colleagues' PII.

In my opinion privacy and the GDPR are the same thing in this case. I think most public authorities are required to have a DPO, e.g. hospitals or the relevant health authority. The DPO can help answer your and your bosses' questions on the matters mentioned.

Hope you figure it out.

[–] [email protected] 36 points 9 months ago

I agree, and I suspect this planned system might get scuttled before release due to legal problems. That's why I framed it in a non-legal way. I want my bosses to understand the privacy issue, both in this particular case and in future cases.

[–] [email protected] 17 points 9 months ago* (last edited 9 months ago)

You don't have to use a cloud service to do AI transcription. You don't even need to use AI. Speech to text has been a thing for like 30+ years.
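
For example, here is a rough sketch of what fully local transcription can look like, assuming the open-source openai-whisper package would be permitted in your environment (the model size and file name are placeholders):

```python
# Minimal local speech-to-text sketch using the open-source openai-whisper
# package (pip install openai-whisper). The model is downloaded once and
# cached locally; the audio never leaves the machine.
# "dictation.wav" is a placeholder file name.
import whisper

model = whisper.load_model("small")          # smaller models run fine on a laptop CPU
result = model.transcribe("dictation.wav")   # runs entirely on local hardware
print(result["text"])
```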

Also, AWS has a FedRAMP-authorized GovCloud that's almost certainly HIPAA (and its non-US counterparts) compliant.

Also also, there are plenty of cloud based services that are HIPAA compliant.

[–] [email protected] 43 points 9 months ago* (last edited 9 months ago)

Do your patients know that their information is being transcribed in the cloud, which means it could potentially be hacked, leaked, tracked, and sold? How does this foster a sense of distrust and harm the patients' progress?

Could you leverage this information, and the possibility of being sued if information is leaked, with the bureaucrats?

[–] [email protected] 20 points 9 months ago

Dunno, maybe collect the news of every private digital data leak in recent years and show how unsafe it really is?

[–] [email protected] 18 points 9 months ago* (last edited 9 months ago) (1 children)

You're going to lose this fight. Admin types don't understand technology and, at this point, I imagine neither do most doctors. You'll be a loud minority because your concerns aren't concrete enough and 'AI is so cool. I mean it's in the news!'

Maybe I'm wrong, but my organization just went full 'we don't understand AI so don't use it ever,' which is the other side of the same coin.

[–] [email protected] 11 points 9 months ago

I understand the fight will be hard, and I'm not getting into it unless I can present something they will understand. I'm definitely in a minority, both among the admin staff and among my peers, the doctors. Most are totally ignorant of the privacy issue.

[–] [email protected] 13 points 9 months ago (2 children)

Shouldn't that be a HIPAA violation? Like, you can't in good conscience guarantee that the patient data isn't being used for anything but healthcare.

[–] [email protected] 16 points 9 months ago* (last edited 9 months ago) (1 children)

My question is not a legal one. There are probably legal obstacles for my hospital in this case, but HIPAA is not applicable in my country.

I'd primarily like to get your opinions on how to effectively present my case to my bosses against using a non-local model for this.

[–] [email protected] 8 points 9 months ago

Look to your local health privacy laws. Most countries have that tightly controlled in such a way that this use of AI is illegal.

Your question is not a legal one, but a legal argument can be a very persuasive one.

[–] [email protected] 4 points 9 months ago* (last edited 9 months ago) (5 children)

It is until they prove it isn't, which they might not be able to do. Many trusted 23andMe only to see private data stolen. Make the company prove the security in place and the methods ensuring privacy, because you'll essentially be liable for any failures of the system that stem from a lack of due diligence.

load more comments (5 replies)
[–] [email protected] 12 points 9 months ago* (last edited 9 months ago) (1 children)

It would be worth finding out more about how exactly the training process works, namely whether or not the AI company stores the training audio clips after training has been completed. If not, then I would say you don't have anything to worry about, because the model itself can't be used to clone your voice to any useful extent. Deep neural networks aren't reversible like that. Even if they were, the model isn't trained just on you; it's trained on hundreds of thousands of people and then fine-tuned on you.

If they do store the clips, though, then maybe show them this article about GitHub to prove to them that there is precedent for private companies using people's data to train AI without their explicit consent.

[–] [email protected] 3 points 9 months ago

To expound on this, AI models are extremely narrow in scope. One that reproduces audio it was trained on is entirely different from one that understands what is being said. As Mr. Turkalino mentioned, the transcription AIs are built on a combination of speech recognition and incredibly specialized text data that is narrowly defined by your industry (medical in this case). In fact, they may have tuned specific models for separate disciplines. This involves thousands of documents, ranging from textbooks to scholarly journals, along with thousands of recordings of professionals saying the words in a variety of accents and dialects, so the model can tell the difference between very important and very similar-sounding words. My wife is pregnant, so amnionitis and amniocentesis come to mind: they sound close enough that a general model might mistake them, and getting that transcribed wrong could spell real problems when others look at the patient's chart if there are complications.

Also, most models are run in the cloud because the calculations can be very taxing. I run Stable Diffusion and other AIs locally on my beast of a machine and it struggles at times; realistically, the cloud machines are just bigger than anything you can get as a desktop. Also, under the most ideal circumstances, the audio of your notes does not live on the servers: it is transmitted, stored on a virtual machine (VM) while it is being processed, and after the results are completed the VM is destroyed and the audio recording goes with it. Nothing is kept. Of course, that is where you need to do the work of making sure your situation actually is "ideal".

One of the biggest controversies with AI right now is that data is being stored for doing reinforcement training on the AI models. For example: you send your recordings and the AI returns the transcript. You mark any corrections and go on with your day. The company takes those recordings, feeds them back into the general model along with the corrections you made, and tries to tell the AI what it got wrong. You are going to want to be sure that you are allowed to opt out of your data being used as training data (beyond the fine-tuning that helps it learn your voice).
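
If the vendor does offer retention and training opt-outs, the client side might look roughly like the sketch below. To be clear, the endpoint, field names, and flags are entirely made up for illustration; the vendor's real API and data processing agreement are what actually matter:

```python
# Hypothetical sketch only: submitting a dictation with explicit
# "do not retain / do not train" flags. The URL and every parameter name
# are invented; check the vendor's real API for the actual equivalents.
import requests

with open("dictation_2024-02-16.wav", "rb") as f:   # placeholder file name
    response = requests.post(
        "https://transcription.example.com/v1/transcribe",  # placeholder endpoint
        files={"audio": f},
        data={
            "retain_audio": "false",     # destroy the recording after processing
            "allow_training": "false",   # don't feed corrections back into the model
            "region": "eu-north-1",      # keep processing inside the EU
        },
        timeout=60,
    )

print(response.json().get("transcript", ""))
```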

[–] [email protected] 12 points 9 months ago

I would have work sign a legal discharge stating that, from the moment I use the technology, none of the recordings or transcriptions of me can be used to incriminate me in case of alleged malpractice.

In fact, since both are generated, or can be generated, in a way that sounds very assertive but can also contain incredibly wild mistakes, in a potentially life-and-death situation I would want them to legally recognise that this potentially nullifies my work, and to take the entire legal responsibility for it.

As you can see in the most recent example involving Air Canada, a policy was invented out of thin air, and that policy is now costing the company. In the case of a doctor, if the wrong sedative or the wrong medication were administered, or the wrong diagnosis were communicated to the patient, etc., all of that could have serious consequences.

All while sounding like you (using your phrasings, etc.) and being extremely assertive.

A human doing that job will know not to deviate from the recording. An AI? "Antihistaminic" and "anti-asthmatic" aren't too far apart, and that is just one example off the top of my head.

[–] [email protected] 12 points 9 months ago* (last edited 9 months ago) (1 children)

Will they allow you to use your own non-cloud solution? As long as you turn in text documents and they don't have to pay a person to transcribe, they should be happy. There are a number of speech to text apps you can run locally on a laptop, phone, or tablet.

But of course, it's sometimes about control and exercising their corporate authority over you. Bosses get off on that shit.

Not sure which type of doctor you are, but there's a general shortage of NPI people. I hope you can fight back with some leverage. Best of luck.

[–] [email protected] 11 points 9 months ago* (last edited 9 months ago) (2 children)

It will not be possible to use my own software. The computer environment is tightly controlled. If this is implemented my only input device to the medical records will be the AI transcriber (stupidity).

I'm a psychiatrist in the field of substance abuse and withdrawal. Sure, there's a shortage of us too, but I want the hospital to understand the problem, not just let me keep an old-school secretary because I threaten to go to another hospital.

[–] [email protected] 5 points 9 months ago (1 children)

I was afraid that might be the case. Was hoping they would let you upload the files as if you had typed them yourself.

Maybe find some studies / articles on transcription bots getting medical terminology and drug names wrong. I'm sure that happens. AI is getting scary-good, but it's far from perfect, and this is potentially a low-possibility-but-dangerous-consequences kind of scenario. Unfortunately the marketers of their software probably have canned responses to these types of concerns. Management is going to hear what they want to hear.

[–] [email protected] 4 points 9 months ago (1 children)

Thanks for the advice, but I'm not against AI models transcribing me, just not a cloud model specifically trained on my voice without any control by me. A local model, or preferably a general local model, would be fine. What makes me sad is that the people behind this are totally ignorant of the problem.

[–] [email protected] 3 points 9 months ago (4 children)

I understand, and we're basically on the same page. I'm not fully anti-AI, either. Like any tool, it can be used for good or evil. And you are right to have concerns about data stored in the cloud. The tech bros will mock you for it and then.... oh look, another data breach has it been five minutes already. :)

load more comments (4 replies)
[–] [email protected] 3 points 9 months ago (1 children)

my only input device to the medical records will be the AI transcriber

I understand that you keep steering away from legal arguments, but that can't be legal either. How could a doctor not have direct, manual access to patient records?

Anyway, practical issues:

You need some way to manually interact with patient records in the inevitable event the AI transcription gets it wrong. It only takes one transcription mistake on something critical and you have a fucking body on your hands. Is your hospital prepared to give patients the wrong dosages because background noise or someone else speaking makes the AI mishear? Who would be held responsible in the case of mistreatment due to mistranscription? Is your hospital willing to be one of the first to try to tackle that legal rat's nest?

A secretary is able to do a sanity check that what they heard makes sense. AI transcription will have no such logic behind it. It will turn what it thinks it heard into text and chuck it wherever it logs to. It thinks you've called for leeches when you said something about lesions? Have fun.
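
As a rough illustration of the kind of cross-check a human does implicitly, here is a minimal sketch using only Python's standard library; the formulary list and the transcript are made up, and a real system would need a proper medical vocabulary:

```python
# Flag transcribed words that are suspiciously close to other, clinically
# different terms, so a human can verify them before signing. Toy data only.
from difflib import get_close_matches

FORMULARY = ["amnionitis", "amniocentesis", "antihistamine",
             "antiasthmatic", "leeches", "lesions"]

transcript = "patient scheduled for amniocentesis, prescribed antihistamine"

for word in transcript.replace(",", "").split():
    near_misses = [m for m in get_close_matches(word, FORMULARY, n=3, cutoff=0.6)
                   if m != word]
    if near_misses:
        print(f"'{word}' could be confused with: {', '.join(near_misses)} -- verify before signing")
```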

Whenever there's an issue with the transcription service, you'd be screwed too. A network outage, a power outage, a broken microphone: if any part of this equipment fails, the whole system falls apart.

[–] [email protected] 2 points 9 months ago

The problem with incorrect transcription exists with my secretary too. In the system I work in, the secretary types up my recording and sends it to me, and I read it. I can edit the text at this point and then digitally sign it with a personal private key. This usually happens at least a day after the recording is made. All prescriptions and orders to my nurses are given in another system, separate from the raw text in the medical records. I can't easily explain the practical workings, but I really don't see that the AI system will introduce more errors.

But I agree that in the event of a system failure, there will be a catastrophic situation.

[–] [email protected] 9 points 9 months ago (1 children)

Stop using the digital voice recorder and type everything yourself. This is the best way to protect your voice print in this situation. It doesn't work well as a protest or to educate your colleagues, but I suppose that's one thing you can use your voice for. Since AI transcription is a cost saving measure, there will be nothing you can do to stop its use. No decision maker will choose the more expensive option with a higher error rate on morals alone.

[–] [email protected] 7 points 9 months ago (1 children)

Unfortunately the interface of the medical records system will be changed when this is implemented. The keyboard input method will be entirely removed.

[–] [email protected] 10 points 9 months ago (2 children)

Even if this gets implemented, I can't imagine it will last very long with something as completely ridiculous as removing the keyboard. One AI API outage and the entire office completely shuts down. Someone's head will roll when that inevitably happens.

load more comments (2 replies)
[–] MajorHavoc 9 points 9 months ago* (last edited 9 months ago) (1 children)

Your voice-print is worth protecting.

There are already retirement funds activating "my voice is my password" by default now. (You can, and absolutely should, opt out if yours does.) And you can't change your voice-print if it gets leaked. (Maybe with a professional voice coach you could...)

Personally, I would change employers over this, if I had the option.

I think we're heading towards having a group of citizens with compromised voice-prints leaked to the dark web, who have a harder time day to day through no fault of their own. Like the early SSN breach sufferers, history tells us that society says "it's a shame", and tries to protect the next generation properly, but doesn't recompense those hurt by the early bullshit.

While job searching, I would also request an accommodation and not use the voice system. It's much easier for the employer to retain a secretary for you than to deal with the legal hassles that will come up if they try to fire you for not using their legal-gray-area solution.

Even granted the accommodation, I would be looking for my next job though.

[–] [email protected] 5 points 9 months ago

Most places use this sort of software (at least, larger companies). I have worked with doctors who refused to use it and instead developed templates for common items they copied + pasted into the MAR software / PACS, etc., and they just type what they need. That’s what they did before dictation software existed anyway. It’s not as efficient, but it’s basically the only way to avoid this.

[–] [email protected] 9 points 9 months ago

I would suggest that the first action item would be to ask for, in writing, 1) the data protection policy and 2) the privacy policy. I would then either pick them apart, or find someone who works in cybersecurity (or the right lawyer) to do that. I've done it a few times and talked my employer out of a few dodgy products, because the policies clearly try to absolve the vendor of any potential liability. Now, whether the policies truly limit liability would have to be tested in court.

You could also talk about how data protection, encryption, identity and access management, and governance are actually really expensive, but I'd first start poking holes in the actual policies to create doubt.

[–] [email protected] 8 points 9 months ago (1 children)

Okay, so two questions:

  1. are you in a country that falls under the GDPR?
  2. this training data is to be per person?
[–] [email protected] 13 points 9 months ago* (last edited 9 months ago) (1 children)

I work in Sweden, and it falls under the GDPR. There are probably GDPR implications, but as I wrote, the question is not a legal one. I want my bosses to be aware of the general issue, and this is but the first of many similar problems.

The training data is to be per person, resulting in a model tailored to every single doctor.

[–] [email protected] 5 points 9 months ago

I think you can use the GDPR to your advantage here. Someone has to have tried this, right? So they could put in a GDPR request, demanding all data stored about them.

[–] [email protected] 6 points 9 months ago

The personalized data model will be trained on your voice. That means that it's going to be trained on a great deal of patient medical history data (including PII). That means it's covered by HIPAA.

I strongly doubt the service in question meets even the most minimal of requirements.

[–] [email protected] 4 points 9 months ago (2 children)

I assume you'll be using Dragon Medical One. Nuance is a well established organization, with users in a broad range of professions, and their medical product is extensively used by many specialists. The health system where I live has been in the process of phasing out transcriptionists in favor of it for a decade or so.

The only potential privacy concerns a hospital would care about would be if they are storing your transcripts on their servers, because that will contain sensitive information about patients. It will be impossible to get any administrator to care about your voice data.

This is a tide you are unlikely to be able to stem, but you could stop dictating and type it yourself.

load more comments (2 replies)
[–] [email protected] 3 points 9 months ago (1 children)

Personally I'd be more worried about leaking patient information to an uncontrolled system than having a voice model made

[–] [email protected] 3 points 9 months ago

That's another issue, and it doesn't lessen the importance of this one. Both are important but separate: one is about patient data, the other about my voice model. Also, in this case I have no control over the medical records, and they are already stored outside the hospital.

[–] [email protected] 3 points 9 months ago (1 children)

So what's your concern? I'm a bit confused.

  1. Using cloud to process patient data? Or,
  2. Collecting your voice to train a model?
load more comments (1 replies)
[–] [email protected] 3 points 9 months ago

Ironically, GPT can kinda get you started here...

To present your case effectively to your bosses and colleagues, focus on simplifying the technical aspects and emphasizing the potential risks associated with using a cloud-based AI transcription service:

  1. Privacy Concerns: Explain that using a cloud-based solution means entrusting sensitive biometric data (your voice) to a third-party provider. Emphasize that this data could potentially be accessed or misused without your consent.

  2. Security Risks: Highlight the risks of data breaches and unauthorized access to your voice recordings stored in the cloud. Mention recent high-profile cases of data breaches to illustrate the potential consequences.

  3. Voice Cloning: Explain the concept of voice cloning and how AI algorithms can be trained to mimic your voice using the data stored in the cloud. Use simple examples or analogies to illustrate how this could be used for malicious purposes, such as impersonation or fraud.

  4. Lack of Control: Stress that you have no control over how your voice data is used or stored once it's uploaded to the cloud. Unlike a local solution where you have more oversight and control, a cloud-based service leaves you vulnerable to the policies and practices of the provider.

  5. Legal and Ethical Implications: While you acknowledge that there may be existing recordings of your voice online, emphasize that knowingly contributing to the creation of a database that could potentially be used for unethical or illegal purposes raises serious concerns about professional ethics and personal privacy.

  6. Alternative Solutions: Suggest alternative solutions that prioritize privacy and security, such as using local AI transcription software that does not upload data to the cloud or implementing stricter data protection policies within your organization.

By framing your concerns in terms of privacy, security, and ethical considerations, you can help your bosses and colleagues understand the potential risks associated with using a cloud-based AI transcription service without coming across as paranoid. Highlighting the importance of protecting sensitive data and maintaining control over personal information should resonate with individuals regardless of their level of technical expertise.

[–] [email protected] 2 points 9 months ago

You tell them they either have a local person transcribe or you will have no choice but to step down. Tell them that the cloud is no place for medical data. It would also be a bonus if you could get a bunch of your coworkers on board.

[–] [email protected] 2 points 9 months ago

Unfortunately a guy I know works for a gov hospital and they've used such technology for over a decade at this point. It seems unavoidable.

[–] [email protected] 2 points 9 months ago (5 children)

Simple jobs are going to continue to go away in favor of more efficient spending.

You’re not going to get around the removal of simple jobs from the market in favor of newer concepts and more complex operations.

All these people that said going to college to further your education was stupid and a waste of money are going to be the first to bitch and moan because the rest of us who spent the time and money to better ourselves would like to reciprocate that same logic into the world so you don’t have to worry about things like underpaid fast food workers spitting in your food, delivery drivers stealing your food, etc.

Some people who can only do “simple” tasks are the ones who stand the most to be hurt by the world moving forward and becoming more advanced and complex, but I’m not sure what we can do to help them outside of seriously considering UBI. The wealth we are generating and saving through automation deserves to be equally spread amongst the people it replaced. That’s fair.

load more comments (5 replies)