Lemmy, I really would like to hear your opinions on this. I am bipolar. After almost a decade of being misdiagnosed and on medication that made my manic symptoms worse, I found stable employment with good insurance and was able to find a good psychiatrist. I’ve been consistently medicated for the past 3 years, and this is the most stable I have been in my entire life.

The office has rolled out an app called MYIO. My knee-jerk reaction was to be unhappy about it, but I managed my emotions, took a breath, and vowed to give it a chance. After I was sent the link to validate my account, the app would force-restart my phone at the last step of activation. (I have my phone locked down pretty tight, with lots of Google shit and data sharing disabled, so I’m thinking that might be the cause. My phone is also like 4-5 years old, so that could be it too.)

Luckily I was able to complete the steps on PC and activate that way. Once I was in the account there were standard forms to sign, like the HIPAA release. There was also a form requesting that I consent to the use of AI. Hell to the NO. That’s a no for me dawg.jpg.

I’m really emotional and not thinking rationally. I am hoping for the opinions of cooler heads.

If my doctor refuses to let me be a patient if I don’t consent to AI, what should I do? What would you do? Agree even though this is a major line in the sand for me, or consent to keep a provider I have a rapport with, who knows me well enough to know when my meds need adjusting?

EDIT: This is the text of the AI agreement: “As part of their ongoing commitment to provide the best possible service, your provider has opted to use an artificial intelligence note-taking tool that assists in generating clinical documentation based on your sessions. This allows for more time and focus to be spent on our interactions instead of taking time to jot down notes or trying to remember all the important details. A temporary recording and transcript or summary of the conversation may be created and used to generate the clinical note for that session. Your provider then reviews the content of that note to ensure its accuracy and completeness. After the note has been created, the recording and transcript are automatically deleted.

This artificial intelligence tool prioritizes the privacy and confidentiality of your personal health information. Your session information is strictly used for the purpose of your ongoing medical care. Your information is subject to strict data privacy regulations and is always secured and encrypted. Stringent business associate agreements ensure data privacy and HIPAA compliance.”

Edit 2: I just wanted to say that I appreciate everyone here who commented. For the most part everyone brought up valid points and helped me see things I had not considered. I emailed my doctor and let them know I did not want to agree to the use of AI. I said I was cool with transcription software being used as long as it was installed locally on their machines, but that I did not want a third-party online app having access to recorded sessions for the purposes of transcription. They didn’t take issue with it.

Thank you everyone!

  • GreenBeanMachine@lemmy.world · 4 points · 10 hours ago
1. If your options are a doctor that uses AI or no doctor at all, some doctor is better than none.

2. I would ask for more information about what AI they are using, where the data is processed (locally or online), where and how the collected data is stored (locally or in the cloud), who can access your data, and whether it could be used for AI training.

Yes, I would, but only if I could be sure the LLM wasn’t listening. Collection of personal information requires consent in Canada, and I wouldn’t be giving it.

    I don’t believe for one second the conversations aren’t uploaded to a datacenter nor that those transcripts will be deleted.

  • Crankley@lemmy.world · 4 points · 14 hours ago

I have all sorts of anxiety surrounding AI. Most of it comes from the misuse, the copyright issues, and the departure from critical and creative thinking. However, one field where I actually think it could be very useful and of great benefit is medicine.

That being said, I’d be a no as well. The way this is worded, and the track record we’ve seen with privacy, doesn’t fill me with much confidence. It feels like another instance of offloading thinking rather than a tool for better diagnosis.

The process sounds very American. The confluence of commercialized healthcare and tools that can make it look like time and attention were spent leads to some bad places. I’d be very sceptical about any advice, medical or otherwise, I received.

The unfortunate truth is that care will cost more for health companies not using these tools, which means bespoke human-led care will be a luxury in America in the near future. I don’t think it’s a reality you are going to be able to avoid.

I would push back at every opportunity: double-check all of the information you are getting, ask pointed “why this” questions, and make doctors clearly state that they are the ones giving the recommendation. At the end of the day, a good doctor with AI tools is likely to do a better job.

  • NotMyOldRedditName@lemmy.world · 4 points · 14 hours ago

For note-taking only, I’d be fine IF it was all run locally with no ability for the data to be trained on.

I’d want assurances from the doctor that they carefully review the notes immediately afterward, or that I get to see the notes before leaving, given the risk of hallucinations that could cause future care problems.

They could have it visible on a screen while you’re in the room to help you be sure it’s accurate.

Edit: I’d care less about it being local if it weren’t medical/legal in nature.

  • Tollana1234567@lemmy.today · 3 points · 14 hours ago

I wonder if they hallucinate notes post-appointment. I’ve noticed complaints against certain providers where examinations the “doctors” never actually performed in person still appeared on patients’ records.

  • michaelmrose@lemmy.world · 5 points · 16 hours ago

AI summaries often make up details, omit what is important, and get things wrong. Every error may follow you forever, complicating diagnosis and treatment, and can ultimately harm or kill you.

  • ipkpjersi@lemmy.ml · 3 points · 16 hours ago

Yes, because soon there aren’t really going to be any doctors left that aren’t using AI. AI is widespread and unavoidable. It’s definitely unfortunate, but it’s true.

I have literally seen walk-in doctors with ChatGPT on their screen; they’re using it to look up symptoms too.

  • Vex_Detrause@lemmy.ca · 5 points · 18 hours ago

One of our doctors started using AI transcription and summary, and I find the result lacking in substance once the AI is done summarizing. When she types her notes herself you can see her thought process: thorough but concise. The AI summary is definitely shorter, but it’s not about length; it’s about whether you can hand the note to another doctor and have them follow through with the plan.

  • sudoer777@lemmy.ml · 2 points · 15 hours ago

    Imagine charging $300 then outsourcing your note taking to a machine that barely knows shit and has nothing to lose

  • swelter_spark@reddthat.com · 2 points · 21 hours ago

My dentist started using this. Not the same service, but automatic AI-based visit recording, transcription, and record-keeping. She’s experienced with a difficult issue I have, so I’m staying with her for now, but this is definitely a minus.

  • Royy@lemmy.world · 8 points · 1 day ago

Hello! It is absolutely justified to be worried. Tell your doctor your concerns and ask questions about the use of AI. If you want help putting together questions for your doctor, lmk.

    I’m involved with the development / integration of AI. From the specific text of the AI agreement, it looks like these are the AI tools you’re consenting to:

    • Transcription tool: This is a speech-to-text tool. It can differentiate between speakers.

    • Transcript -> clinical documentation tool. This takes the text of the transcript, interprets it, and generates clinical documentation based on it.

The agreement does not seem to cover taking the clinical documentation and attempting to suggest diagnoses or care steps.

I am actually concerned by the “recording and transcript are automatically deleted” line. If your doctor reviews the generated clinical documentation against the transcript and misses something, then later, when they’re unsure about something, they can’t go back and reference the original audio or transcript to verify accuracy.

    There are also concerns about how they are following HIPAA laws:

    What model / service are they using?

    Did they do their due diligence in deciding what service to use?

Have they looked at other cases where data companies said they don’t persist or sell your data and then sold it anyway, or where there was a breach of data that shouldn’t have persisted in the first place?

Do they anonymize personal information before sending it to whatever service they are using? Note that this is not possible for the transcription step itself: the model can’t know what text to anonymize or censor until it has generated that text. That doesn’t mean there are no HIPAA-compliant options; transcription models can even be run locally, possibly on consumer-grade devices, so the audio never has to be sent to a third party.
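    To make the anonymization point concrete, here’s a naive sketch of the kind of post-transcription scrubbing a service might do. The patterns are simplified examples of my own, not any real vendor’s pipeline; actual HIPAA de-identification covers 18 identifier categories, far more than this:

```python
import re

def redact_transcript(text: str) -> str:
    """Naively mask a few common PII patterns in a transcript.

    A real HIPAA de-identification pass covers 18 identifier
    categories (names, dates, locations, etc.); this hypothetical
    sketch only catches a few obvious machine-readable formats.
    """
    # US-style phone numbers, e.g. 555-123-4567
    text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", text)
    # SSN-like patterns, e.g. 123-45-6789
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)
    # Email addresses
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    return text

print(redact_transcript("Call me at 555-123-4567 or jane@example.com"))
```

    Note the ordering matters: the phone pattern runs before the SSN pattern, so a 3-3-4 digit group is tagged as a phone number and never reconsidered as an SSN.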

  • VampirePenguin@lemmy.world · 9 points (1 downvote) · 1 day ago

AI and the people pushing it are not trustworthy. They have neither your data security nor your wellbeing at heart, even if your doctor does. LLMs are inherently bad at data security, and there is no way these companies can, in good faith, promise HIPAA compliance. Likely, the AI use will be on the part of the insurance company, finding ways to deny your claims.

  • GrayBackgroundMusic@lemmy.zip · 5 points · 1 day ago

“Your provider then reviews the content of that note to ensure its accuracy and completeness.”

    You know they’re not gonna do that, in practice.

  • leadore@lemmy.world · 22 points · 2 days ago

I feel very strongly about this and I would change doctors. But of course it won’t be long before they all do this and we’ll have no alternative. The two biggest problems I see are:

1. I saw a news story where a doctor who uses this said it saves her time, because before seeing the patient she gets an AI summary of their chart, so she doesn’t have to “go through several tabs” to read the actual information. Oh great, let the statistical probability text generator hallucinate up some shit about what’s in a person’s chart, to save 10 seconds of tab-clicking to read the ACTUAL patient records! If they want a summary, there’s no reason a traditional report or summary screen couldn’t be programmed to pull data out of the most important fields and arrange it in the desired format.

    2. THEN the doctor uses her damn phone to record your visit, everything you say, and that gets run through the AI which generates a visit summary and puts that into your medical records. So, god only knows what 3rd party private corporate vulture has access to your doctor/patient conversations and what they’ll do with them, and again, what hallucinated shit will get put into your medical records!

So your doctor never reads your chart and never writes your chart! [Redacted] me now! Also, what happens after a few iterations of an AI summarizing records that an AI wrote?
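    For what it’s worth, the deterministic alternative in point 1 is trivial to build: a template that copies named chart fields verbatim, so nothing can be made up. The field names below are hypothetical, purely for illustration:

```python
def chart_summary(record: dict) -> str:
    """Template-driven visit-prep summary.

    Every line is copied verbatim from a named chart field, so
    unlike an LLM summary nothing can be hallucinated or omitted.
    The field names are hypothetical, for illustration only.
    """
    return "\n".join([
        f"Patient: {record['name']} (DOB {record['dob']})",
        "Active problems: " + "; ".join(record["problems"]),
        "Medications: " + "; ".join(record["medications"]),
        "Last visit: " + record["last_visit_note"],
    ])

example = {
    "name": "Jane Doe",
    "dob": "1990-01-01",
    "problems": ["bipolar I"],
    "medications": ["lamotrigine 200 mg"],
    "last_visit_note": "Stable; continue current regimen.",
}
print(chart_summary(example))
```

    A missing field raises an error instead of silently inventing text, which is exactly the failure mode you want from a medical record tool.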

    • sem@piefed.blahaj.zone · 9 points · 1 day ago

      If you buy into the story that “someday they’ll all be using it” you are doing the AI boosters’ job for them. It is not a foregone conclusion, and there is no reason to accept that future.

      • leadore@lemmy.world · 4 points · 1 day ago

        I hope you’re right! The magical thinking and child-like trust in this tech by otherwise intelligent people is scary though.

    • Cellari@lemmy.world · 3 points · 2 days ago

AI is really good at concepts, not logic. But even then, performance is going to be dependent on the data it was modelled with.

You can ask for a specific symptom of pneumonia and it can answer. You can also ask for a summary of pneumonia, since someone has most likely already written one and the AI understands to use it because of the concept’s relevance. But if you ask it to summarise patient information, it will split that information into blocks it can summarise based on whatever summarisation patterns exist in the model data. I can assure you it cannot ever have all the possibilities pretrained already.

      • leadore@lemmy.world · 6 points · 1 day ago

My fear is that the models merge all kinds of patient record info together in the statistical model, so the ‘summaries’ will just write the most likely next word in the phrase. Wrong information and incorrect diagnoses will be recorded into a person’s record, or important information will be omitted.

I predict that people will be harmed or die because of missing or false information in patient records. But it will be difficult for the public to find out about it, because of privacy issues and the unwillingness of institutions to acknowledge it.

Drugs have to go through multiple stages of testing and trials before they’re allowed to be used on patients. But no one is doing any kind of testing on the effects of this at all, let alone controlled trial rollouts with review, before allowing general use.