AI in Sleep Medicine: Navigating Security & Privacy Concerns Recorded Webcast
Video Transcription
Welcome to today's educational event, AI in Sleep Medicine: Navigating Security and Privacy Concerns, co-sponsored by the American Academy of Sleep Medicine and the Artificial Intelligence in Sleep Medicine Committee. My name is Matt Anastasi, and I'll be your moderator for this webinar panel. I've been a registered tech for over two decades and work as the AASM Sleep Scoring Project Manager and staff liaison for the AI Committee. Today, we'll be tackling a topic that often gets lost in the hype of AI as we marvel at the potential of the technology, and that is: how should we as clinicians navigate privacy and security concerns within our sleep disorder centers and on behalf of our patients? It's a big topic, but we're excited to have an esteemed panel to discuss strategies and provide their take on these concerns from a legal, clinical, and data science perspective. While regulations around artificial intelligence, security, and privacy are constantly changing, our panelists will do their best to provide you with the most up-to-date insight. Please keep in mind that their opinions do not necessarily reflect the positions of the AASM, and their input is intended for educational purposes only. While the information in this webinar will address legal issues, it's not intended to be legal advice or a substitute for the advice of your own counsel. Anyone seeking specific legal advice or assistance should retain an attorney.

So today's webinar is a follow-up to the Q&A panels of our previous presentations from the AI Committee, AI 101: Terminology and AI 102: AI Beyond the PSG. Both webinars can be found on the aasm.org website under the Clinical Resources header; just go to the AI dropdown, highlighted on this screen. Before I introduce our panelists, I'd like to remind you that your audio is muted and your video and chat are both off. We encourage you to submit any questions as we go along in the Q&A section of the Zoom platform, which should be at the bottom of the screen for most people. If we do not have time to answer every question in this hour, the remaining questions in the Q&A will be shared with the panelists through a Zoom Q&A report so they can follow up with you after our presentation. Also, you're free to reach out to us at contact@aasm.org or call the number on the screen.

Now, allow me to begin by introducing our panelists for today. Dr. Mark Allison earned his PhD in computer science from Florida International University, specializing in software engineering and artificial intelligence. He completed his sabbatical year at Carnegie Mellon's Robotics Institute conducting research in human-AI interaction. His research interests span model-driven software engineering, reinforcement learning, and autonomous robotic systems, with focused work in human-AI teaming. This work has been supported by competitive awards from the National Science Foundation and Google. Dr. Ramesh Sachdeva is a practicing sleep and pediatric critical care physician who has served in the national role of medical director for quality initiatives for the American Academy of Pediatrics. He obtained his J.D. cum laude from Marquette University Law School and has completed the artificial intelligence in healthcare certificate program from the MIT Sloan School of Management. He has served as faculty in the Center for Ethics, Medicine and Public Policy at Baylor College of Medicine and as adjunct faculty in law at Marquette University Law School. Dr.
Sachdeva is currently a member of the AASM's AI Committee. Last but not least, Mr. Brian Balow practices commercial and business law with a heavy focus on technology and intellectual property. Since the early 2000s, he has undertaken significant work in privacy, data security, cyber law, and cybersecurity, with substantial focus in the areas of health information technology and telemedicine. Mr. Balow is a cum laude graduate of the University of Georgia School of Law and spent nearly 25 years practicing with large national law firms.

So let's get to the questions for our panelists. Dr. Allison, your machine learning expertise has allowed you to build generative models using generative adversarial networks, which are similar to the large language models like ChatGPT that we're familiar with and are capable of patient data interpretation and analysis summaries. So what are you seeing in response to these trained systems? And Mark, you're muted, I'm sure. Do you-

My apologies, my apologies. As I was saying, great question. So in a sentence, I would say that there's too much trust in this black box, right? And there is an issue in terms of, you know, I would be a typical developer for these AI systems, but I lack the domain expertise. So there are three issues that arise. You know, these models are large, right? They are new, right? So they're still learning. And they, for some reason, are very, very popular. ChatGPT, for example, exceeded 1 million users within a week of its release.

Well, kind of related to that, a viewpoint article recently published in JAMA details the need for the FDA to provide oversight of generative EHR or EMR data summarization due to the possibility of bias when using these LLMs. So what are your thoughts on this in relation to AI security and privacy, both in your research and in others' clinical practice?

Yeah, great question again. All right, so these large language models, they learn from existing data. So they have the ability to continue or amplify biases that already exist. So again, there needs to be a lot of collaboration between the developers of these systems and practitioners, right? Yeah, and these biases are based on demographic and socioeconomic factors and historical disparities.

Thank you. Yeah, we have one more and then we'll open it up. We'll look to see if we have any early Q&A questions to pose to you or to the panel. So the last one: a number of patients are opting to use FDA-cleared AI technologies in sleep medicine outside of clinics to monitor their sleep-wake cycles and other things for conditions related to sleep disorders. It is Wearable Wednesday, after all. So some of these are independent of clinicians and some of these are being used in concert with physicians. So what do you see as some of the security and ethical concerns that may arise and that clinicians should be aware of?

Okay, again, great questions. So firstly, I want the audience to differentiate between the large language models, in terms of generative AI, and smaller AI models that can be in close proximity to the practitioner. The large language models exist in the cloud. So there are concerns, security concerns, of course, about the transmission being intercepted. They are maintained by people that are outside of the organization. So that's also a threat. But we have to look at generative AI as having tremendous potential, right, to add to the practice, but it cannot be blindly trusted, right? It cannot be the decision-maker, in other words.
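To make the transmission concern above concrete, the sketch below shows one simplified way free-text notes might be scrubbed of a few obvious identifiers before they leave an organization for a cloud-hosted generative AI service. The patterns, function name, and sample note are hypothetical illustrations only; this is nowhere near a complete HIPAA Safe Harbor de-identification, which covers 18 categories of identifiers and requires far more care.

```python
import re

# Hypothetical patterns for a few obvious identifiers; real HIPAA Safe Harbor
# de-identification covers 18 identifier categories and needs far more care.
PATTERNS = {
    "mrn":   re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(note: str) -> str:
    """Replace matched identifiers with labeled placeholders before the text
    is sent to any external (cloud-hosted) generative AI service."""
    for label, pattern in PATTERNS.items():
        note = pattern.sub(f"[{label.upper()} REDACTED]", note)
    return note

if __name__ == "__main__":
    sample = ("Pt seen 03/14/2024, MRN: 00123456. Call 555-867-5309 or "
              "email jdoe@example.com to discuss CPAP titration results.")
    print(redact(sample))
```

Anything such patterns miss still leaves the organization, which is part of why the panel's point about smaller, locally hosted models is attractive from a security standpoint.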
Well, thank you for your insights on these questions. So this last one: when you say local models, are you talking about non-cloud? So like an edge AI, or something that's an AI program that would be on a smartphone or something that-

Exactly, something that would be in the organization, not remote, not outside of the organization. That way, the IT infrastructure will be able to protect the system, but it will not be as powerful as the larger systems, because those systems cost hundreds of millions of dollars. It's unlikely that they would be contained within an organization.

Interesting. Well, thank you again. And so, Kristen, do we have any additional questions on these topics that Dr. Allison or anyone on our panel can address?

We do. I'm gonna read the first question that came in. Thanks for the insight, Dr. Allison. Can you give examples of generative AI helping build human-AI interactions?

Okay, great. So as the name alludes, right, it's generative, it's synthesizing data. So one of the projects that we're working on is to balance the data. A lot of the clinical data that we come across is usually highly skewed to one demographic, but generative AI is able to replicate, right, replicate the underlying features of the minority-class data so that the entire system will be better at predicting the behaviors, or even able to help in the diagnosis, of minority or underrepresented patients.

Okay. Well, we're gonna hold, we have another question that I'm gonna circle back to a little bit later, but I wanna move on to our next speaker. So thank you again. Now, as we turn from data science to looking at this topic through the lens of a clinician and attorney, I wanna propose three scenarios that clinicians are either considering implementing or already seeing come through their modern sleep disorder centers: AI autoscoring of PSGs, AI transcription of patient encounters into the EMR, and AI analysis of wearable data. So with respect to these clinical scenarios from the previous slide and other clinical situations arising with the growing use of AI in sleep medicine, Dr. Sachdeva, I think many in our audience would be interested in your views. First, what are some of the ethical issues emerging from the adoption of AI by clinicians?

Yeah, thank you for that question, Matt. This entire area is evolving. The field is growing just as the AI field is growing, and so are the ethical and other issues that are emerging. Now, I typically like to think of the ethical issues in three broad categories, and I think each relates to the clinical scenarios you posed. The first one, autoscoring in polysomnography, and a big ethical issue there is transparency. A lot of the AI algorithms, as you know, operate in a so-called black box. Some of the algorithms are so deep in the deep learning domain that, even for sophisticated software programmers, it can be hard to identify what the specific connection points are. The second area of ethical issues relates to this entire area of privacy, informed consent, and security. And that really goes to your second scenario of using AI for electronic health records and in that setting. And then the third ethical issue, which ties to your scenario of wearable devices, is this entire notion of what's called algorithmic bias or algorithmic fairness, and Mark alluded to that. What that means, really, is that AI algorithms are developed using large data sets, which are called training data sets.
But the training data set may not be clearly linked to the patient population where the actual algorithm is applied. So the results could be erroneous. So three big buckets: transparency; privacy, informed consent, and security; and algorithmic bias.

Great, that's very clear. Thank you. So the second area is: what are some areas of legal liability related to AI impacting clinicians and sleep centers, as you see it?

Yeah, great. And again, Matt, this stems really from the three ethical issue groups that I tried to identify. So within the transparency domain, again, going back to a clinical scenario like polysomnography and automated scoring, I think it puts a clinician or a sleep physician in a kind of interesting scenario, right? Because on one hand, there are some recommendations or analyses that the auto-scoring or the AI process has provided us. On the other hand, there could be other areas which the AI algorithm has not provided us. So what does the clinician do here? Do we just accept the AI recommendation or analysis? Do we modify it? How do we address that? It puts the clinician in a unique scenario, even beyond polysomnography, because there could be situations where we reject the AI analysis. Is that somehow putting us at legal risk? Or, on the other hand, we may accept it without thinking through it and simply depend upon the AI analysis. Does that put us at legal risk? So that's the balance that the clinician has to undertake.

In the AI note scenario with the EHR, the entire area of privacy, security, and informed consent, this is a huge area, and I know Brian is an expert at this, so I'm gonna leave it to him to address. But just as an initial thought, the interesting part about AI is that the data we use initially could get further refined. That's the strength of AI, right? It builds on itself. So the final output may be using data that really was not there initially. So even as a patient, if I give informed consent for using the data initially, have I really given informed consent for secondary uses that may subsequently arise? That's the unknown question.

And in the third scenario of wearable devices and the impact of algorithmic fairness, again, it's the same issue. Are we sure that these devices are providing us the right information? And this goes beyond devices; this goes to even medical technology we use in hospitals and sleep centers routinely. For example, the best illustration I can provide is the use of pulse oximetry. We're all familiar with pulse oxes to measure oxygen saturation, used frequently in sleep centers, in hospitals, and in doctors' offices. Through the COVID pandemic, we have learned, and there have been several very interesting studies published in leading medical journals, that pulse oximeters may actually overestimate the oxygen saturation in minorities, which means we could miss hypoxemia. Now, a pulse oximeter is not an AI module, I understand that, but the same concept applies. If you're using information from wearable devices or other sorts of AI-related products, are we sure that the results are applicable to our patient population? That's the real question, I think, that emerges from a legal standpoint.

And so in that area, when it comes to autoscoring, EMR, and wearables, if you take it a logical step forward, what would you say about the physician's role in over-reading that data, and how involved they should be before coming to a conclusion?

Yeah, great question.
And I'll try to answer in two parts, both as it relates to the autoscoring and the PSG, but then even beyond. So as it relates to the autoscoring, I think the key takeaway point is that even if the AI algorithm helps us with the autoscoring, we can certainly take that into consideration, but we still need to ultimately look epoch by epoch, every screen, to make sure that the scoring is done correctly. Ultimately, the sleep physician is responsible for the final sleep report. So the takeaway point is that we cannot just blindly depend upon an AI algorithm result. We can take that into account, but we just can't simply depend upon it. We still have to use our clinical judgment and look at each epoch of the entire sleep study before coming to a conclusion.

But going beyond that, Matt, the point I wanted to make, and I'd be curious if Brian has any comments on this, is that this entire area of transparency is legally evolving. It's very fascinating, and I expect that in the years ahead, and this is beyond PSG autoscoring, obviously, as we use AI in our clinical practices in a variety of scenarios: what are the medical liability risks incurred by the clinician, the physician? Typically, we are familiar with medical negligence, the so-called torts, but does that standard really apply in the AI setting? Is it a standard of product liability? Is it simple negligence? What's the physician's liability versus the liability of the institution that purchased the AI algorithm? Is there vicarious liability? So there's a lot of legal analysis involved in this, and new cases are emerging. This is an evolving field. I don't have the answers, but these are some of the considerations I think the clinician needs to keep in mind, and we should make sure we talk to the right experts in addressing these issues if situations arise.

With respect to the second clinical scenario, related to privacy, data security, and informed consent for electronic health records and AI, I think Brian is going to be addressing that in more detail with some suggestions on what sleep practitioners can take away, so I won't dwell on that. But the last point I want to make is in terms of the wearable devices and what that means in terms of this algorithmic bias or fairness. I think the key takeaway for me as a clinician is that we need to be aware. We need to build up awareness that these things exist. So for example, using the pulse oximeter as an example again, a few years ago, before the COVID pandemic, we didn't think of it; we just assumed a pulse oximeter is, for the most part, good for every population. Now, more and more clinicians are aware that there could be erroneous readings in minority populations. So this awareness building, keeping up with the medical literature, becomes important. I'm sure the AASM has and will continue to have additional resources to provide clinicians, and webinars such as this will also improve awareness.

And the final point I want the audience to think about is that AI seems complex, seems overwhelming, but it's important for us, as we take care of our patients, that we don't shy away from it. It can be a powerful resource, but we also must recognize the limitations and then make informed decisions and judgments. And Mark, maybe if you want to comment more, I was intrigued with your remark of how generative AI could overcome some of the algorithmic biases that may emerge from the AI.
And I'm really curious if you want to comment on this: for a sleep physician or a sleep clinician, how can they leverage that? How can they leverage generative AI to overcome some of the algorithmic biases from wearable devices and other such modalities? Mark.

Oh, great question again. Yeah, so when private patient data gets run through these models, right, the AI can strip everything that would identify a patient. And what happens is you get this model that has no patient information, but it has the features. It has the essence of the data without the actual data being imprinted on it, right? And just one more thing that I'd like to add is that ChatGPT, these large language models, they're prone to hallucinations. They're designed to be authoritative, right? So when you have an authoritative system, you have to take it with a grain of salt.

So Mark, I think what you're saying for our audience and our viewers is: just don't blindly depend upon ChatGPT or the like. Leverage its benefits, it can make things easier for us in many ways, but give some thought to it before acting upon it. I think that's what you're saying.

Yes, and the system is made to learn, and it's learning from your data. So be very, very careful how you interact with the system.

There's a few questions. One of them is right in line with, maybe turning it on its head, what you were saying, Ramesh, about over-reading and reviewing. It's the second question from the top, from Dr. Bae. Kristen, could you take a look at that one, at 12:20?

Yeah, right. Yeah, sure. If AI auto-scoring algorithms improve to the point where accuracy is high, could there be a time where clinicians are at risk if they do not use AI auto-scoring algorithms and only rely on manual scoring?

Great, great question, and I think that's what I was trying to allude to earlier, and this really goes well beyond polysomnography. Let's say in any clinical situation, an AI algorithm provides us a certain output. It puts us in an odd situation, right? If we accept that but don't use our clinical judgment and there's an error of some sort, then where's the liability? Conversely, if we ignore it and use our own clinical judgment, in your question, manual scoring, then are we liable because we are disregarding the AI algorithm? In the legal literature, there's some discussion going on comparing this to clinical practice guidelines and whether or not one deviates from clinical practice guidelines. So this is uncharted territory. This is new. All I can say is just keep up with the new articles and new research coming out. I hope the AASM will have other webinars and seminars in the future to update this, but this is an evolving process. I mean, we don't know where the courts will go in the future, where the liability risks will be. But Matt, maybe I can put this question back to you. I know you are working through the AASM on a lot of this, with the different vendors and the entire auto-scoring process. I don't know if you want to make a few remarks for the audience.

Yeah, the AASM has started to look at the performance, not necessarily the privacy or security, but the performance only, of one subset of AI, which is auto-scoring for the PSG. So we do have a pilot, an auto-scoring certification pilot, that looks at the stage scoring for these software algorithms. We launched the program in February of last year. We have a number of vendors that are in the final application stages.
So we should be able to have some announcements coming soon, definitely before SLEEP in June. It's not an evaluation of the security and privacy, but it does go beyond the bar that the FDA sets, because we're actually performing an independent performance evaluation of the software. So the software is installed locally on an AASM server. It's compared to manual scores, and then it needs to achieve a cut point that is set by experienced scorers. Once it reaches that level, we issue a certificate. They're included in a list, and they can promote it as such, and we can as well. So we're starting with the things that our expertise can handle, which is the performance evaluation. Security and privacy are the things that clinicians and, I think, institutions can engage their legal counsel and their IT security teams to work on, on a case-by-case basis, as you've brought up.

But Matt, if I may just add before we move on, and I know I've mentioned this twice, the pulse oximeter is a great example where, clinically, many years ago, we believed a pulse ox works universally the same way across everybody, and we took the value at face value, right? Now, through the COVID pandemic, what we have learned is that that may not be the case, and the pulse ox may actually overestimate the oxygen levels in certain minority groups, so we could miss hypoxemia. So therefore, what's the takeaway for the clinician? That yes, use the pulse ox, but also use clinical judgment. Use other clinical factors so we can get a full picture.

Right. Excellent.

May I add that, you know, I have high confidence that these systems are being improved by leaps and bounds year over year. So, inevitably, this technology will be ubiquitous throughout society, and its level of accuracy will improve tremendously.

Excellent. Okay, let me move on to Mr. Balow. And so, thank you so much for the participation so far. Mr. Balow, we'll go through some questions, open it up, and then we'll open up a freestyle panel discussion based on the remaining questions. We still have a few questions already from our attendees. So now, Mr. Balow has more of, I see it as, a bird's-eye view of everything that's going on, from his legal perspective, afforded by having a couple of decades of experience as an attorney in health information technology. And the first question that you could speak to is: what are the legal distinctions between privacy and data security, and how are these distinctions important in the use of AI?

Thank you, Matt. Great to be here. My head is already spinning from the conversation between Ramesh and Mark, which is good. So I think in terms of boxes when I talk about privacy versus data security, and for purposes of this discussion, I'm going to leverage HIPAA, because that's the law and the regulations that are most prominent, at least in the healthcare space. So privacy, to me, has always been about what information, whose information, and the purpose for using that information. The privacy rule under HIPAA deals with those questions and those issues. And then the security rule under HIPAA talks about how you protect that information. So there really are two separate areas of, if you want to call it, regulatory burden on covered entities and business associates when they're looking at using protected health information or electronic protected health information with an AI platform.
Now, whether that's baked into their EHR or EMR, which it likely is, those are the questions and the distinctions, and that's the analysis that really needs to be undertaken by those covered entities and those business associates when they're making those decisions, which really bleeds into the next question at this point. So if you want to go ahead and read that, I'll respond to it.

So the next one is: what are the legal and regulatory implications of using generative AI in healthcare under existing laws and regulations, primarily HIPAA?

Right. So one thing I've learned over the years is to try to keep things as simple as possible and not make things mysterious or overly challenging when addressing them and analyzing them as a lawyer. And when generative AI came out and there started to be discussions about, well, this is going to end up being used in healthcare, my immediate thought was, well, we're going to have to adopt a whole bunch of new laws and regulations to address this. But pump the brakes a little bit and take a look at HIPAA and the FTC, but primarily look at HIPAA, and see how this might work under the existing framework. And my conclusion, and I don't think it's just me alone, is that there are already plenty of regulations in place that can be utilized in assessing the risk to an organization of electing to use these tools. And so you go through a standard sort of HIPAA analysis in terms of what's the intended use or purpose for using the ePHI. And if it falls within treatment, payment, and healthcare operations, where it's not required that you get separate informed consent from the patient, then you can continue within that HIPAA analysis where you don't have to get separate approval for doing that, and decide, okay, am I going to stay within that box when I use this for purposes of the privacy rule? And if so, then maybe I don't need to do anything further than what's already in place in terms of my privacy compliance. Now, one thing that I have thought about, and I'm not sure where this is headed, is that the notice of privacy practices, which is something that's required under the privacy rule, may be adjusted to reflect the fact that a covered entity has opted to use an AI tool in conjunction with its provision of healthcare. So you might want to include a statement to that effect in the notice of privacy practices.

The area where I see the bigger opportunity or challenge is on the security rule side, because the literature so far, including literature from the OCR and HHS, suggests that they're going to view these providers, and normally a lot of them are certified health IT providers already, as business associates. Which means that you have to go through the process of vetting that business associate, making sure you have a business associate agreement in place, and again, making sure that who you're contracting with, or who you're relying on, has the appropriate security mechanisms in place. You know, that falls under the technical safeguards requirement of the security rule. So I think that's important. And I do want to comment right now, because Ramesh brought this up: in February, HHS and the ONC adopted the HTI-1 final rule on transparency requirements for artificial intelligence and certified health IT. So that's out there.
This is a voluntary program, but these providers, these vendors who want to be certified health IT, have to meet certain criteria in order to get that certification, and now they're baking into that an entire layer that deals with AI. And to Ramesh's point, a big part of that is transparency, transparency to the covered entities and the providers and the business associates who are using those tools, where the vendors have to come forward and identify the source attributes that they're using in those AI platforms that are then used to work on the data that's input to provide the output. So it should give, you know, a good degree of openness to providers, covered entities, and business associates who are looking at using these, and allow them the ability to assess whether, in fact, the output that they get from the input avoids things like discrimination issues and is robust in the way that it needs to be.

Interesting. And that's a very concrete answer to that question. That's very practical. Thank you. So what are the suggested actions for maintaining legal and regulatory compliance when using PHI and other confidential information on generative AI platforms? Are there any additional things that you'd recommend?

Sure. So I think the suggested action is, and one of the things that's also required under HIPAA is continuous training, internal training, one of the administrative requirements under HIPAA. So I absolutely believe that any covered entity or business associate that is looking at using AI tools, once they have an understanding as to the approach that they might want to take with that and are getting ready to launch it, needs to be prepared to go out and train the individuals within their organization about what it means to be using ePHI or PHI with these tools and what kinds of actions they should be taking prior to actually taking that step. One of those I just touched on: if you're going to involve a vendor, and it's likely going to be your existing certified health IT vendor, then in my opinion you should have a sit-down with that vendor, get full transparency again in terms of what their platform is and how it works, and have the right people on your side involved in those meetings so that they have a full understanding of what the implications are, and proceed from there. And it's not a bad idea, also, just to go back through your own policies and procedures, your HIPAA policies and procedures. Again, the notice of privacy practices, maybe you want to make some adjustments. And the last thing I'll comment on, because I know I'm kind of going over a little bit here, is that there are also state laws out there, quite a few state laws that are being passed right now on AI, and they focus on transparency, and they focus on the ability for someone, a patient or any individual, to opt out if they learn that their information is being used in an AI platform.

Yeah, we actually have a question, right? Almost anticipating that statement. Kristen, there's one from Dr. Pei there.

Yeah. Consent was briefly discussed. If certain AI algorithms become standard of care or are baked into an EHR to the point where it's hard not to use AI, what happens when a patient opts out of AI or does not consent to the use of AI in their care, because they want the doctor, not a machine, to take care of them?

That's a great question for which I have no answer today. I mean, I'm just gonna be honest about it. It's a great question.
And it's anticipating something that could very well happen down the road. And I think what would happen, were that to be the case, is they'd have to go back into rulemaking, look at it from a regulatory standpoint, and decide if they need to amend HIPAA or other applicable laws or regulations dealing with that, because you're right. But I assume that any patient can consent to getting something less than the standard of care, as long as they're doing so on an informed basis.

Great. So it looks like there's still a few questions, and with these, I'm gonna try to direct them to the appropriate person, but anybody on the panel should feel free to address them. So we're hoping to generate a discussion with the time we have remaining; we still have plenty of time. So this one was probably more for Mark. If large language models are trained on existing data sets, are there any concerns about intellectual property, and would regulations about IP limit the effectiveness or generalizability of AI models? It could be for anybody, but I know that you had originally brought that up, Mark.

Okay, so great question. This is one of the challenges to these large models. Currently, you'll see it in the news that a lot of artists have lawsuits against big tech regarding their proprietary work. So this is being hashed out as we speak. And again, this technology is less than two years old, and these huge language models have a lot of money behind them, so it's gonna be hashed out in the next year or two. So I'm on the sidelines watching, as you are.

So I'll make a quick comment on that as well from the intellectual property law standpoint. Mark is right, there are certainly artists and entertainers, et cetera, content creators, right, who are very concerned right now about the use of their content, their works, with large language models without their consent and without any payment for that. I am not sure that the kind of data that would be input from a healthcare standpoint would fall under any of the intellectual property protection schema, such as copyright protection, because I'm not sure that data qualifies as a work under the Copyright Act, but that's something that would have to be thought through. Now, if there's some sort of predictive, I don't even know what it would be, but if there's some method, let's say, that a provider has developed that they believe to be proprietary and they would not want used in a large language model, that might fall within that category. Again, if it's subject to IP protection.

So Matt, can I just throw a question back at Mark and Brian, just building on that thought process? And I know there's no perfect answer right now, but the real strength of AI is that an initial data set exists and there's continued learning over time. So the AI process is learning, and something new comes up, new insights come up, which were never known before. So I think of this as a continuum, right? So if we said there's a copyright or a patent, at what point along this continuum do we get so far from that original IP protection that something new has emerged? So that's one question. But the other related question, and Mark, this is toward you: I worry that, given that some of these AI algorithms are so deeply embedded, how would one even technically figure out where along this continuum the original algorithm changed so much that it became something totally new from an IP perspective?

Yeah, so we have to look at these large language models.
They're very complex, right? They're modeled after the brain, the neural networks. So it would be akin to asking, can you cut out a memory? These systems, right, the structures are so convoluted that it is impossible to roll them back. Once the system has accumulated the data, it is very, very difficult to unlearn.

So Brian, what should a practitioner do at this point, given that response? Practically, what should one do as a clinician?

Well, again, I think it depends on what the content is that we're talking about. Okay, if it's raw data, and again, you're asking me questions that are sort of new to me, but I'll just say that if it's just raw data, I don't know that raw data is entitled to any kind of basic IP protection, unless it's a trade secret. But by virtue of you offering that into an AI model, you're not keeping it secret, so it's no longer a trade secret. Now if, again, you have a process or a procedure that somehow you believe to be proprietary to something that you're doing, and if it's introduced without your permission into a large language model like that, and then there's something derived that comes out of that, to Mark's point, that is different from the original input, you know, copyright law does provide the copyright owner with the right to make derivative works from the original work. So you might argue that this is an unauthorized derivative work of my original work, and challenge it in that way. Again, to me, it comes back to: is this a voluntary introduction of this information into an AI platform, or was it taken from some other source? That's the entertainment industry's beef with this: they're not offering this information into these platforms; somehow it's getting introduced there without their permission and without any compensation. So I think a big part of it comes down to that.

And I think there's another consideration. These models are really good, I mean really, really good, at generating data and at prediction. So we may take out a patient's name, but if we don't take out enough identifiable data, the system may re-identify the patient. It is that smart. Let me put it this way: you could discriminate against a certain group of people by their zip code, and that's not protected. So that needs to also be considered.

So Mark, that worries me. I'm sorry, back to you on this, because I know you're an expert in HIPAA. How would HIPAA handle this sort of situation?

Yeah, so de-identified patient information under HIPAA is technically not considered PHI. However, if you use de-identified patient information in a platform or in an environment where it is subject to re-identification, then you have to treat it as PHI and follow the rules that apply to that. So this is an issue I've seen discussed already in the literature out there, the concerns with, like you said, Mark, these large language models and their capabilities for re-identification. So I guess my answer is, if I'm a covered entity and I think I want to use de-identified information and I'm going to do so without having to comply with HIPAA, I'd better think again.

Now, we're starting to get a few questions. Some of them are reinforcing what you're saying and also adding a little nuance to the discussion. So I'm trying to figure out a good one to bring up here.
Could there be a time when health inequities arise if a doctor or organization cannot afford to pay to use AI when taking care of patients? And that's for anyone. I guess it's self-evident, but it speaks to a different kind of bias that can be amplified by large language models: if some patients have access to them through their provider but some do not, could their care pathways be different? And does that amplify inequities as well?

Yeah, Matt, that's a fascinating question again, and I wish there were a perfect answer. But I would draw our thinking back, and I'd like to hear what Brian and Mark have to say about it as well, to the electronic health record process. The EHR has been around for a while, right? Most practices, whether they are solo, group, or hospital-based, have access to an EHR. Most EHRs also have clinical decision support systems built in to assist the physician, whether they're prescribing a medication, et cetera, what the norms are, and so on. So hypothetically, if there was a practice out there, and I worry, I mean, maybe there are many rural practices out there, which may not have an electronic health record, at that point, are we creating an artificial inequity because of less than optimal care to patients? So I wonder, from the EHR experience over the years, are there any lessons learned from that? Brian, in your experience, anything that you have seen come up?

Well, I've never had to deal with that question of inequity, because to me, that's maybe more of an ethical issue than a legal issue. But I will say, when you were speaking to that, and when I heard the question, what popped into my mind is that every healthcare system has different technologies. You know, I think about cancer care. My wife went through serious cancer care a few years ago, and had we not been with the health system that we were with, I'm not sure she would have been able to have access to some of the treatment that she got at that time. So, you know, money's always an issue in terms of what can be provided within a given provider organization. And the other element of that, as we all know, is insurance. Certain insurance companies will approve certain kinds of treatments, and other insurance companies won't approve that same treatment. So I'm not sure that there's a distinction between what was asked in that question and what already exists out there in terms of access to the best and the brightest in health technology.

Well, we have five minutes, so we have time for probably one or two. This is a different wrinkle. I think it's maybe more for Mark to start out with, but anybody can answer. It's more about algorithms: do they exist in a bubble, or are they influenced by other areas of sleep medicine? The question comes from using an AI-based mask refit program. We didn't mention this too much, but there are AI algorithms that can generate recommendations for CPAP masks based on a user's facial profile for best fit and best compliance. So on a policy level, should AI systems be required to be able to quickly implement changes to clinical practices based on new information? The example given is a mask refit and sizing program being modified in light of FDA and manufacturer recalls; there were recent changes and guidance given on magnet-containing masks for patients who have implants that would be affected by the magnetic field.
There can be problems with delays in getting AI in line with new practices. So, the way I look at algorithms is that a lot of them are based on performance. If they get information that an outcome is likely, they're gonna try to predict the next word, if it's a large language model, or the next stage, or the next event; they make that proactive decision based on everything they've ever learned. But is there any feedback on how an algorithm can be affected by other areas of medicine?

So I can speak a little bit generally to this, in that the algorithms themselves are pretty objective, and they're there to make the best predictions that they can. A lot of the biases and issues are usually introduced in the data that is fed to them. But I wanna make probably a larger point, that generative AI should be viewed as a socio-technical system, meaning that it exists somewhere; it's not a purely technical system. So there's a need for collaboration between clinicians, policymakers, and regulatory agencies in order to have a really robust framework to govern the trajectory of this technology. So it's not purely a technical system; it's there to be molded. We need to be able to restrict how it's used, in other words. It can't be-

I think we're running out of time. That's a good message to end on. So, I learned a lot from today's panel discussion, and I hope you have too. Now that we've explored the issue from a legal, data science, and clinical standpoint, we hope you'll feel more confident navigating these concerns in your modern sleep disorder centers with some practical approaches, making your life a bit easier. And I want to send a special thank you to our speakers, who volunteered their time preparing for today, and to everyone who participated to make this webinar productive. Stay tuned for other upcoming webinars offered by the AASM on sleep medicine. In about two weeks, there will be an email sent out with instructions on how to access this webinar recording on demand. Until then, take care, everyone, from all of us at the AASM. Thank you.
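The auto-scoring certification discussion above describes comparing software output to manual scores against a cut point. As a minimal sketch of what such an epoch-by-epoch comparison could look like, the code below computes raw percent agreement and Cohen's kappa between two hypnograms; the stage labels, sample data, and the 0.80 cut point are illustrative assumptions, not the metric or threshold actually used by the AASM pilot.

```python
import numpy as np

# Hypnograms as one label per 30-second epoch: W, N1, N2, N3, R.
STAGES = ["W", "N1", "N2", "N3", "R"]

def agreement_and_kappa(manual, auto):
    """Return epoch-by-epoch percent agreement and Cohen's kappa between a
    manual reference hypnogram and an auto-scored hypnogram."""
    manual = np.asarray(manual)
    auto = np.asarray(auto)
    if manual.shape != auto.shape:
        raise ValueError("Hypnograms must cover the same number of epochs")
    observed = float(np.mean(manual == auto))  # raw percent agreement
    # Chance agreement from each scorer's overall stage distribution.
    p_manual = np.array([np.mean(manual == s) for s in STAGES])
    p_auto = np.array([np.mean(auto == s) for s in STAGES])
    expected = float(np.sum(p_manual * p_auto))
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

if __name__ == "__main__":
    manual = ["W", "W", "N1", "N2", "N2", "N3", "N3", "N2", "R", "R"]
    auto   = ["W", "N1", "N1", "N2", "N2", "N3", "N2", "N2", "R", "R"]
    pct, kappa = agreement_and_kappa(manual, auto)
    print(f"Agreement: {pct:.0%}, Cohen's kappa: {kappa:.2f}")
    CUT_POINT = 0.80  # hypothetical threshold for illustration only
    print("Meets cut point" if pct >= CUT_POINT else "Below cut point")
```

Kappa is shown alongside raw agreement because raw agreement alone can look inflated when a single stage, often N2, dominates the night.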
Video Summary
In this educational event on AI and Sleep Medicine, experts discussed the challenges of navigating security and privacy concerns in the use of AI technology within sleep disorder centers. They highlighted the importance of clinicians being aware of potential biases in AI algorithms, the need for collaboration, and training when implementing AI systems. Legal implications, such as HIPAA regulations, were also discussed, emphasizing the importance of continuous training and transparency with vendors. Questions were raised about the potential for health inequities to arise if AI technologies are not accessible to all patients. The panel also considered the impact of algorithms in clinical practices and the need for a socio-technical approach to govern AI technology in sleep medicine. The event aimed to provide practical guidance for clinicians in maintaining legal and regulatory compliance when using AI in patient care.
Keywords
AI technology
sleep disorder centers
security concerns
privacy concerns
bias in AI algorithms
collaboration in AI implementation
legal implications
HIPAA regulations
health inequities in AI technology