
Hot or Hype
Welcome to Hot or Hype, the podcast where hosts Olivia Storelli and Andrew Stevens break down the latest trends in cloud computing, data engineering, AI, and cybersecurity - and decide whether they're game-changing innovations or just industry buzz.
In each episode, we dive into topics like AI-driven predictive analytics, cloud-native architectures, zero trust security, data mesh, and more, giving you real-world insights from practitioners who know what works and what doesn't.
No fluff. No hype. Just honest, technical discussions to help you navigate the fast-evolving landscape of tech.
Subscribe and follow along as we call it like it is: Hot… or Hype?
Hot or Hype Ep 8: AI & Governance for Mental Health and Social Impact | Guest Tim Turner of Thresholds
Hosts Olivia Storelli and Andrew Stevens unpack the promise and pitfalls of generative AI in mental health care with Tim Turner, Vice President of Business Insights and Analytics at Thresholds, weighing real benefits in admin efficiency against serious risks at the point of care. Clear guardrails, strong governance, and a human-in-the-loop approach emerge as a durable path to trust.
• Thresholds' mission to provide healthcare, housing, and hope for thousands of people with severe mental illness
• whether consumer chatbots create echo chambers and clinical risk
• HIPAA and PHI concerns with generative AI tools
• human-in-the-loop as the baseline for safe clinical use
• practical AI wins in documentation and admin workflows
• building literacy with policy, training, and cybersecurity awareness
• strategy, discovery interviews, and a roadmap for adoption
• mission principles of equity, justice, accountability, transparency, and trust
• hot or hype: early ethics and privacy-preserving tech are hot, direct-to-patient AI is hype
• governance, data culture, and cross-functional rollout as enduring trends
Join hosts Olivia Storelli and Andrew Stevens weekly as they review trending technology.
www.sakurasky.com
Olivia Storelli:Welcome back to another Hot or Hype. Today we're joined by Tim Turner, and we'll be talking about the promise and the challenges of AI in social impact and mental health. So welcome, Tim. Do you want to give us a few words about who you are and what you're working on?
Tim Turner:Yeah, my name is Tim Turner. I am Vice President of Business Insights and Analytics at Thresholds. We are a healthcare nonprofit in the Chicagoland area, and we work with individuals who have severe mental illnesses and support them with community-based healthcare, offering things like case management, behavioral health, housing services, employment services, and other forms of wraparound services for that community.
Olivia Storelli:Great. Thank you. We're also joined by Andrew, who's a regular here.
Andrew Stevens:I'm here, as almost always.
Olivia Storelli:So today we're diving into generative AI's role in delivering support for social impact. We've all seen quite a number of recent articles in the news where this has gone really, really wrong, and OpenAI had to come out and talk about when the guardrails didn't work within what they've rolled out as a consumer product with ChatGPT. And there seems to be a temptation with something like healthcare for it to be 24/7, totally accessible, something that will always be there. The temptation is to just put a skin on that and roll it out, and then suddenly you've solved a lot of the access problems that your constituents have. Does that sound like a really good plan, Tim?
Tim Turner:I'm already getting hives just hearing that description of sticking a skin on ChatGPT and letting it run wild with all different types of mental health conditions. No, I think it's really important for organizations, and even individuals, to be really cognizant and thoughtful about the limitations of AI, particularly in the mental health space. Certainly there's a lot of temptation, given limitations with access and the challenges people face in getting therapeutic care, and it may seem easy enough to expand that access through tools like ChatGPT or other generative AI tools. But there are a lot of reasons to be concerned. One is the extent to which you're putting sensitive data into these tools, whether you're a provider or practitioner supporting your patients or an individual entering your own personal, HIPAA-protected data. Another is the extent to which a lot of these AI tools are specifically geared towards giving you the response and the information you want to hear, and less of what you need to hear in order to actually address the conditions you may be treating. So I think there are a lot of ways in which we should be conscious about that.
Olivia Storelli:That's really interesting. So it's like the echo chambers we all get on social media, for example. Conceivably it could become an echo chamber and send someone all the way down the wrong direction, because it's giving positive reinforcement of the wrong thing?
Tim Turner:Yeah, absolutely. And we're particularly sensitive to this as an organization that supports and works with individuals with severe mental illnesses. We certainly wouldn't recommend or encourage any of our patients to go directly into an AI tool to seek treatment, because who knows what types of reinforcement messages or what types of information they may receive through that particular tool. I think there are other ways in which AI can help alleviate administrative duties and can help clinicians in the process of writing their clinical documentation or doing other, more administrative tasks. But from a clinical and therapeutic perspective, we believe it's really important to keep the human in the loop and to have that person operate at the top of their license as a human therapist, with their expertise, to support the members that we work with.
Olivia Storelli:So the organization you work with is quite large, with many different functional areas, many different levels, and different duties within each. How do you give everyone enough literacy to understand that they can't just go back to their desk and put sensitive information into ChatGPT?
Tim Turner:Yeah, well, I think first it starts with having a robust process for developing policies and the governance infrastructure. Last year we developed a generative AI policy and released it to the organization. We do a lot of trainings, especially now during Cybersecurity Month, where we release content to the agency so folks know about the risks of putting PHI into these tools, are aware of the shortcomings of these technologies, and understand the implications of using them in an improper manner. Then, on top of the governance and policy infrastructure, we take it very seriously to have a planful, strategic approach to how we want to bring AI into the organization. We've gone through the process of doing discovery interviews and research, and we've talked to leaders and frontline staff to really understand the pain points and challenges they experience on a day-to-day basis. Now we're developing a roadmap to identify tools that can help address those pain points and bring them to the organization in a way that's governed and monitored, in a way that's ultimately going to protect our organization and, first and foremost, our patients. So having that planful, strategic approach is really important.
Olivia Storelli:Yeah, that makes sense. So really, once that strategy is set up, it needs that foundation of governance and guardrails to be effective, right?
Andrew Stevens:You can't retrofit ethics; you need to be doing it from the beginning. If you don't do your policies now, someone's gonna come and regulate you at some point, so getting them done, getting them in early, is the faster path. And you've got examples out there to learn from, so it's definitely worthwhile looking at them. I love the product approach you're taking on that. That's fantastic.
Tim Turner:Yeah, and one other thing I'll add: I think it's also really critical, and we've adopted this as well, to have an overarching philosophy on how AI can support your mission and what you do. The principles of equity, justice, accountability, transparency, and trust are all very important to what we do as an organization, and ultimately to supporting the communities that we serve and making sure that any technology or solution we're considering or developing is in service of the mission. So being able to express a point of view about the ways in which AI could help advance our mission, whether it's our organization or really any nonprofit organization, is a really important foundational step as well.
Andrew Stevens:Yeah. I'm gonna go out on a limb with hot or hype here. I'm going to say early ethics builds long-term trust, and that is absolutely hot from a hot or hype perspective. Data literacy equals inclusion and empowerment, but training without mission alignment is pure hype. So data literacy is absolutely hot in all spaces and all industries. And I think the staying power of privacy-preserving AI, such as federated learning, differential privacy, bias auditing, and explainability tools, is hot as well. Those are my three hot calls.
Olivia Storelli:You're cool.
Andrew Stevens:Yeah.
Olivia Storelli:All right. Well, Tim, what are your thoughts from a hot or hype perspective?
Tim Turner:Yeah, absolutely, 100% agree with that, Andrew. I certainly feel that the idea of AI as the solution to our mental health access challenges is hype, not something I would put in the hot category. But the opportunity for nonprofit organizations to be at the forefront of developing the right strategies, governance, and policies, to ensure that AI is used ethically and equitably across the landscape and can help advance the missions of mission-driven organizations, I do think there's great opportunity there. So I put that in the hot category.
Olivia Storelli:Yeah, that's great. Thanks, gents. I think that makes a lot of sense, and I agree with both of you on both the hot and the hype. Definitely, if it's in the news headlines, it's going to be borderline hype, right? There's a lot of research going into that right now, and it doesn't look like it's necessarily yielding results, and we've heard it from an expert that that's your take on it too. As far as the hot is concerned, what I find interesting, from an organizational change, data governance, data culture, and data literacy perspective, is that you're proving the point: AI needs to be rolled out into the fabric of an organization. It's cross-functional, it needs to be attached to use cases, and those use cases need to be iterated on until they're innovative enough to take on a life of their own. So I do think there's a lot of opportunity in that space. And listening to you talk made me realize that a mission-driven organization has a little more luxury there, because everyone is very aligned on the mission. Whereas in some organizations there's a bit of an undercurrent of disagreement that sits under the hood, or their product-market fit isn't as aligned amongst their executives as they like to think it is. So I think you're ahead of the game there. And obviously, from a data perspective, if you don't do data well, you'll find it much harder, especially on the deterministic data, to bring it together with what large language models do best, which is the probabilistic. So, long-winded: we have hype for using it on the consumer end, and we have hot for everything behind the scenes.
Andrew Stevens:Governance, privacy, security: none of that's sexy, but they're the only trends that never go out of fashion.
Olivia Storelli:And that's what works. So thank you so much for joining us today. It's always lovely to talk with you and hear the insights from your world. And thanks, Andrew, for showing up.
Tim Turner:You're welcome.
Olivia Storelli:That's been another Hot or Hype. Thanks, Tim. Thank you. Bye.
Tim Turner:Bye. Thank you.