UbiSim Webinars

Beyond Pass Rates: How AI-Enhanced Debriefing Builds Workforce-Ready Nurses

How AI-enhanced debriefing makes clinical judgment visible and teachable, at scale.

Jan 27
2:00-3:00 PM
ET
Nurse educator

Inside the Webinar

Distinguish NCLEX eligibility from practice readiness
Understand how data-informed debriefing builds clinical judgment
Identify decision patterns by NCLEX Client Needs to guide feedback

NCLEX pass rates confirm eligibility, but they don't guarantee practice readiness. Employers need nurses who can think critically, prioritize effectively, and make sound decisions from day one.

This webinar explores how AI-enhanced, data-informed debriefing strengthens clinical judgment by revealing learner strengths and gaps across simulation experiences. By aligning insights to NCLEX Client Needs categories, educators can move beyond test preparation to deliver targeted feedback, coaching, and remediation at scale—while improving consistency across faculty and programs.

Participants will learn how structured, data-driven debriefing better aligns nursing education with workforce expectations, helping graduates transition from eligibility to real-world readiness.


Fill out the form to watch the on-demand webinar

Register to watch


Transcript

Hello and welcome.

Thank you for taking the time today to attend our webinar, Beyond Pass Rates: How AI-Enhanced Debriefing Builds Workforce-Ready Nurses.

Before I introduce our expert speakers, let me share a little of what you can expect from the next hour.

If you attended IMSH in San Antonio recently, you likely noticed that AI-supported debriefing was a hot topic, with dedicated sessions, poster presentations, and lively discussions throughout the exhibit hall. Today, we want to build on that momentum and leave you with three key takeaways: how AI-enhanced debriefing strengthens pattern recognition and consistency across learners, practical strategies you can implement immediately in your own programs, and how simulation data can follow learners from pre-licensure through transition to practice.

I'm excited to hand this conversation over to two leaders who bring both deep clinical expertise and real-world experience supporting simulation and nursing education.

First, you'll hear from Christine Vogel, UbiSim's lead nurse educator.

Christine brings extensive experience in simulation education and has been instrumental in developing evidence based debriefing strategies that leverage AI analytics.

She'll help ground today's discussion in what this looks like day to day for both faculty and learners.

She'll be joined by Maggie Major, senior nursing simulation customer success manager at UbiSim.

Maggie works closely with nursing programs and simulation teams across the country and brings a strong perspective on how educators are using simulation and analytics in practice.

A few housekeeping items.

This session is being recorded. If you'd like any information from today, please feel free to ask in the chat and a member of our team will reach out to you.

We encourage you to participate in the polls and chat along the way and we'll save time at the end for questions, which you can drop in the Q&A section. With that, I'll turn it over to Maggie and Christy to begin our discussion.

Christy, the floor is all yours.

Thank you so much, Tracy, and good afternoon, everyone. Thank you for spending time with us today.

Whether you're joining us as a nurse faculty or a simulation leader or someone deeply invested in workforce readiness, we're really all here for the same reason, and that's our graduates and the patients they'll care for.

I'd like to begin by acknowledging a metric that remains central in nursing education, and that's NCLEX pass rates. They matter. They reflect important aspects of curricular alignment and licensure preparation.

For first-time U.S.-educated nurses, pass rates have consistently averaged around eighty-eight percent over the last three years.

And the fact that most nursing programs continue to produce strong NCLEX pass rates tells us something very real.

Nurse educators are doing incredibly complex and demanding work, and you're doing it well, often with limited time, resources, and increasing expectations.

High pass rates don't happen by accident. They reflect intentional curriculum design, thoughtful teaching, and a deep commitment to student success.

But today, I want to zoom out just a little.

Graduation and licensure tell us who is eligible to practice. Clinical judgment tells us who is ready.

Pass rates answer a very important question: Can this graduate enter practice? But they don't fully answer the questions that our workforce is asking us now.

Questions like, can this nurse recognize deterioration? Can they prioritize interventions?

Can they communicate effectively? And can they escalate appropriately and under pressure?

Now, you know this already. We're educating and practicing in an environment where complexity is the norm. Orientations are shorter, patient acuity is higher, and safety nets are thinner.

That gap between eligibility and readiness is not a failure of education.

It's what happens when practice evolves faster than our traditional metrics. And that gap is where today's conversation lives.

So over the next few minutes, I'd like to explore why pass rates alone aren't enough, why clinical judgment is the better signal of readiness, and why debriefing is one of the most powerful and under-leveraged tools we have to build clinical judgment.

NCLEX pass rates are essential quality indicators, but they were never designed to serve as the terminal measure of practice readiness.

Rather than positioning pass rates as the finish line, today's conversation invites us to treat them as just one important data point within a broader readiness framework.

From an assessment perspective, NCLEX performance reflects entry-level competence in core content knowledge, recognition of common clinical patterns, and individual performance within a standardized testing environment.

These competencies are necessary, and they align with minimum expectations for safe entry into practice.

And with the introduction of the next generation NCLEX, clinical judgment is more explicitly assessed through scenario-based items that emphasize cue recognition, prioritization, and decision making. That's a real step forward, and that step matters.

At the same time, though, pass rates still reflect performance in a controlled testing environment.

Key aspects of practice readiness, things like dynamic prioritization, escalation as a patient's condition evolves, communication under uncertainty, and situational awareness are more fully observed in simulation and in real clinical settings than we can do on an examination.

And we see this impact after graduation.

Studies consistently show that new graduate nurses report the highest levels of stress and the lowest confidence during their first year of practice.

National data indicates that up to thirty percent of new graduate nurses leave their first position within the first year, often citing a lack of confidence and inadequate preparation. They're not really citing a lack of knowledge. Now, this isn't a failure of motivation. It's a readiness issue.

So when healthcare partners talk to educators, they're not asking, what do your graduates know? They're asking, what can your graduates do? Can they recognize when something is wrong? Can they recognize it early enough?

Can they decide what matters most right now? And can they ask for help before it's too late?

This readiness gap doesn't belong to academia alone. It is shared across education, healthcare systems, and regulation.

But educators are uniquely positioned to influence it because you shape how clinical judgment is formed before practice begins.

So Maggie, I want to pause here and bring you into the conversation. And for those of you in the audience, I'd love for you to respond in the chat. The question is, where do you see clinical judgment challenges most often with learners or new graduates?

Is it with prioritization or escalation, communication, or something else? And if you could describe that in the chat.

Oh, thank you, Christy. I was jumping in there. I'm anxious to answer. If I had to pick just one of those, I would say prioritization is what I see most.

And, yeah, they know what to do if you give it to them on a checklist, and they can give it back to you. But once they get into that situation with a breathing, talking person next to them in a patient room, a lot of those nerves and those emotions come in and the thoughts go out the window. And now they're thinking, okay, what do I need to do? What do I need to do? And they're going to turn to what comes to their mind first rather than thinking through the process. So for me, prioritization in the new graduate environment is extremely important, and it's a huge piece of the readiness gap.

Yeah. I would agree, and I see a lot of us do in the chat. So prioritization was definitely chosen often. And I'm also seeing escalation and communication. I'm wondering if I should have put an "all of the above" on here as well, because these are all challenges that we're seeing. And I talk with my colleagues a lot and reflect on my own entrance into nursing practice.

So if we take a step back, do you feel like this is really a new problem, Maggie, or has readiness always been a developmental process that's become more visible as health care has changed?

Well, I might date myself here a little bit. I think it's always been a problem, but we have changed the way we do nursing education. Way back in the day when I graduated thirty-six years ago, we actually had our last term where we functioned as part of the staff, and we did charge nurse duties. And we were literally on the floor four out of five days a week, which gave us more readiness when we graduated to jump into a floor and work.

Now we don't have those opportunities in education. And so as a result, this is an issue. It is a developmental process, but it has become more of an issue and more visible with the changes that have happened in health care education.

Yeah. I would agree with that as well. It would be really nice to offer our nurse learners those opportunities that we had at the amount of time that we had in clinical practice.

Thinking back, I remember that this was an issue when I graduated as well, and I could date myself as well, but I remember the white knuckles on the steering wheel, my first year driving and just making sure, gosh, do I know all of the things? Will I know what to do? Who is my support? And all of that.

So I think some of this has always been there. I do think that our patients are sicker. There's so much that the nurse is responsible for. And even though it's always been there, it seems like it's more critical even now.

So I'm seeing in the chat, Greg, prioritization because students don't take a whole host of patients. The nurse on the unit doesn't take the time to process with students today. And we work on this during reviewing their clinical judgment measurement paperwork on their patients. So thanks for sharing that as well. I'm definitely seeing that as well. So nurses really, they're just overbooked and overwhelmed, and they're caring for their patient, and the ratios are really tough. So a lot of times, they're not having that time to process with their nurse.

Jocelyn says, "One challenge that I see with new graduates is prioritization. Everything feels urgent at the same time. And without much real world experience, it's hard to know what to tackle first and what can wait."

Jocelyn, I agree with you absolutely. That's definitely something that I'm seeing. Maggie, would you agree with that as well?

I am right there. I was on the backside here applauding Greg and Jocelyn with exactly the same thoughts going through my mind.

Yeah. Absolutely. So if pass rates aren't enough, and we know that, you know, pass rates are a good piece of data, we've talked about what is the better signal. So I'd like to talk about clinical judgment as the true readiness indicator. And we know clinical judgment isn't intuition. It's the observable outcome of critical thinking and decision-making in context.

And that "in context" part is so important, and that definition is from the NCSBN. It uses nursing knowledge to assess what's happening, identify the most pressing patient concerns, so that's what Jocelyn was talking about, and Greg as well, and determine the best possible evidence-based response to deliver care. So here's the good news. We know that clinical judgment is teachable, it's observable, and it is measurable.

So this isn't a departure from NCLEX. In fact, it's aligned with NCLEX. And the client needs categories already emphasize judgment-heavy domains like prioritization, safety, and physiological adaptation.

But while licensure measures judgment indirectly, practice demands it continuously.

So what does readiness really look like? A workforce-ready nurse doesn't just know the protocol. They know when and how to apply it and when to escalate beyond it.

So I'd like to know how you, our audience, would describe a practice-ready nurse. So if you could share in the chat, that would be wonderful. I think we're going to have some themes here. And Maggie, I'd like to start with you. We've been talking about readiness from an education lens, but you spend a lot of time on the practice side as well.

So from your conversations with hospital systems and nurse leaders, when they talk about a practice-ready new graduate, what are they actually asking for?

Yeah. Christy, I spend a lot of time talking with our health care systems across the globe. And there's really a couple of different themes that I see coming up when we talk about what employers are most looking for.

And the key is readiness. That'll be the first thing they say, but that's not just completion. You know, they want graduates who can function safely and effectively, not just those who have checked off the boxes on the test correctly.

A second piece that they want is confidence. They're looking for new nurses who feel prepared and are more likely to stay. We know that confidence reduces burnout and turnover in that critical first year. So if we want to improve those stats, we need to improve their confidence.

And the last thing, patient safety comes up again and again.

Nurses with strong clinical judgment make better decisions under pressure. And so being able to recognize changes sooner and act more appropriately all impact our patient outcomes and patient safety. So I would say the confidence, safety, and just readiness are the three things I hear the most.

Oh, yeah, I would agree. Thank you so much. And I'm reading the chat as well as you're talking, and I'm seeing a lot of those in the chat as well. So a practice ready nurse is safe and teachable. I love that.

They're just coming out of school and they need to still be teachable. There's still a lot to learn. But they should be safe. They should do more than have just checked the boxes. They need to have some clinical judgment and a good basis for their practice. Meredith says managing multiple urgent and emergent situations in a rapidly changing environment, echoing the comments about prioritization above. Absolutely agree.

And someone who has clinical judgment, Nancy says, and is able to do good, quick assessments. This is such a key one as well. A lot of our graduates that I'm seeing, they're really, you know, stuck on what assessments need to be done. They're stuck on a head-to-toe assessment versus focused assessments and what really needs to be done for each patient as part of the first steps of the clinical judgment measurement model.

So those who continue to question things so they understand and gain confidence. I love seeing that in our learners. When they ask why, they don't just take it at face value. And I see that more and more with students, so I think that that's great.

Christy. Hi, Christy, I know you. So they have to know when to hold 'em, know when to fold 'em, when to walk away, and when to run. I love that. And I had a crush on Kenny Rogers when I was a child. I think it was the beard, and he looked a little bit like Santa Claus. But you are so right on this one.

It's so funny. Thanks for sharing that. Ability to use clinical reasoning skills and theory learned to safely and efficiently provide patient care, Amanda. And confidence.

Yes, I'm hearing a lot of the same themes. Shelly says confidence, resilience, knows limits of knowledge and experience. I love that as well. So they need to know when they need help.

They need to know that they don't know. And so I think that that's really important. Knows what they don't know, and they can respond accordingly. Oh, these are so good.

A nurse who needs minimal orientation to patient care and needs mainly orientation to the organization and facility, that is a dream graduate nurse who is ready for practice. Yes, a nurse who has had sufficient general nursing experience to care for patients without safety concerns, a nurse who can take on higher-level nursing skills with good training, but not months of additional training. Thank you, Jamie, for that. Gosh, there's so many in here.

They need to be teachable. They need to be ready, confident, seeing so many of the same things, more assessments, ones who are eager to participate and not be afraid of speaking up and learn to take constructive criticism. I love that as well. We're always learning.

We're lifelong learners. And so when our learners know that they're lifelong learners, that's a great place to start. So we know, in all of these things, this is where simulation shines. Simulation allows us to train, prepare, and educate our learners, to introduce ambiguity, and to add in Layer 4 clinical judgment contextual elements, because context is everything.

It allows us to observe decision-making in real time and it lets us see not just what learners choose, but how they arrive there. And that's the key point. Simulation surfaces judgment, not just knowledge.

And if simulation is where judgment shows up, then debriefing is where it's built. So I want to talk about debriefing for a few minutes as well. If we could go ahead and advance the slide. Thank you.

And we know that debriefing is not the afterthought of simulation. It is the place where clinical judgment is formed and refined. So simulation creates the experience, but with our help as educators, debriefing is where learners make meaning of that experience. It's where they connect what they noticed to what it meant and where they examine the decisions that they made and where they begin to shape how they will respond the next time they face that situation or a similar situation in practice.

Now clinical judgment doesn't develop simply through exposure. It develops through guided reflection, feedback, and pattern recognition over time. So we're really going to be focusing on that pattern recognition over the next few minutes. Now, the literature consistently shows that a well-structured debriefing improves clinical reasoning, decision making, and learner confidence.

Effective debriefing for clinical judgment often centers on what cues learners noticed or missed, how they prioritized competing demands, and when and why they chose to escalate, delay, or act independently.

And yet, even for experienced nurse educators, debriefing is hard.

As I'm describing these challenges in the next moment, I'd like you to reflect on what makes debriefing hardest in your setting right now. Maggie, I'd love to hear from you.

A couple things. It's one, making sure I'm addressing the issues that need to be addressed. You know, am I missing something? Am I looking at it through the wrong lens? Should I be looking at it in a different way?

And the second thing is approaching the learners in a way that they can take what we get out of debriefing and apply it, and they don't feel like they're being judged or evaluated, turning that debriefing into a positive growth session. So I think those two things are probably the most difficult things I have when working in debriefings.

Yeah.

Thank you for sharing that. I agree. And a lot of what you're speaking of is that psychological safety. And this is really tough.

We're working with groups of learners time and time, day after day. And it's really hard to adjust when they are different groups, they're doing different actions. We're trying to keep track of all of that. Time is really tough.

I'm looking in the chat as well, and I'm seeing a mixture of a lot of answers. I'm seeing a lot of D, all of the above. So that's what I was going to answer as well, D. And then I see Christy mentioned using a framework.

So making sure that your institution and faculty are using a debriefing framework, and that would be according to INACSL Health Care Simulation Standards of Best Practice, and even training all of our faculty to debrief in a standardized way. Not a robotic way, but in a way following that framework so that learners know what to expect that coaching session after. Thank you so much for sharing all of these answers as well. Christy, you also mentioned sim trained faculty, and that's a tough one because we know that there's a lot of turnover with faculty.

There's the shortage. That was one of my most challenging pieces when I was a simulation director, was getting everybody trained with simulation and debriefing and making sure that everybody had that standardized framework as well.

So most of us are working within really, well, just difficult constraints.

Time between scenarios is limited, and we're observing multiple learners. We're managing the environment. We're tracking objectives, maintaining psychological safety all at once. I see a juggler right here, and it's a lot to manage.

Under these conditions, we rely on what stood out to us in the moment and on what learners chose to share during this discussion. And this introduces a few challenges. So first, the time pressure that I mentioned. This means we often have to make rapid decisions about what to debrief without the chance to step back and see the full picture of learner performance.

Second, subjectivity and unintentional bias can play a role. Our attention naturally goes to the loudest moment, the most dramatic error, or the most vocal learner, while quieter but equally important patterns of clinical judgment can slip by unnoticed.

So some facilitators lean toward emphasizing strengths, and I have been guilty of that in the past as well, and others lean toward error correction. Both approaches are well intentioned, but they lead to very different learning experiences for our learners. And finally, inconsistency.

Two learners may demonstrate similar judgment patterns but receive very different feedback depending on who facilitated the debrief or what stood out in that session. So this brings us to a very important insight.

Without clear visibility into performance patterns, even experienced nurse educators can miss what matters most.

And this is not a critique of educator skill. It's an acknowledgment of cognitive load and human limitations in very complex teaching environments. This is not a failure of facilitation.

It's a visibility problem, and AI can help with that. When those patterns become visible, debriefing shifts. It moves from a general reflective conversation to a targeted developmental opportunity, one that intentionally prepares learners for real world care.

But the challenge is that seeing those patterns consistently is really hard to do in real time. We often go from the simulation right to the debriefing. And a lot of the time, we're taking notes and trying to figure out who did what. It's really tough to see those patterns.

But again, this is where AI can help.

AI doesn't change what we value in debriefing. It changes how clearly we can see it. So by reducing our cognitive load, surfacing quieter patterns, and supporting consistency, AI allows educators to spend more time doing what we do best, and that's coaching clinical judgment. That's what we love to do anyway.

And that brings us to the next part of our conversation, how AI-enhanced analytics can support but not replace expert educators by making those judgment patterns visible, actionable, and scalable. So I'm gonna hand it over to you, Maggie.

Thank you, Christy, and thank you to all of you who are giving the comments in the chat. Keep them coming because I love it. This is a great segue right into what I'm going to talk about. But just as a recap, you know, Christy established how important debriefing is in developing clinical judgment. And even the most experienced nurse educators face challenges, and these can include limited time, our memories, which mine is getting shorter and shorter, and bias during simulations, as well as just I saw someone mention that some people just are not as skilled at facilitating as others. So then the question here becomes, how do we strengthen our debriefing in a way that honors our expertise while addressing these challenges?

Well, that's where AI-enhanced analytics comes in.

It's not, I repeat, it's not a replacement for nursing judgment. Instead, it's a powerful tool that makes clinical reasoning patterns more visible.

So that we're all on the same page here, let's talk about what we mean by AI-enhanced analytics.

It's not about automating or replacing the human element of teaching.

It's about surfacing those patterns in clinical judgment that might remain invisible or might require hours of manual data compilation.

So think about it this way.

When you're debriefing a simulation, you're relying on what you've observed, what you remember, and what learners choose to share during debriefing. But what if you could clearly see how that learner had been making decisions across all of their simulation experiences? What if you could instantly map their strengths and their growth areas directly to the NCLEX client needs categories?

Well, that's the power of AI-enhanced analytics. It's going to do three critical things.

One, it surfaces patterns in the development of clinical judgment across one or more simulations. So you'll see, did this learner consistently recognize early warning signs of deterioration? Did they implement interventions appropriately? And then these patterns tell us a story about their clinical performance and their reasoning.

Second, it maps strengths and growth areas across the NCLEX Client Needs categories, things such as safety and infection control and physiological adaptation. This alignment is crucial because it connects simulation performance directly to both licensure expectations and real-world practice demands.

And third, it supports structured and focused debriefing conversations. Instead of educators wondering, what should I focus on in this debrief? The data guides the conversation toward what matters most for the learner's development.

So imagine pulling up a dashboard before your debriefing session and seeing that a particular learner excels at recognizing changes in the patient's status. Man, they're right on it. But they are consistently delaying calling the provider.

This visibility enables you to give precise feedback, coaching and remediation. You're not just saying good job or you need to improve your communication. You're saying your assessment skills are solid, but we need to build your confidence in speaking up when you sense something is wrong.

So this approach supports targeted debriefing without increasing your burden as a facilitator. You're not spending hours compiling spreadsheets, analyzing numbers to try to remember the details of what you saw during the multiple simulations that you looked at over the last couple of hours.

Instead, the analytics is gonna organize that data and allow you to bring in your nursing expertise and include the relationship of the learner in the process.

And as Christy mentioned before, consistency is another big benefit. When our debriefing is guided by objective data, the quality of the feedback becomes more consistent across evaluators, cohorts, and semesters.

Now, it doesn't mean debriefing becomes robotic or scripted. It just means you've got a consistent foundation. The approach remains learner-focused and responsive to each learner's needs, probably even more so than before you had the analytics.

AI-enhanced analytics does not replace the art of debriefing. Instead, it brings in the science and strengthens it. It gives us the objectivity we need to make every debriefing conversation count.

So, Christy, you have seen debriefings across many programs. What's the one piece of educator feedback that you believe AI-enhanced analytics could most reliably assist with?

I'm thinking about how we want our learners to practice good practice. So we give them all of the information that they need prior to the scenario so that they can care for the patient and the family and practice good practice. I want to debrief with good data. So it kind of seems a little bit aligned there. I think consistency is key here, and I think having consistency across educators would support educators with the high amount of cognitive load that was mentioned earlier, and also the standardization that learners feel psychologically safe. They know that we're, you know, going to be debriefing in a very standardized way using a standardized framework, using data from that scenario as well.

So I think that would be my answer. How about you, Maggie?

Yeah. I agree. The whole data piece, to me, I am a data geek. And so I would spend hours looking across spreadsheets and numbers and compiling them. And this just really makes my life a lot easier.

I do see some questions coming up in the chat. I am gonna remind you to go ahead and post your questions in the Q&A section because we're going to do some Q&A at the end. But I am going to go and address one of them that I saw come up here, and it said, how many sessions does a learner need to complete to get this feedback?

The truth is you can get it with just one session. And then the more sessions you do, the more the data comes in and the more you get a better picture. But, literally, from one session, you have data that you can work with.

So when we talk about AI-enhanced debriefing, it's easy to kind of think of it as a classroom tool because I think that's where our heads naturally go. We think of it as something that lives in the simulation lab and ends at graduation.

But I'm here to tell you it extends beyond the classroom. It extends into the entire workforce development continuum.

So I'd like to take a minute to walk through what this looks like at each stage of that learning continuum and the workforce development continuum and why it matters for both academia as well as for employers.

So first we'll look at the pre-licensure phase. In pre-licensure programs, one of the most valuable applications of AI-enhanced analytics is the early identification of gaps in clinical judgment.

So here's the problem. In traditional approaches, we might not see a pattern of our learners' decision-making until late into the program. We might be into their next-to-last semester before we start to see these patterns. And sometimes we don't even see them until they're in their capstone or maybe a preceptorship in the clinical area. And at this point, our remediation options are really limited.

But with the analytics, the patterns become visible much earlier. We can see them at an earlier stage.

So if a learner is consistently strong in assessment but struggles with prioritization, which many of us have indicated is an issue, that insight might emerge in a matter of a couple of weeks rather than months.

Think of how helpful this would be in our short, accelerated programs. As faculty, we can now intervene with targeted coaching and add additional practice opportunities before our learners reach those high-stakes clinical experiences and before they sit for the NCLEX.

But I want to caution you: it's not about labeling a learner as being at risk. It's about providing targeted support, recognizing that every learner has areas that need further improvement. Analytics help us identify those areas quickly and respond appropriately.

Now before I talk about the transition to practice phase, I'd like you in the audience to weigh in on another question.

So what do you believe is the hardest part of onboarding new grads?

A, they don't know what their gaps are.

B, we have generic onboarding plans that don't fit.

C, they've had inconsistent pre-licensure preparation.

Or D, the time required to assess judgment.

So Christy, I'm going to ask you to respond to that question while the audience is providing their thoughts. What do you think?

Sure. I feel like sometimes we need an "all of the above" option again.

But I'm looking at A, they don't know their real gaps. And so learners, they're doing the best they can. They're learning the content. They are doing the exercises that we've provided for them. They are attending simulations. And for the most part, they want to be so prepared.

And I just love that about a new nurse.

But sometimes they don't know their real gaps. You don't know what you don't know. So I really like them being able to see their own data with these AI insights as well so that they can have objective data and they can track their own data across their simulations as well, not comparing themselves to others, but only looking at their own data and really seeing where they can improve.

So I think it could help them know their gaps as well, but honestly, I could speak to B, C, and D as well.

I agree. And I purposely didn't put an "all of the above" option because I was trying to make people make a choice, but that's just me.

You know, Shelly had a really good point here: this really applies all across the continuum. That's what I said, it's not just for new nurses or pre-licensure learners. It goes way beyond that to the whole continuum. It's all part of the human experience.

Trying to get the word out. So A and B seem to be the most popular, although "all of the above" became the write-in vote in this category.

Another point Joyce made is that onboarding is inconsistent, given the number of preceptors and orientation time being cut short. Yeah. Because it's like, hey, we need you on the floor, let's get out there. So that's another factor that comes into that particular phase.

So this is really, as we're seeing, where that gap between eligibility and readiness becomes visible.

And I know from our employers, I consistently hear graduates arrive with their licenses in their hand, but they lack the confidence in that clinical decision making and prioritizing under pressure and in knowing how and when to escalate.

So when nursing programs are using those AI-enhanced analytics throughout their curriculum, the graduates arrive at their first jobs with a different foundation.

They've received consistent, data-informed feedback about their clinical reasoning and performance, and they've learned how to recognize their own patterns, identify their strengths, and actively work on their growth areas.

Earlier, I saw Susan mention that constructive-criticism piece. So now they know how to handle that better, and they can self-reflect.

For our transition-to-practice programs and our nurse residencies, AI-enhanced analytics creates a slightly different starting point. Instead of starting with those generic competency checklists, preceptors and educators can now focus their efforts on specific areas: where does each new nurse need support? That's the thing we don't have time to do when we're dealing with generic checklists. If the data shows that a new graduate excels at assessment but needs more practice in prioritization, we can now focus on those skills.

And from the employer's perspective, the need is really pretty straightforward. They need nurses who can think, prioritize, and act from day one. Now, this doesn't mean our new grads are all going to have the wisdom of a twenty-year veteran. It means they arrive with a foundation that's been intentionally developed, consistently reinforced, and objectively measured. They've practiced recognizing deterioration, prioritizing interventions, and escalating appropriately, not just in theory but in realistic simulations with structured, consistent feedback.

One of the most significant benefits is that consistency across cohorts and educators. When clinical judgment is developed by guided data and it's aligned to the standards, there's less variability in what the graduates know and can do. So our employers don't have to worry whether a new hire came from a program with rigorous simulation debriefing or with a more informal debriefing. That alignment between the education and the workforce expectations becomes so much tighter, and it's much more predictable, and gives them more of a reliable pipeline of practice-ready nurses.

So bottom line, AI-enhanced debriefing creates a cross-continuum view of clinical judgment development and it benefits everyone, the learners, the faculty, the new grads, the employers, and ultimately, what we're all concerned about, our patients.

So when we see patterns earlier, we can intervene more precisely and align education with workforce needs.

Ultimately, we build a stronger nursing workforce.

I do like this comment from Shelley: "We have quite a bit of production pressure in practice."

Shelley refers to it as "instant nurse thinking, just add water."

Yeah. I agree with you completely, and it's just not the way we are built as humans. So we've gotta come up with the tools and use the tools to get around that.

We've covered a lot, but I want to make sure I give you a couple of takeaways. One: shift debriefing from general reflection to targeted development. Instead of limiting yourself to questions like "how do you feel?" and "what would you do differently?", use more targeted questions that encourage deep analysis.

So ask questions like: "Look at your performance across the last three scenarios. What did you notice? What patterns stand out to you?" This shifts the debriefing from just feelings to including decision-making behaviors.

Two: use analytics to guide your conversations. Don't label learners as good or bad. Instead, use the data as a mirror that reflects the patterns you're seeing. Say things like, "Here's what we see across your scenarios. You're consistently strong in X, but we're also seeing a pattern where Y is happening."

Let's explore why that might be and have them look at the data and then reflect.

Three, support your faculty consistency with shared data. The beauty of this data is that everybody can take a look at it. Share it across the faculty. Share it across the staff. When everybody has access to that data, not only does your debriefing become more consistent, but you can use it to improve your program and your metrics across your curriculum.

And four: build learner confidence with transparency. Let those learners see their data. Let them own it; it is their data. Help them move from a passive role to an active role.

That transparency is going to empower their growth. It's going to reduce anxiety because they know what to expect, and metacognition, the ability to think about one's own thinking, becomes part of that clinical judgment.

So, Christy, if you had to choose one of these steps to implement first, which one would you choose?

I want all of them. Christy tends to want it all, but I will choose. Insight number one really speaks to me after years of faculty development and debriefing. If debriefing is where clinical judgment is built, and it's arguably the most important part of the simulation, we need to really harness that time and have the most effective debrief that we can.

And we know we've talked about why that's a challenge. But shifting debriefing from general reflection to targeted development really harnesses the power of the debrief. With whatever debriefing framework you choose, there is a portion where you can look at some of this data. I'm used to using PEARLS.

So after we unpack those feelings and talk through their summary of how they saw the scenario, there's an advocacy-inquiry part, and that is where this data can really help. Looking back, I really wish we'd had this kind of data for myself and for my faculty, for consistency.

So I kind of cheated in my answer because I think I've said two insights in here as well. But debriefing can be such a missed opportunity sometimes, and we want to make sure that we're utilizing that debrief time to build clinical judgment. So I really see that that's something I would do first.

Thank you, Christy. I appreciate that. You know, I've worked with Christy a while now, and when she says she wants it all, I agree. She wants it all. But all of us as nurses feel that way. We want it all, and we want it now.

I liken it to Veruca Salt from Willy Wonka: I want it all, and I want it now. So thank you for listening to us and working with us on the power of AI-enhanced debriefing. I'd like to invite our moderator, Tracy, to come back in and help us with Q&A. If you have questions, please put them up, and we'll address some of them for the rest of our time together.

Alright, I'll read out the first one from Amanda. Does it track compassionate care and communication as well as clinical skills?

Maggie, do you want to take that one, or would you like me to take that one?

Go ahead, Christy. I'll let you take that one. I'll take the next one.

Okay. That sounds great. So currently, it captures critical actions and performance gaps. We're having a lot of conversations right now about capturing communication, because it's really tough for AI to capture communication consistently.

And we want to make sure that what it's capturing is correct. So currently, it's capturing the critical actions of the learner: the assessment skills, the interventions, all of those sorts of things. The facilitator can still coach in the debrief about communication, not only the words but also how that communication went, body language and all of those things as well. So, to summarize the answer to your question: not yet.

Right. Maggie, do you have a response to that as well?

No. Christy summed it up perfectly.

Okay. One more for you, Christy, and there's some version of it in the Q&A as well. But how do learners respond when they see their own performance data over time?

Oh, that's a really good question. So when performance data is used well, and I think that that's the key, learners respond positively because the key is how it's introduced.

So data on its own is not the debrief. It's a tool within a structured, psychologically safe conversation.

So when learners review their own performance over time, and I mentioned it a little bit earlier, but not in comparison to peers, something important happens. They stop fixating on individual mistakes, and they start seeing their patterns in their thinking, decision-making, and actions over time. So this ultimately can build self-awareness and confidence, and it strengthens reflective practice.

So instead of learners asking, you know, what did I do wrong? They begin asking, what am I noticing sooner? What am I prioritizing differently? When am I escalating more effectively? So that kind of longitudinal objective feedback can help learners take ownership of their learning and apply insights directly to future clinical situations. So I think it's important to house the data within the psychologically safe, effective debrief that follows the standards.

That's great. Thank you for that.

Maggie, would you mind taking this one? How do you bring faculty with different debriefing styles onto a more consistent approach without losing their individual teaching strengths?

This is, again, where the AI-enhanced analytics comes in because you can bring everybody into the room. Bring them in over a lunch. You know, lunch attracts everybody. And just spread the data out on the table and start looking at it. Actually, I created a war room. I called it the war room, and we just put the data all over the walls. And people could just kind of walk through at any time when they had a few minutes to look at it.

And then we got together and we started discussing it. And really, when people start discussing and the faculty gets in and starts sharing, the ideas come out and they actually inspire each other.

So they got tips and tools from each other that they incorporated, and they were able to look at things more consistently while still feeling like they were keeping their own identities. And when they're working individually in those one-on-one coaching sessions, only that individual faculty member knows what that particular learner needs at that time.

On that note, it also decreases anxiety among the learners, because they know they won't have to perform differently for faculty A versus faculty B; they're going to get a more consistent evaluation. So it becomes less threatening, creating an environment more conducive to learning for them, too.

Great. Thank you.

We are at time, but I want to take this moment to thank everybody for attending our webinar. If there's any more information, please feel free to reach out. We're happy to share whatever you need.

Thank you, Christy and Maggie.

Speakers

Christine Vogel, MSN, RN, CHSE, CHSOS

Lead Nurse Educator, UbiSim

Christine Vogel is a clinical nurse, simulationist, and nurse educator who believes in the capacity of every nurse learner to realize their full potential by engaging in deliberate practice and choosing to start with the "Basic Assumption" that everyone is intelligent, capable, cares about doing their best, and wants to improve. As Lead Nurse Educator at UbiSim, Christine is actively engaged in designing, piloting, and evaluating evidence-based immersive VR simulations for nurse learners. In addition to her 25+ years in nursing, she has over a decade of experience in nursing academia where she developed, facilitated, and evaluated high-fidelity simulations in virtual reality as well as other modalities.

Maggie Major, RN, Ed.S.

Senior Nursing Simulation Customer Success Manager

Margaret "Maggie" Major is a dynamic educator and innovator in the field of nursing education. With an Ed.S. in Education specializing in Educational Technology from Walden University, Maggie brings a wealth of experience from her roles in secondary and post-secondary education. Her expertise spans curriculum development, online learning, and educational technology integration, making her an invaluable asset in advancing nursing education through cutting-edge simulation software. Her background as a Nurse Aide Program Coordinator and long-standing Adjunct Faculty member at Harrisburg Area Community College has given her unique insights into the evolving needs of nursing education. By championing the use of simulation and technology in nursing education, Maggie is playing a crucial role in shaping the future of healthcare education, preparing nurses who are confident, competent, and technologically adept for the challenges of modern healthcare delivery.
