We sat down with Worrell’s UK Human Factors Engineering experts for a roundtable conversation about the European MDR. Led by Nick Choofon, the group discusses the most common questions we’re hearing on the subject and offers some helpful tips along the way.

NICK CHOOFON
Director of Human Factors Engineering

CAIT McCARTHY
Human Factors Engineer

MATT HYLAND
Senior Human Factors Engineer
What is the MDR? How does it differ from the former MDD?
Nick 0:00
Today, we’re going to cover three things. First, what the MDR is from a high-level perspective for anyone who isn’t familiar with it, and when and why those changes are coming into play. Secondly, how our clients are implementing processes related to the MDR, and how that affects the usability engineering file specifically. And finally, how Worrell have been working on projects specific to MDR post market surveillance and continuous improvement, and the process behind it. So to start off with, I’m going to pass you over to Cait. Would you mind informing everyone what the MDR is, when it comes into play, and why it is important for our clients?
Cait 0:43
Yes, of course. So, the MDR, which stands for the Medical Device Regulation, is a transition from the MDD, the Medical Device Directive. The MDR was first formally published in 2017, so the documents relating to the MDR are dated from 2017, but its application was originally meant to begin in 2020. Due to COVID-19 affecting everything, including the rollout of the MDR, it ultimately came into application in May 2021, and that is the point from which manufacturers have had to adhere to the regulations enforced by the MDR.
The biggest difference, let’s say, from the MDD to the MDR is the far more robust information and expectations around post market surveillance and vigilance requirements that the MDR now sets out. This is well illustrated by the sheer volume of guidance the MDR provides compared to the MDD. When I say post market surveillance, which is what we’re focusing on a little bit more today, I mean looking at how we can analyze products that are actually on the market, how well they are performing, and ensuring that no use error or harm is being presented to our end users. This, of course, is also much more heavily reported on and expected under the current MDR. So for our clients, the MDR’s emphasis on post market surveillance and vigilance requirements is going to be highly evident moving forward.
Why Should I Conduct Post Market Surveillance?
Nick 2:18
So as Cait’s highlighting there, the area our team have been focusing on is post market surveillance and the role it plays in monitoring the safety and effectiveness of medical devices that are currently on the market. The current HF standard provides little guidance on post market surveillance and how it should be performed. What we’re seeing now is that manufacturers not only need to conduct premarket investigations of the performance and efficacy of a device, but they also need to conduct sufficient post market surveillance, and that means setting up a robust plan for monitoring data on devices that are already on the market. Clients are specifically doing this because they’re identifying devices on the market that were released before 2015, when IEC 62366-1 was published, for which no HF data is available, or devices released after 2015 that have little or weak HF engineering files. These clients have been proactive, incorporating continuous development to ensure that devices pose no risk to users and are usable and effective.
Nick 3:28
It could be argued that this is just opening a can of worms. But as Cait was saying, there’s much more focus than under the MDD on having a post market surveillance plan in place, and manufacturers will have to submit that data to the EUDAMED database, which has been developed under the MDR. This database is similar to the MAUDE database that we see in the US, where complaints and adverse events are listed. And in general, it’s a product lifecycle responsibility. So clients are keeping up this surveillance of how users are using the device on the market, and they’re proactively gathering that data rather than just reviewing, say, adverse events or customer complaints.
What Steps Should I Follow?
Nick 4:14
So how have clients been doing this? Breaking it down into some simple steps: clients have been assessing their product portfolio and determining the level of risk for those devices based on how much usability work has been conducted (e.g., little to none) and on the classification of the device, whether it is a Class I, Class II, or Class III device. They’ll then review their risk management file and the data received from the market, and, depending on those inputs, conduct supplementary usability evaluations to support their post market data or file. These usability evaluations can come in many forms. Recently, we’ve helped clients conduct validation studies on devices already on the market, or even in-home ethnographic studies, that is, studies focused on gathering data from participants using devices in their own homes and receiving feedback on them, as compared to a simulated setting. As many of these products on the market have had no HF data, we have been helping our clients cluster numerous devices into one system and conduct a human factors evaluation on that system to support the surveillance. By grouping devices, this becomes a much more efficient way to test many devices collectively that have been on the market for many years.
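To make that triage a little more concrete, here is a minimal sketch in Python of the kind of decision logic described above, applied to a hypothetical portfolio. The device names, risk classes, file-strength labels, and decision rules are illustrative assumptions for the example; they are not criteria taken from the MDR or from any particular client’s process.

```python
# Illustrative sketch only: triaging a hypothetical portfolio into devices that
# likely need supplementary usability evaluation versus those whose existing
# file is likely sufficient. Fields, classes, and rules are assumptions.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    risk_class: str              # "I", "IIa", "IIb", or "III"
    hf_file_strength: str        # "none", "weak", or "robust" (a judgement call)
    released_before_2015: bool   # i.e. before the usability standard was published

def needs_supplementary_evaluation(device: Device) -> bool:
    """Flag devices whose post market file likely needs supplementary HF work."""
    # Devices released before the standard, or with no HF data at all,
    # are the clearest candidates for remediation work.
    if device.released_before_2015 or device.hf_file_strength == "none":
        return True
    # Higher-risk classes with only a weak file are also flagged.
    if device.risk_class in ("IIb", "III") and device.hf_file_strength == "weak":
        return True
    return False

portfolio = [
    Device("Legacy infusion pump", "IIb", "none", released_before_2015=True),
    Device("Auto-injector v2", "IIa", "robust", released_before_2015=False),
]

for d in portfolio:
    verdict = "evaluate" if needs_supplementary_evaluation(d) else "file likely sufficient"
    print(d.name, "->", verdict)
```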
4 Tips for Conducting Successful Human Factors Remediation Work

Matt 5:50
Having run a few remediation projects already, we’ve learned quite a few lessons, which I thought would be good to pass on here to those that are listening, particularly for those of you looking to embark on a remediation project where additional HF work is required. What that means is that you’ve established that following the [UOP] process that Nick touched on earlier is not applicable for your device or devices. I’ve got four main areas of helpful tips for a smooth project.
Be Honest When Reviewing the Quality of Your Existing Files
Now, just like many of our clients, you may well have access to old DHFs and old documentation that were often created a while ago. This doesn’t necessarily mean they require updating, but our advice is to be mindful that guidance has changed and processes are somewhat more defined now than they used to be. So while it may seem sufficient to utilize, let’s say, an old hazard or risk file, it may not be the most friendly document when you have to link tasks to study protocols, for example, compared with a more refined or newer version of a use related risk assessment, which is task based and linked to your task analysis, etc. That is exactly what you need for performing your human factors validation testing.

Define a Clear Scope of Devices or System of Components To Be Tested
Now the second tip is to define a clear scope of devices, or system of components, that you’re going to be covering. Basically, you need a clear plan on which devices, or which systems and which components of those systems, you need to be testing. Nick touched on this a little earlier. Probably the best way to do it is by looking at your portfolio and prioritizing it: looking at your best-selling device first, so from a business focus. Secondly, you could look at the highest risk device first, or even the devices within your portfolio that you deem to have the weakest usability files. Now some of our clients, as Nick mentioned before, have clustered similar devices, or at least devices that are used together as a system, basically for efficiency when it comes to running projects, when you’re looking at budget and time, potentially running them within one study, and also having strong rationales for why devices have or have not been chosen. One thing to remember here is that all of this should be led by devices that have had updates to their user interface design since the release of the standard in 2015.
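As a rough illustration of that prioritization, the sketch below scores a small hypothetical portfolio by risk class, strength of the usability file, and sales rank, and keeps track of which cluster each device belongs to. The weights, field names, and example devices are assumptions chosen for illustration, not a prescribed scoring method.

```python
# Illustrative sketch only: ordering a hypothetical remediation backlog by a
# simple composite score (risk class, weakness of HF file, sales rank) while
# retaining each device's cluster so grouped devices can share one study.

devices = [
    # (name, cluster/system, sales_rank, risk_class, hf_file_strength)
    ("Pump A",    "Infusion system", 1, "IIb", "weak"),
    ("Line set",  "Infusion system", 3, "IIa", "none"),
    ("Monitor B", "Standalone",      2, "I",   "robust"),
]

RISK_WEIGHT = {"I": 1, "IIa": 2, "IIb": 3, "III": 4}
FILE_WEIGHT = {"robust": 0, "weak": 1, "none": 2}

def priority(dev):
    name, cluster, sales_rank, risk, hf_file = dev
    # Higher risk and weaker files raise priority; a better (lower) sales rank adds a small boost.
    return RISK_WEIGHT[risk] + FILE_WEIGHT[hf_file] - 0.1 * sales_rank

for name, cluster, sales_rank, risk, hf_file in sorted(devices, key=priority, reverse=True):
    print(f"{name:<10} cluster={cluster:<16} class={risk:<3} HF file={hf_file}")
```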

Have a Traceable History of Design Revisions Since 2015
That brings me on to point number three: having a traceable history of design revisions since 2015. What we need to focus on here is specifically those changes that have affected usability and user interactions, in other words the elements that we define as the user interface of the device. So basically, this covers your DHF documentation, and particularly any means of recording a design iteration: for example, your requirements specification, ECRs or change requests, or a design story. This can come in many different forms, and different clients have different ways of capturing this information. Lastly, there are risk files; for human factors in particular, this would be your use related risk assessment, in whatever format that comes.
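For teams capturing this history in spreadsheets or ad hoc documents, a minimal record along the lines of the sketch below can help keep the user-interface-relevant changes traceable and linked to risk items. The fields and example entries are assumptions made for illustration, not a required format.

```python
# Illustrative sketch only: a minimal record for tracing design revisions since
# 2015 and flagging those that touched the user interface. Fields and example
# entries are assumptions, not a mandated structure.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignRevision:
    revision_id: str              # e.g. an ECR or change-request number
    date: str                     # ISO date of the change
    description: str              # what changed
    affects_user_interface: bool  # did it change how the user interacts with the device?
    linked_risk_items: List[str] = field(default_factory=list)  # e.g. use related risk assessment IDs

history = [
    DesignRevision("ECR-0142", "2016-03-10", "Relabelled dose selector", True, ["URRA-07"]),
    DesignRevision("ECR-0201", "2018-09-02", "Internal firmware refactor", False),
]

# Pull out only the revisions that matter for the usability engineering file.
ui_changes = [r for r in history if r.affects_user_interface]
for r in ui_changes:
    print(r.revision_id, r.date, r.description, "->", r.linked_risk_items)
```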

Have a Robust Human Factors Project Plan
Lastly, tip number four: from our experience running studies so far, I’d say make sure you have a robust Human Factors project plan for these activities. Really do plan your activity, and scale how much effort you need so that each activity gets enough attention. Obviously, through project planning, we’re looking at scheduling and resourcing, so identify who’s going to be needed and what types of personnel with what skill sets, for instance engineering teams, quality teams, risk, even sales, marketing, or medical affairs. And then, obviously, budget. What you need to do here is agree this plan with all the stakeholders that will be involved and share the overall project objectives with them, so you get their support and buy-in. It’s about creating an awareness within the company, and within the company’s management, of what needs to happen and why.