Protecting patient privacy is essential in MRI research projects
Close, a member of the National Imaging Facility at the University of Sydney, presented an overview of issues to consider throughout the lifecycle of MRI research projects, spanning a wide range of security topics, from encrypting DICOM data transfers to defacing structural MRI, to disclosure risks in published results.
Chiefly, he proposed using the “Five Safes” framework to ensure confidentiality in MRI research: safe projects, safe people, safe settings, safe data, and safe outputs.
“We seek to apply multiple levels of privacy controls that minimize identifiable information and maximize data security,” he said.
Close posed – and answered – a number of questions related to the security of MRI data.
Q: Is the use of data in the design of the research project appropriate? Are the risks incurred justifiable?
The safety of a research project is usually ensured by your institution’s Human Research Ethics Committee (HREC) and involves a risk analysis that weighs the likelihood that a subject can be identified from the data against the impact of a privacy breach.
When the potential impact of a privacy breach is low but subjects could plausibly be identified from names in the metadata associated with the MR images, a high level of security, such as the ISO 27001 standard, should be applied, according to Close.
On the other hand, if the impact of a potential breach is small and it is unlikely that a person can be identified, it becomes more feasible to make the dataset public. HREC risk analyses can also place studies in an intermediate zone, where it may be appropriate to store data on standard research infrastructure with moderate levels of security.
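The qualitative triage described above can be sketched in code. This is an illustrative mapping only: the tier names, argument values, and thresholds are assumptions for the sketch, not part of any HREC process or standard.

```python
# Illustrative sketch of the risk triage described above: likelihood of
# identification vs. impact of a breach. Tier names are invented.
def security_tier(identification_likelihood: str, breach_impact: str) -> str:
    """Map a coarse risk assessment to a storage tier.

    Both arguments are "low" or "high".
    """
    if identification_likelihood == "high":
        # Plausibly identifiable data warrants strong controls
        # (e.g., ISO 27001-style certified storage).
        return "high-security storage"
    if breach_impact == "low":
        # Low impact and low likelihood: public release may be appropriate.
        return "public release candidate"
    # The intermediate zone: standard research infrastructure with
    # moderate security controls.
    return "moderate-security research infrastructure"
```

In practice the HREC’s assessment is qualitative and case-by-case; the point of the sketch is only that likelihood and impact jointly determine the required level of control.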
“Ultimately, the principal investigator [PI] is responsible for data confidentiality. However, institutions should provide the means to make this achievable for PIs,” Close said.
Q: Can the user be trusted to use the data appropriately? Are the people with access to the data authorized?
“It basically means verifying that the user is who they say they are,” Close said.
For people accessing data, institutional accounts, such as those on university or hospital networks, are preferred because they are linked to a person’s official identity, as opposed to personal email or Facebook accounts, for example. In addition, they support multifactor authentication within the institution and offer dedicated help desks for tasks such as password recovery.
Close recommends using data sharing platforms specifically designed for imaging data, such as XNAT, LORIS, COINS, or the commercial provider Flywheel.io. These platforms are effective because they store the data in a central location, reducing the proliferation of unsecured copies that can occur when someone involved in the study copies images or other data onto a personal computer, he said.
Q: Does the installation limit unauthorized use or errors? Has adequate and sufficient protection been applied to the data?
For those who aren’t IT security experts, Close said his top tip for securing data is to leave it to the experts.
“Doing it yourself is difficult to say the least,” he said.
Hackers with access to the physical storage infrastructure or to your device can easily gain unauthorized access to the data stored there unless it is encrypted. If you are using a cloud provider or an OpenStack system, be sure to set your firewall rules correctly. Do encrypt data at rest, but don’t rely on that alone, because it only protects the physical layer, Close said.
An important caveat when transferring DICOM data between sites over the internet is that many instruments do not support encrypted transfer. Use a virtual private network (VPN) tunnel for site-to-site transfers to ensure the data is encrypted, he said.
You can also use the RSNA Medical Imaging Resource Center (MIRC) Clinical Trial Processor (CTP), which uses HTTPS to transfer data. As the name suggests, Close said, CTP is approved for use in clinical trials.
“It’s an added bonus,” he said.
Q: Does the data itself contain enough information to allow a breach of confidentiality? Is there a risk of disclosure in the data itself?
The first step in reducing the risk of disclosure is to remove as much sensitive metadata as possible. DICOM data typically contains protected health information (PHI). Unfortunately, the DICOM standard does not include a complete list of the various fields used to store PHI.
It’s safer to employ an explicit inclusion policy, keeping only the fields required for your analysis, rather than explicitly excluding fields and risking missing some, Close said. A good option for neuroimaging data is the Brain Imaging Data Structure (BIDS), which is based on a minimal set of required fields.
There are also a number of ways to edit DICOM metadata to remove PHI, Close said. Consider DICOM editing toolkits such as DCMTK and Pydicom, for example, which can be fully automated and are designed to help you avoid human error.
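The explicit-inclusion approach can be sketched in plain Python. This is an illustrative example operating on an ordinary dictionary rather than a real DICOM dataset; in practice a tool like Pydicom or DCMTK would operate on the actual files, and the allowlist (invented here) would be tailored to your analysis.

```python
# Allowlist ("explicit inclusion") scrub: keep only the fields the analysis
# needs. Anything not listed, including unanticipated PHI fields, is dropped.
# The field names below are illustrative, not a vetted allowlist.
ALLOWED_FIELDS = {"Modality", "MagneticFieldStrength", "RepetitionTime", "EchoTime"}

def scrub_metadata(metadata: dict) -> dict:
    """Return a copy containing only explicitly allowed fields."""
    return {key: value for key, value in metadata.items() if key in ALLOWED_FIELDS}

raw = {
    "PatientName": "DOE^JANE",       # PHI: must not survive the scrub
    "PatientBirthDate": "19800101",  # PHI
    "Modality": "MR",
    "RepetitionTime": 2000.0,
}
clean = scrub_metadata(raw)  # keeps only Modality and RepetitionTime
```

The advantage over an exclusion (denylist) policy is that an unknown or vendor-private field containing PHI fails safe: it is removed by default rather than retained.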
When it comes to facial data in MR images, a 2009 HIPAA ruling deemed high-resolution structural MRI datasets comparable to full-face photographs, since volume-rendering software that can reconstruct a face from them is freely available.
The safest method is skull-stripping, Close said, since blurring of the face can be reversed. Defacing methods and tools include afni_refacer, mridefacer, pydeface, and Quickshear.
Defacing is required for public datasets, although the original should probably be kept in secure storage for reference. AFNI’s refacer may be suitable for automated processing, according to Close.
In addition, defacing has minimal impact on subsequent preprocessing, he said.
Q: Will the results of the data lead to disclosure?
Close warned that one way research results can lead to disclosure is called a linkage attack, in which multiple datasets are combined to narrow down the identity of individuals in the study.
For example, the scan metadata may record the time and place of a scan and the hospital where it was performed, which an attacker can link with other geolocation data to potentially identify a person in the study. The risk of these linkage attacks increases with the amount of metadata included in the dataset, Close said.
Ultimately, make sure you’re using well-tested infrastructure and platforms where possible, and “strip” your data before you publish it, he advised.
“The setup you use will depend on the consent given by the subjects in the study and the type of project you are working on,” he concluded.
Copyright © 2021 AuntMinnie.com