Converting MRI scans into 3D-printed models for surgical planning

Researchers at MIT and Boston Children’s Hospital have developed a system that can take MRI scans of a patient’s heart and convert them into a physical model that surgeons can use to plan surgery.

The models could provide a more intuitive way for surgeons to assess and prepare for the anatomical idiosyncrasies of individual patients. “Our collaborators are convinced that this will make a difference,” says Polina Golland, a professor of electrical engineering and computer science at MIT, who led the project. “The phrase I heard is that ‘surgeons see with their hands,’ that the perception is in the touch.”

Determining the boundaries between distinct objects in an MRI image is one of the central problems in computer vision, known as ‘image segmentation.’ But general-purpose image-segmentation algorithms aren’t reliable enough to produce the precise models that surgical planning requires.

Typically, the way to make an image-segmentation algorithm more precise is to augment it with a generic model of the object to be segmented. Human hearts have chambers and blood vessels that are usually in roughly the same places relative to each other. That anatomical consistency could give a segmentation algorithm a way to weed out improbable conclusions about object boundaries. However, most cardiac patients require surgery precisely because the anatomy of their hearts is irregular.
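That "weed out improbable conclusions" step can be pictured as weighting per-voxel image evidence by an atlas of where each structure usually sits. The function, scores, and atlas values below are illustrative assumptions for the general technique, not the researchers' method:

```python
import numpy as np

def segment_with_prior(data_score, atlas_prior, threshold=0.5):
    """Keep a voxel only if it both looks like heart tissue (data_score)
    and lies where a generic anatomical atlas expects heart tissue."""
    posterior = data_score * atlas_prior
    return posterior > threshold

# Toy 2x2 image: the top-right voxel looks bright (0.9), but the atlas
# says heart tissue is never there (prior 0.0), so it is weeded out.
data = np.array([[0.9, 0.9],
                 [0.9, 0.1]])
atlas = np.array([[1.0, 0.0],
                  [1.0, 1.0]])
print(segment_with_prior(data, atlas))
# [[ True False]
#  [ True False]]
```

As the article notes, a prior like this only helps when the patient's anatomy resembles the atlas, which is exactly what fails for many cardiac surgery candidates.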

In the past, researchers have produced printable models of the heart by manually indicating boundaries in MRI scans. But with around 200 cross sections in one of these high-precision scans, manual segmentation can take 8 to 10 hours per scan.

“They want to bring the kids in for scanning and spend probably a day or two doing planning of how exactly they’re going to operate,” Golland says. “If it takes another day just to process the images, it becomes unwieldy.”

Golland’s solution was to ask a human expert to identify boundaries in a few of the cross sections and let algorithms take over from there. The team’s strongest results came when the expert segmented one-ninth of the total area of each cross section.

In that case, segmenting 14 patches and letting the algorithm infer the rest yielded 90% agreement with expert segmentation of the entire collection of 200 cross sections. Human segmentation of three patches yielded 80% agreement.
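The 90% and 80% figures are agreement scores between the algorithm's output and the expert's full segmentation. One plausible reading is a simple voxel-wise overlap; the metric and toy volumes below are assumptions for illustration, not the researchers' exact evaluation:

```python
import numpy as np

def voxel_agreement(pred, expert):
    """Fraction of voxels on which two binary segmentations agree."""
    assert pred.shape == expert.shape
    return float(np.mean(pred == expert))

# Two toy label volumes (200 cross sections, 1 = heart tissue).
rng = np.random.default_rng(0)
expert = (rng.random((200, 64, 64)) > 0.5).astype(np.uint8)
pred = expert.copy()
# Flip about 10% of voxels to simulate algorithmic disagreement.
flip = rng.random(pred.shape) < 0.1
pred[flip] = 1 - pred[flip]

print(round(voxel_agreement(pred, expert), 2))  # ≈ 0.9
```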

Together, human segmentation of sample patches and the algorithmic generation of a digital, 3D heart model takes about an hour, with the 3D-printing process taking a couple of hours more.

Currently, the algorithm examines patches of unsegmented cross sections and looks for similar features in the nearest segmented cross sections. But Golland believes that performance might be improved if it also examined patches that ran obliquely across several cross sections. This and other variations on the algorithm are the subject of ongoing research.
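One way to read that patch-matching step: for each patch of an unsegmented slice, find the most similar patch in a nearby segmented slice and borrow its labels. A minimal nearest-patch sketch under that reading (the function, the sum-of-squared-differences similarity, and the patch size are illustrative assumptions):

```python
import numpy as np

def propagate_labels(unseg_slice, seg_slice, seg_labels, patch=8):
    """Label each patch of an unsegmented slice by copying the labels of
    the most similar patch (lowest SSD) in a nearby segmented slice."""
    h, w = unseg_slice.shape
    out = np.zeros_like(seg_labels)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            target = unseg_slice[i:i + patch, j:j + patch]
            best, best_ssd = (0, 0), np.inf
            # Exhaustive search over candidate patches in the segmented slice.
            for di in range(0, h - patch + 1, patch):
                for dj in range(0, w - patch + 1, patch):
                    cand = seg_slice[di:di + patch, dj:dj + patch]
                    ssd = np.sum((target - cand) ** 2)
                    if ssd < best_ssd:
                        best_ssd, best = ssd, (di, dj)
            di, dj = best
            out[i:i + patch, j:j + patch] = seg_labels[di:di + patch, dj:dj + patch]
    return out

# Toy demo: a 16x16 slice tiled with 4x4 patches of distinct intensities.
seg = np.repeat(np.repeat(np.arange(16.0).reshape(4, 4), 4, axis=0), 4, axis=1)
labels = (seg > 7).astype(np.uint8)
print(np.array_equal(propagate_labels(seg, seg, labels, patch=4), labels))  # True
```

The oblique-patch variation Golland describes would extend the search from a single neighboring slice to patches cutting across several cross sections at once.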