Glimpsing into the Future of 3D Printing at Big Data Toronto 2017

Article by Rod Mackay updated June 29, 2017


I attended the Big Data Toronto 2017 conference on behalf of Javelin this year. It was the second installment of the conference, and I was truly amazed at the quality of the event. What really blew me away is how Toronto is becoming a hub for big data and artificial intelligence. I attended to discover the latest techniques in machine learning and deep neural networks, and how they apply to 3D printing. This blog breaks down how artificial intelligence will directly affect 3D printed patient-specific medicine.

Background on AI

Let’s start by describing what artificial intelligence means for image recognition. Our end goal while working with radiologists and surgeons is to take MRI scans and effortlessly segment out the organ or bone in question. Doing this manually takes hours upon hours of work. If, instead, we teach a computer to ‘recognize’ what a heart looks like, build a software package that searches for hearts, and train it to distinguish hearts from non-heart organs, then we can autonomously find the organ in question without lifting a finger. Best of all, if you can train the computer to find one organ successfully, you can use the same software package to find any organ, depending on how you train it. It sounds a little wild at first, but when you dive into the problem it becomes quite clear. I will get to the 3D printing aspect towards the end of this post.

Deep Neural Networks for Image Recognition

I created an image recognition software package using Python and a technique based on deep neural networks. In essence, I took 10,000 images of the object of choice. They could be MRI scans of hearts, or pictures of cars, dogs, cats, you name it. I then took 10,000 images that are not the object of choice, i.e. MRI scans of not-hearts (perhaps lungs), not-cars (pictures of roads without cars), not-dogs, not-cats, and so on.
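The labeling step above can be sketched in a few lines of Python. This is a hypothetical illustration, not the author's actual code: the directory layout and file format are assumptions, and each image path is simply paired with a 1 (object of choice) or 0 (not the object).

```python
# Hypothetical sketch: pairing each image with a binary label.
# 1 = object of choice (e.g. a heart), 0 = not the object (e.g. a lung).
# Directory names and the .png extension are illustrative assumptions.
from pathlib import Path

def build_dataset(positive_dir, negative_dir):
    """Return a list of (image_path, label) pairs for training."""
    dataset = []
    for path in Path(positive_dir).glob("*.png"):
        dataset.append((path, 1))   # images of the object of choice
    for path in Path(negative_dir).glob("*.png"):
        dataset.append((path, 0))   # images of anything else
    return dataset
```

With 10,000 images in each directory, this would yield a balanced set of 20,000 labeled examples ready for training.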


MRI scan of the entire chest cavity – note: we want to keep the heart and discard the rest.


To keep things somewhat simple, I then ‘train’ the computer so that when an image of a heart is presented it registers a 1, and a not-heart registers a 0. How does the computer know whether the image is a heart or a not-heart, you may wonder? This is the power of a deep neural network. I used Python to create software that takes an input image, scans it pixel by pixel, and evaluates the image as a whole to determine whether it is a heart or a not-heart. The computer loops through this process many times, essentially training itself on what a heart looks like. This mirrors the way our brains learn to recognize images. We see thousands of dogs in our lifetime, so we can determine that images of a Golden Retriever and an Irish Wolfhound are both dogs. With years of exposure to what dogs look like, we become better at predicting that an animal is a dog and not, say, a wolf. My software applies the same method of iteration and repetitive training. My code is a few thousand lines long; however, the deep neural network portion, where all the training happens, is only a handful of lines. Ultimately, my code achieves 99.3% accuracy at distinguishing correct images from incorrect ones. You can see a screenshot below.
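The author's network was built with TensorFlow, but the core training idea can be sketched in plain Python with a single artificial neuron: repeatedly show it labeled examples, nudge its weights whenever its output (a number between 0 and 1) misses the 1-or-0 label, and repeat until predictions are confident. The toy feature vectors below stand in for pixel data and are invented purely for illustration.

```python
# Minimal sketch of the iterate-and-correct training loop, assuming toy
# 2-number "images". A single logistic neuron learns to output values
# near 1 for "heart" examples and near 0 for "not-heart" examples.
import math

def train(examples, epochs=1000, lr=0.5):
    """examples: list of (features, label); label 1 = heart, 0 = not-heart."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):                    # loop over the data many times
        for features, label in examples:
            z = bias + sum(w * x for w, x in zip(weights, features))
            pred = 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)
            error = pred - label               # how far the guess missed
            # Nudge each weight to reduce the error next time.
            weights = [w - lr * error * x for w, x in zip(weights, features)]
            bias -= lr * error
    return weights, bias

def predict(weights, bias, features):
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: "hearts" cluster near (1, 1), "not-hearts" near (0, 0).
data = [([1.0, 0.9], 1), ([0.9, 1.1], 1), ([0.1, 0.0], 0), ([0.0, 0.2], 0)]
w, b = train(data)
```

A real deep network stacks many such neurons in layers and works on thousands of pixels per image, but the repetitive train-and-correct loop is the same.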


Deep Neural Network – developed in Python 3.5, using TensorFlow packages and dropout techniques. The network was looped through the training data 10 times and yielded 99.3% accuracy on detecting images of interest.

Artificial Intelligence in 3D Printing

The picture above and the preceding paragraphs explain how artificial intelligence as we know it today works. It’s a buzzword in the media, but trust me, it works and it is extremely powerful! To bring it back to the world of 3D printing, we can take thousands of MRI scans with the goal of identifying hearts. Once the hearts have been identified, we can cut away and segment the remainder of each scan so we are left with only the hearts. We then convert them to printable files and, BOOM, we hit print and the surgeons have patient-specific 3D printed models in hand, ready for practice surgeries. Without the advancements in artificial intelligence, the acceleration of patient-specific medicine would stall. Now, however, surgeons and radiologists no longer need to sit at a computer preparing files to print. The process is largely automated, taking 3D printing one step closer to fixing incurable illnesses.
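The scan-to-print workflow described above can be sketched as a simple pipeline. The function names below (`classify_scan`, `segment_organ`, `export_stl`) are hypothetical placeholders, not a real library API: they stand for the classifier from the previous section, the segmentation step, and the conversion to a printable file.

```python
# Hypothetical end-to-end pipeline sketch for the workflow described above:
# classify each scan, keep the hearts, segment them, and export printable
# files. The three callables are placeholder stand-ins, not a real API.

def scan_to_printable_models(scans, classify_scan, segment_organ, export_stl):
    """Run every MRI scan through classify -> segment -> export."""
    printable = []
    for scan in scans:
        if classify_scan(scan) == 1:            # 1 = heart, 0 = not-heart
            mesh = segment_organ(scan)          # discard everything but the heart
            printable.append(export_stl(mesh))  # e.g. an STL file for the printer
    return printable
```

Every scan the classifier rejects is dropped automatically, which is exactly the part that used to demand hours of manual work per patient.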

While at Big Data Toronto 2017, I attended multiple talks on patient-specific data from some of the world’s leading tech companies. IBM, Uber, SAS, and many others are leveraging artificial intelligence with deep neural networks, but SAS stood out by focusing on patient-specific health care. Their software engineers are using the methods described in this blog to help streamline the workflow from MRI scan to printed heart, accelerating medicine and yielding higher surgery success rates. I couldn’t be more excited about how AI is pushing the limits of today’s 3D printing. I learned a ton at Big Data Toronto 2017 and cannot wait to apply this new-found knowledge with Javelin’s Stratasys 3D printers. Stay tuned, because 2017–2018 is going to be a fast-paced year in which 3D printing accelerates like never before.


Rod Mackay

Rod has been using 3D CAD software for over 25 years and has trained thousands of designers to use their CAD systems more effectively. Rod is the Javelin Webmaster and is based in Ottawa, ON, Canada.