3 Important Uses of AI in the Medical Industry
The use of AI in the MedTech industry is growing rapidly. You only need to do a quick Google search to see hundreds of interesting examples first hand.
However, talking about other people's projects would be rather lazy and would make for a pretty uncompelling blog. Instead, I want to focus this blog on things we've actually done at Fuzzy Labs that are directly related to real projects with real commercial applications.
Then, I want to show you how these projects can be applied to the world of MedTech and why they are interesting from a medical diagnosis perspective.
Let’s dive in.
1. Wound tissue classification (using medical imaging)
Medical imaging is a broad field, and anybody who works in the MedTech industry will instantly recognise it. It includes X-rays, MRIs and CAT scans, but also photographs.
And Fuzzy Labs is particularly interested in the photos. Using AI together with medical imaging, you could take a photograph of a wound and know what kinds of tissues are present in that specific wound. This sort of information can then tell you how the wound is healing or how well it’s responding to the product treating it.
We actually built a proof of concept model on this idea. The plan was to build a mobile app for the use of clinicians, where they take a photo of a wound and see different tissue regions highlighted. Then the app would record the results to a database.
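The post doesn't detail how the proof of concept worked internally, but the core idea of labelling tissue regions in a photo can be sketched as pixel-wise classification. This is a deliberately naive, colour-rule stand-in for illustration only: the tissue names, colour thresholds, and functions are illustrative assumptions, not the actual model, which would be a trained segmentation network.

```python
import numpy as np

# Toy pixel-wise tissue "segmentation": label each pixel with simple colour
# rules. The real proof of concept would use a trained neural network; the
# tissue names and thresholds here are illustrative assumptions, not
# clinical rules.
TISSUE_LABELS = {0: "granulation", 1: "slough", 2: "necrotic"}

def classify_pixels(image: np.ndarray) -> np.ndarray:
    """image: (H, W, 3) uint8 RGB -> (H, W) integer tissue labels."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    labels = np.full(image.shape[:2], 2)           # default: dark -> necrotic
    labels[(r > 120) & (g > 100) & (b < 100)] = 1  # yellowish -> slough
    labels[(r > 120) & (g < 100)] = 0              # reddish -> granulation
    return labels

def tissue_proportions(labels: np.ndarray) -> dict:
    """Summarise a labelled image as the fraction of each tissue type,
    the kind of result the app could record to its database."""
    total = labels.size
    return {name: float((labels == k).sum()) / total
            for k, name in TISSUE_LABELS.items()}
```

A summary like `tissue_proportions` is the kind of per-photo record that, stored over time, lets you track how a wound is healing.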
When you look at the practical use of a tool like this, you can see it has multiple applications to the MedTech industry. For example, a company with a wound treatment product might want to know how their product stacks up against the competition and could use the data collected to make that comparison.
On the other hand, it can be applied from a diagnostic perspective. In this case, you would be more interested in the context around that wound.
AI is becoming so useful in the medical space because, although AI can't outdo the medical specialists, it does provide us with scalability far beyond what humans are capable of.
Take the wound tissue classification as an example. If we can collect data across a large number of patients over time, the clinicians’ job becomes a lot easier and their decision-making becomes more informed.
It can also help mitigate human error. If the clinicians miss something, the AI model can flag it up as something worth considering. So AI isn’t going to replace the clinicians’ work. Instead, we’re providing tools that could help automate and scale up what they already do.
Other uses for medical imaging
There are many other applications of medical imaging that are worth talking about. One good example is the model Google created last year to distinguish between bacterial and viral pneumonia in X-ray images. This was a way to quickly figure out if you're looking at pneumonia caused by a virus like COVID-19.
In this situation, the scalability that AI provides us with is invaluable.
And it doesn't stop there. Medical imaging models, like the one we built, have all sorts of other uses. They can also be used to detect things like cancer cells and other potentially harmful conditions.
2. Collecting and interpreting data
Another project that we undertook for a medical client of ours was based on collecting and reading data. They needed to collect data on the use of medical products at their clinical trial sites in order to demonstrate that their products provide a definite, tangible improvement in patient outcomes.
Then, they would use the data as a sales tool in their pitches. It is worth mentioning here that this wasn't a clinical trial, but a commercially-facing project.
The problem was that their data was being collected manually, which made it difficult to bring it all together to create that long-term view and then report on it.
This is where computer vision, specifically OCR (optical character recognition), really helped. Not only did we have all the data written down by the clinicians, but we also had all the data from the photos they were taking with our app.
Now we could combine all of that together to create a detailed picture of the individual patients and their treatment results.
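The client's actual schema isn't described in the post, but the "combine it all together" step can be sketched as merging records keyed on a patient ID: one set parsed from OCR-extracted text, one set logged by the app. The field names, text format, and both functions are illustrative assumptions.

```python
import re
from collections import defaultdict

def parse_ocr_notes(text: str) -> dict:
    """Pull 'patient <id>: <field> = <value>' lines out of raw OCR text.
    The line format is a made-up stand-in for the clinicians' notes."""
    records = defaultdict(dict)
    for line in text.splitlines():
        m = re.match(r"patient\s+(\w+)\s*:\s*(\w+)\s*=\s*(.+)", line.strip())
        if m:
            patient_id, field, value = m.groups()
            records[patient_id][field] = value.strip()
    return dict(records)

def merge_records(ocr_records: dict, app_records: dict) -> dict:
    """Build one combined view per patient from both data sources."""
    combined = defaultdict(dict)
    for source in (ocr_records, app_records):
        for patient_id, fields in source.items():
            combined[patient_id].update(fields)
    return dict(combined)
```

Once the records are merged, producing the long-term reports becomes a query over one dataset rather than a manual spreadsheet exercise.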
AI is perfect for this job because, without it, the process is very labour-intensive. You need people inputting data into spreadsheets, interpreting those spreadsheets, creating reports and then interpreting those for a sales purpose.
We wanted to build a system that would automate the whole process, which could save time and money and ultimately limit the risk of human error.
So that’s what we did.
3. Edge-based AI and localising data
In the MedTech industry, there is always a need to gather more data related to medical conditions.
This is because, with new data, there are new opportunities to be had.
For example, if I have a wound dressing that can also collect temperature readings, that provides me with new information about how the wound is progressing.
I can then build some tools around that data which I can give to the clinicians to improve their treatment of the wound. This also means that the manufacturer can take a product to market that differentiates them from their competitors.
The hunt for new ways to gather data is on, and one unique opportunity emerging is in sensor technology.
Over the last few years, it’s become very inexpensive and relatively easy to integrate various sensor technologies directly into medical devices.
These devices then process that data and make inferences that can lead to actionable decisions.
Nowadays, this can be done quickly, cheaply and without compromising the patient data.
When it comes to patient data, you wouldn't want to stream it to some server out there in the cloud. Not only is that expensive and dependent on an internet connection, but you also don't want to put sensitive patient data at risk.
We have become increasingly involved in something called Edge-based AI. This means building AI models but deploying them in a way that’s local to the data.
A good example of this is a project we worked on to embed pressure sensors in a fabric. If this were placed in a wheelchair, it would allow you to look at how someone is sitting and estimate their posture using an AI model.
Then, you could turn that into an actionable decision by having the model flag when somebody's posture looks maladaptive and alert a clinician.
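The post doesn't describe the actual model, but the "inference local to the data" idea can be shown with a toy on-device check: compute a simple left-right asymmetry score over the sensor grid and only raise an alert when it crosses a threshold. The grid shape, score, and threshold are illustrative assumptions, not the real project's logic.

```python
import numpy as np

def asymmetry_score(pressure: np.ndarray) -> float:
    """Compare total load on the left vs right half of a pressure grid.
    Returns 0.0 for a perfectly balanced posture, approaching 1.0 as all
    the load shifts to one side."""
    mid = pressure.shape[1] // 2
    left = float(pressure[:, :mid].sum())
    right = float(pressure[:, -mid:].sum())
    total = left + right
    return abs(left - right) / total if total else 0.0

def should_alert(pressure: np.ndarray, threshold: float = 0.3) -> bool:
    """Only the alert decision leaves the device; the raw sensor readings
    never need to be streamed to the cloud."""
    return asymmetry_score(pressure) > threshold
```

Because the inference runs on the device itself, the only thing that ever needs to travel over a network is the alert, which is the privacy and cost argument for Edge-based AI in a nutshell.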
Everyone is trying to do it
The exciting thing is that with Edge-based AI, you could do this with a piece of inexpensive hardware that lives directly on the wheelchair! The reason for creating products like this is that they are novel and have many applications across the MedTech space.
More and more now, there’s an awareness in the industry that all of these things are possible, but people aren’t quite equipped to take advantage of them yet.
However, that doesn’t mean that people aren’t trying to do it. Right now, there is a race to make the best use of this kind of technology, with big companies making huge investments in Edge-based AI capabilities.
We recently formed a partnership with Nvidia, and it shows we are serious about making bets in this field. I personally think it’s going to be massive!
So, watch this space. Very soon, you’re going to start seeing a lot more AI moving into the medical industry.
The examples in this blog are only three of hundreds of ways it can be and already is being used!
I hope you found this useful.
P.S. For more great content around AI and how it’s impacting the world today, please feel free to email any questions to me at firstname.lastname@example.org and follow us on LinkedIn for updates on all our latest projects.