
First DeepLearning healthcare application approved because it matched humans

May 28, 2017


AI-based technologies are the next big wave of tech progress. IBM's Watson and Google's DeepMind lead the way in terms of valid, commercial products – products that actually reach the market. These results serve both as a showcase for the companies involved in AI and as proof of the previously unmatched capabilities that Artificial Intelligence enables. DeepLearning healthcare, DeepLearning entertainment, Watson cyber-security – get used to these terms, as we'll be hearing about them more and more.

Marking an important moment in DeepLearning healthcare

Recently, the FDA approved the first ever clinical application of this kind. The cloud-based imaging platform comes from Arterys and serves in diagnosing heart problems. Trained on precedent cases, this self-teaching neural network is able to compare and analyze new ones. It provided results deemed as accurate as those coming from specialists, and therefore passed the approval stage.

As Forbes quotes one of the platform's founders, Fabien Beckers, this "sets a precedent for what can be done."

How does the Arterys platform work?

The intelligent software was fed 1,000 cases as training data and went through supervised training. The result consisted of over 10 million rules, ready to be applied in analyzing new images. The tool is meant to function so reliably that human intervention is no longer needed while it performs its task.
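To make that training step concrete, here is a minimal sketch of supervised training on annotated scans, written in Keras. Everything in it is an assumption for illustration – Arterys has not published its architecture – and the article's "over 10 million rules" would roughly correspond to learned parameters like the ones counted at the end.

```python
# Minimal sketch of supervised training for cardiac MRI segmentation.
# All shapes, layers and data are hypothetical stand-ins; the real
# Cardio DL network is undisclosed.
import numpy as np
from tensorflow.keras import layers, models

IMG_SIZE = 64  # assumed input resolution

def build_model():
    # A tiny encoder-decoder that maps an MR slice to a ventricle mask.
    inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1))
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D()(x)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return models.Model(inputs, outputs)

# Random stand-ins for the ~1,000 annotated cases the article mentions.
images = np.random.rand(1000, IMG_SIZE, IMG_SIZE, 1).astype("float32")
masks = (np.random.rand(1000, IMG_SIZE, IMG_SIZE, 1) > 0.5).astype("float32")

model = build_model()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(images, masks, epochs=1, batch_size=16)  # the supervised stage
print("learned parameters:", model.count_params())
```

Once trained, such a model is applied unchanged to new images, which is what allows it to run without a human in the loop.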

In order to improve its product, Arterys previously partnered with GE Healthcare "to combine its quantification and medical imaging technology with GE Healthcare's magnetic resonance (MR) cardiac solutions". The added benefit in AI medical imaging is that, with their system, Arterys can perform cardiac assessments in a fraction of the time needed for conventional cardiac MR scans.

In order to take on what is considered a tedious task for human physicians, the platform needs the MRI machine to feed it images. So far, this Cardio DL system has successfully replaced human monitoring. The FDA compared its behavior and results against its indications for use – and the platform passed the test.

The results are not completely fault-proof, and that is not the idea, either. The goal in validating AI healthcare products, for now, seems to be for these tools to remain within "an expected error range comparable to that of an experienced clinical annotator".
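As a rough illustration of that acceptance idea, the sketch below compares a model's agreement with a reference contour against the agreement between two human annotators, using the Dice overlap score common in segmentation work. The synthetic masks, the margin and the pass criterion are all hypothetical.

```python
# Sketch of the "expected error range" check: the model passes if its
# disagreement with a reference contour is no worse than the typical
# disagreement between two human annotators. Data here is synthetic.
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

rng = np.random.default_rng(0)
reference = rng.random((64, 64)) > 0.5                         # expert reference mask
second_annotator = reference ^ (rng.random((64, 64)) > 0.95)   # small human variation
model_output = reference ^ (rng.random((64, 64)) > 0.94)       # model prediction

human_agreement = dice(reference, second_annotator)
model_agreement = dice(reference, model_output)
print(f"human-human Dice: {human_agreement:.3f}")
print(f"model-human Dice: {model_agreement:.3f}")
# Hypothetical acceptance criterion: model agreement stays within the
# human inter-annotator range, give or take a small margin.
print("within expected error range:", model_agreement >= human_agreement - 0.02)
```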

What does an efficient AI medical assistant look like?

When it comes to introducing machine learning into healthcare, these next-gen tools have to fulfill a few essential tasks.

First, these tools need to acquire an extended and relevant database. At this task, machines do have the potential to surpass humans: they only need to remain plugged in and connected to the right data feed.

Secondly, it is a matter of processing the gathered information via algorithms. As we've seen above with Cardio DL, human specialists supervise this stage. As trained, the machine organizes the data into reasoning via patterns, internal analysis, detection criteria and more. Here it is difficult to say whether an experienced, talented physician would stand up to the AI competition or not.

Many are skeptical of crediting current AI performance above that of highly-trained professionals. Nevertheless, highly-trained professionals don't come cheap, so the more realistic AI equivalent in data computation is probably an average physician. Apparently, Cardio DL matched the average specialized human in processing the required medical data.

Thirdly, based on the previous operations, an AI medical assistant issues results. When the first two stages are as fail-proof as possible, it is only logical that, in any typical medical case, the intelligent machine should be able to provide accurate results.
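Taken together, the three stages form a pipeline: acquire data from a feed, apply a learned decision rule, and issue a result. The toy Python below shows only that shape – the feed, the ejection-fraction measurement and the threshold are illustrative stand-ins, not anything Arterys has disclosed.

```python
# Toy pipeline for the three stages above: acquire data from a feed,
# apply a learned decision rule, issue a result. Everything here
# (the feed, the measurement, the threshold) is hypothetical.
from dataclasses import dataclass
from typing import Iterator
import random

@dataclass
class Scan:
    patient_id: str
    ejection_fraction: float  # a single derived measurement, for illustration

def data_feed(n: int) -> Iterator[Scan]:
    """Stage 1: the 'plugged-in' feed delivering preset data."""
    for i in range(n):
        yield Scan(f"patient-{i}", random.uniform(25.0, 75.0))

def assess(scan: Scan) -> str:
    """Stage 2: a learned pattern, reduced here to a simple threshold."""
    return "reduced" if scan.ejection_fraction < 40.0 else "normal"

# Stage 3: issue results for a clinician to review.
for scan in data_feed(5):
    print(scan.patient_id, f"EF={scan.ejection_fraction:.1f}%", assess(scan))
```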

What do AI-based medical tools (still) lack?

Unlike humans, machines are not (yet) capable of fetching data for themselves when the need arises. They depend on an information feed that delivers preset data.

Another instance of the same issue is that artificial intelligence, even when capable of self-teaching, cannot actually think. It can mimic thinking, and it even inherits the human brain's biases – unfortunately. Yet it cannot think outside the box, and it cannot solicit advice. It would be interesting to test the ability of evolved AI tools to correctly revise their results and admit faults – if any. Many people find this difficult; how about allegedly faultless machines?

Coming back from overall AI dilemmas to medical AI: for the moment, such tools lack essential guarantees. In order to trust them with human lives, they should first prove cyber-security infallibility and exclude any possibility of tampering with their initial, flawless programming. A lot of pressure in one sentence – but yes: when a human doctor tries his or her best and fails, it is a fact of life. When a machine does the same, something feels different.
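One small piece of such a guarantee can at least be sketched: checking that a deployed model file still matches the approved build before it is allowed to run. The filename and hash below are placeholders; a real deployment would add code signing, access control and audit trails on top.

```python
# Sketch of a tamper check: refuse to load a model file whose bytes
# differ from the approved build. The expected hash and filename are
# placeholders for illustration.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0" * 64  # placeholder: hash recorded at approval time

def verify_model(path: str) -> bool:
    """Return True only if the file exists and matches the approved hash."""
    p = Path(path)
    if not p.exists():
        return False
    return hashlib.sha256(p.read_bytes()).hexdigest() == EXPECTED_SHA256

# 'cardio_model.h5' is a hypothetical filename for illustration.
if not verify_model("cardio_model.h5"):
    print("Model file missing or does not match the approved version; refusing to run.")
```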