Medical AI – conducting healthcare computer vision projects when the data doesn’t exist

Bartosz Silski
Image Processing Engineer
03. May 2022

Computer vision has been at the forefront of interest in healthcare AI for many years. Accurate segmentation of organs, tissues, and abnormalities, together with subsequent visualization, enhances the diagnostic value of the data and supports many stages of the healthcare process, such as surgery planning, patient education, and collaboration.

Object detection can support navigation during surgery or act as an early-warning system, for example when tissue damage is detected or when certain areas of the surgical space are infringed.

To create such accurate and successful AI solutions, good-quality data and annotations are fundamental. However, current data protection regulations make data acquisition troublesome. Even in areas where data can be conveniently anonymized and all patient information erased, lengthy data-access approval procedures can put projects on hold.

Simulations and synthetic data as a solution

In situations where data is scarce or difficult to obtain, theBlue.ai uses simulations and synthetic data as a solution. The possibilities offered by artificial data creation and its usage are practically unlimited.

In a healthcare context, synthetic data is data that does not involve real patients. Patients, their appearance, and their imaging data are not recorded, stored, or used in any way for the purpose of AI model development. Instead, realistic artificial data can be produced within digital simulations.

TheBlue.ai has created and used synthetic data in multiple healthcare-related projects. One of the simplest scenarios concerned creating a dataset for the detection of surgical markings. Such markings are important for delineating the surgical area and performing safe surgeries. Real-life dataset collection is time-consuming and does not always provide enough variance to prevent models from overfitting.

TheBlue.ai addressed this concern by developing a fully adjustable skin texture that can be parameterized with respect to background, color, shadows, etc. Markings were also randomized with different brush parameters. Additional noise, shadow, or blur was added to the data to reflect camera-capture imperfections.
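As an illustration only (the actual theBlue.ai pipeline is not public), a minimal sketch of this idea, a randomized background texture, a drawn marking, and additive capture noise, might look like:

```python
import random

def synthetic_marking_sample(size=32, seed=None):
    """Generate a toy grayscale 'skin' patch with a cross marking.

    All parameters here are hypothetical: base skin tone, marking
    thickness ('brush' width), and per-pixel sensor noise are
    randomized, mimicking the kind of randomization described above.
    """
    rng = random.Random(seed)
    base = rng.uniform(0.6, 0.9)           # randomized skin tone
    thickness = rng.choice([1, 2, 3])      # randomized brush width
    noise = rng.uniform(0.0, 0.05)         # camera-noise magnitude

    # textured background: base tone with small per-pixel variation
    img = [[base + rng.uniform(-0.05, 0.05) for _ in range(size)]
           for _ in range(size)]

    c = size // 2
    for i in range(size):                  # draw a dark cross marking
        for t in range(-thickness, thickness + 1):
            img[c + t][i] = 0.1            # horizontal stroke
            img[i][c + t] = 0.1            # vertical stroke

    # additive sensor noise, clipped to [0, 1]
    return [[min(1.0, max(0.0, p + rng.uniform(-noise, noise)))
             for p in row] for row in img]

sample = synthetic_marking_sample(seed=42)
```

A real pipeline would render at camera resolution and add shadows and blur as well, but the principle is the same: every factor the model should become invariant to is a sampled parameter.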

Our experiments, both in-house and in the operating room, demonstrated very good model performance. But the potential use cases of such a model go beyond simple detection of markings. The solution can also be applied as a markings-validation tool for practicing doctors and nurses, or to unify marking procedures across hospitals.

Pic. 1: Example of synthetic skin with cross marking

Pic. 2: In house experiment of surgery markings detection

Very often, project requirements go beyond such simple uses. The other end of the complexity spectrum involves scenarios simulating human movements or human interactions. Such simulations can be used as input data for behavior monitoring or the detection of specific events. With a controlled synthetic environment, more data representations than just color frames can be created: depth maps or point clouds can be generated to further improve our understanding of the scene.
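Turning a simulated depth map into a point cloud is, at its core, a pinhole-camera back-projection. A minimal sketch, with illustrative intrinsics (`fx`, `fy`, `cx`, `cy` are invented values, not from the project):

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (row-major list of lists, in metres)
    into 3D camera-space points using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth
    Pixels with zero depth (no surface hit) are skipped.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:
                points.append(((u - cx) * z / fx,
                               (v - cy) * z / fy,
                               z))
    return points

# Toy 2x2 depth map with one empty pixel; intrinsics are illustrative
cloud = depth_to_point_cloud([[0.0, 2.0],
                              [2.0, 2.0]],
                             fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

In a simulator the exact intrinsics and per-pixel depth are known by construction, which is why such representations come essentially for free with synthetic data.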

In a recent project, theBlue.ai focused on the recognition of movements based on rich 3D data and extended this functionality to the detection of adverse events in hospitals, such as suspicious patient behavior, falls, or seizures. Again, real-life data depicting such events is scarce or difficult to obtain. TheBlue.ai created a synthetic environment in which specific movements could be defined. Such an environment enables the introduction of large variance in human poses, the export of different data representations, randomization of the number of people in the scene, their proximity, and much more.

To close the gap between virtual and real-world data at the time of AI algorithm operation, theBlue.ai takes care to randomize not only the simulated humans but the scenery as well. The lighting conditions and the objects in the scene, along with their number, position, texture, and color, are adjusted to force AI models to capture real-world diversity even better.
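This scene-level randomization can be thought of as sampling a fresh configuration for every rendered frame. A hypothetical sketch, in which every parameter name and range is invented for illustration:

```python
import random

def sample_scene_config(rng):
    """Sample one randomized scene configuration per rendered frame.
    All names and ranges below are illustrative, not from the project.
    """
    return {
        "n_people": rng.randint(1, 4),                  # crowd size
        "light_intensity": rng.uniform(0.2, 1.0),       # dim..bright
        "light_color_temp_k": rng.uniform(2700, 6500),  # warm..daylight
        "n_props": rng.randint(0, 10),                  # beds, chairs...
        "floor_texture": rng.choice(["tile", "vinyl", "carpet"]),
        "camera_height_m": rng.uniform(2.0, 3.0),       # mount height
    }

rng = random.Random(0)                 # seeded for reproducibility
configs = [sample_scene_config(rng) for _ in range(100)]
```

The wider these sampled ranges, the harder it is for a model to latch onto any single simulated look, which is exactly what pushes it toward capturing real-world diversity.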

Pic. 3: Scene from human movement simulator
Pic. 4: Point cloud generated from human movement simulator
Pic. 5: Automatic annotation of synthetic data with human body pose

Would you like to learn more about Computer Vision?
We can help you get started with the topic.
Let’s schedule an appointment