Usability and Clinical Effectiveness of an Interpretable Deep Learning Framework for Post-Hepatectomy Liver Failure Prediction

Sponsor
Maastricht University (Other)
Overall Status
Not yet recruiting
CT.gov ID
NCT06031818
Collaborator
First Affiliated Hospital, Sun Yat-Sen University (Other)

Study Details

Study Description

Brief Summary

The goal of this in-silico clinical trial is to learn about the usability and clinical effectiveness of an interpretable deep learning framework (VAE-MLP) using counterfactual explanations and layerwise relevance propagation for prediction of post-hepatectomy liver failure (PHLF) in patients with hepatocellular carcinoma (HCC). The main questions it aims to answer are:

  • To investigate the usability of the VAE-MLP framework for explanation of the deep learning model.

  • To investigate the clinical effectiveness of the VAE-MLP framework for prediction of post-hepatectomy liver failure in patients with hepatocellular carcinoma.

In the usability trial, clinicians and radiologists will be shown the counterfactual explanations and layerwise relevance propagation (LRP) plots to evaluate the usability of the framework.

In the clinical trial, clinicians and radiologists will make predictions under two conditions: with and without the model explanation, separated by a washout period of at least 14 days, to evaluate the clinical effectiveness of the explanation framework.

Condition or Disease
  • Hepatocellular Carcinoma
  • Post-Hepatectomy Liver Failure

Intervention/Treatment
  • Other: The explanation of the deep learning framework (VAE-MLP), including counterfactual explanations and layerwise relevance propagation
  • Other: The model prediction
  • Other: The model prediction and the explanation of the deep learning framework (VAE-MLP), including counterfactual explanations and layerwise relevance propagation

Detailed Description

Post-hepatectomy liver failure (PHLF) is a severe complication after liver resection. It is important to develop an interpretable model for predicting PHLF in order to facilitate effective collaboration with clinicians for decision-making. Two-dimensional shear wave elastography (2D-SWE) is a liver stiffness measurement (LSM) technology that has proven useful for liver fibrosis staging, and therefore has potential value for liver function assessment and PHLF prediction. 2D-SWE images display a color-coded stiffness map of the liver parenchyma, with red representing stiffer tissue and blue representing softer tissue. Routine analysis of 2D-SWE fails to fully exploit the information available in the images and also suffers from inter-observer variance in choosing the optimal quantification region.

Deep learning (DL) has demonstrated state-of-the-art performance on many medical imaging tasks such as classification and segmentation. However, despite significant progress in DL, the clinical translation of DL tools has so far been limited, partly due to a lack of interpretability of models, the so-called "black box" problem. Interpretability of DL systems is important for fostering clinical trust as well as for promptly correcting any faulty processes in the algorithms.

Here, we present a novel interpretable DL framework (VAE-MLP) that incorporates counterfactual analysis to explain 2D medical images and LRP to attribute feature relevance for both medical images and clinical variables.
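The study description does not include implementation details. Purely as a hedged sketch of the general idea, the Python (PyTorch) code below wires a small VAE encoder/decoder to an MLP that takes the image latent code plus clinical variables, and generates a counterfactual by optimizing the latent code toward the opposite PHLF prediction and decoding it. All names, layer sizes, and hyperparameters (SweVAE, PhlfMlp, counterfactual_image, a 3x64x64 input, a 32-dimensional latent space) are assumptions for illustration only, and the LRP attribution step is omitted.

    # Minimal sketch (PyTorch): a VAE encodes a 2D-SWE image into a latent vector,
    # an MLP predicts PHLF from the latent code plus clinical variables, and a
    # counterfactual is produced by moving the latent code toward the opposite
    # prediction and decoding it back to image space. Illustrative only.
    import torch
    import torch.nn as nn

    class SweVAE(nn.Module):
        def __init__(self, latent_dim: int = 32):
            super().__init__()
            # Expects 3-channel 64x64 inputs (an assumption for this sketch).
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                nn.Flatten(),
            )
            self.fc_mu = nn.Linear(32 * 16 * 16, latent_dim)
            self.fc_logvar = nn.Linear(32 * 16 * 16, latent_dim)
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
                nn.Unflatten(1, (32, 16, 16)),
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def encode(self, x):
            h = self.encoder(x)
            mu, logvar = self.fc_mu(h), self.fc_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            return z, mu, logvar

    class PhlfMlp(nn.Module):
        """Predicts PHLF probability from the image latent code plus clinical variables."""
        def __init__(self, latent_dim: int = 32, n_clinical: int = 8):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim + n_clinical, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, z, clinical):
            return torch.sigmoid(self.net(torch.cat([z, clinical], dim=1)))

    def counterfactual_image(vae, mlp, image, clinical, target=0.0, steps=100, lr=0.05):
        """Optimize the latent code until the MLP output approaches `target`,
        then decode it: the decoded image serves as the counterfactual."""
        z, _, _ = vae.encode(image)
        z = z.detach().requires_grad_(True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            loss = (mlp(z, clinical) - target).pow(2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        return vae.decoder(z.detach())

Optimizing in latent space rather than pixel space keeps the counterfactual close to the learned image manifold, which is the usual motivation for VAE-based counterfactual explanations.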

The goal of this in-silico clinical trial is to learn about the usability and clinical effectiveness of an interpretable deep learning framework (VAE-MLP) using counterfactual explanations and layerwise relevance propagation for prediction of post-hepatectomy liver failure (PHLF) in patients with hepatocellular carcinoma. The main questions it aims to answer are:

  • To investigate the usability of the interpretable deep learning framework (VAE-MLP) for explanation of the deep learning model.

  • To investigate the clinical effectiveness of the interpretable deep learning framework (VAE-MLP) for prediction of post-hepatectomy liver failure in patients with hepatocellular carcinoma.

In the usability trial, clinicians and radiologists will be shown the counterfactual explanations and layerwise relevance propagation plots for 6 examples. Scores on the Likert scale of a purpose-designed questionnaire will be used to evaluate the usability of the framework.
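As a small illustration of how such questionnaire data could be summarized, the snippet below averages per-item Likert scores across raters. The 5-point scale, item names, and scores are placeholders; the record does not describe the questionnaire content.

    # Illustrative only: aggregate Likert responses from the usability questionnaire.
    from statistics import mean

    # responses[rater][item] -> score on an assumed 1-5 Likert scale
    responses = {
        "rater_1": {"explanation_is_clear": 4, "would_use_in_practice": 5},
        "rater_2": {"explanation_is_clear": 3, "would_use_in_practice": 4},
    }

    items = sorted({item for scores in responses.values() for item in scores})
    for item in items:
        scores = [by_item[item] for by_item in responses.values()]
        print(f"{item}: mean={mean(scores):.2f} (n={len(scores)})")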

In the clinical trial, clinicians and radiologists will make predictions under two conditions: with and without the model explanation, separated by a washout period of at least 14 days. Accuracy, sensitivity, and specificity will be used to compare the clinical effectiveness of the explanation framework.
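A minimal sketch of that comparison, assuming binary PHLF labels and binary reader calls, is shown below; the helper function and the example arrays are illustrative and contain no study data.

    # Compute accuracy, sensitivity and specificity of reader predictions
    # under each condition (with vs. without model explanation). Illustrative only.
    def metrics(y_true, y_pred):
        tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
        tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
        fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
        fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
        return {
            "accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
            "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        }

    phlf_labels = [1, 0, 0, 1, 0, 1]               # ground-truth PHLF outcomes (placeholder)
    pred_without_explanation = [1, 0, 1, 0, 0, 1]  # reader calls given model output only
    pred_with_explanation = [1, 0, 0, 1, 0, 1]     # reader calls given model output + explanation

    print("without explanation:", metrics(phlf_labels, pred_without_explanation))
    print("with explanation:   ", metrics(phlf_labels, pred_with_explanation))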

Study Design

Study Type:
Observational
Anticipated Enrollment:
80 participants
Observational Model:
Cohort
Time Perspective:
Retrospective
Official Title:
Usability and Clinical Effectiveness of an Interpretable Deep Learning Framework (VAE-MLP) Using Counterfactual Explanations and Layerwise Relevance Propagation for Post-Hepatectomy Liver Failure Prediction
Anticipated Study Start Date:
Sep 1, 2023
Anticipated Primary Completion Date:
Sep 30, 2023
Anticipated Study Completion Date:
Oct 30, 2023

Arms and Interventions

Arm Intervention/Treatment
Patients with HCC

Patients who underwent curative liver resection for HCC in the First Affiliated Hospital of Sun Yat-Sen University in China.

Other: The explanation of the deep learning framework (VAE-MLP), including counterfactual explanations and layerwise relevance propagation
The radiologists and clinicians will be provided with the model prediction results and the model explanation, and will fill in a questionnaire to evaluate the usability of the interpretable framework.

Other: The model prediction
The radiologists and clinicians will be provided with the model prediction results without the model explanation and will be asked to give their own predictions.

Other: The model prediction and the explanation of the deep learning framework (VAE-MLP), including counterfactual explanations and layerwise relevance propagation
The radiologists and clinicians will be provided with the model prediction results together with the model explanation and will be asked to give their own predictions.

Outcome Measures

Primary Outcome Measures

  1. Clinical effectiveness of the explanation framework [From enrollment to the end of trial at 8 weeks]

    Accuracy, sensitivity, and specificity will be compared between predictions made with and without the explanation of the DL model to determine the clinical effectiveness of the explanation framework.

Secondary Outcome Measures

  1. Usability of the explanation framework [From enrollment to the end of trial at 8 weeks]

    Scores on the Likert scale of a purpose-designed questionnaire will be used to evaluate the usability of the framework.

Eligibility Criteria

Criteria

Ages Eligible for Study:
N/A and Older
Sexes Eligible for Study:
All
Accepts Healthy Volunteers:
No
Inclusion Criteria:
  1. Patients with treatment-naive and resectable HCC;
  2. Eastern Cooperative Oncology Group performance status (PS) score of 0-1.
Exclusion Criteria:
  1. Liver resection was not performed;
  2. Pathological diagnosis of non-HCC;
  3. Failure of liver stiffness measurement, defined as an elastography color map less than 75% filled or an interquartile range (IQR)/median > 30% (an illustrative check of this rule is sketched below);
  4. Immune-active chronic hepatitis, indicated by elevation of alanine aminotransferase (ALT) levels ≥ 2× the upper limit of normal (ULN);
  5. Obstructive jaundice or dilated intrahepatic bile ducts with a diameter of > 3 mm;
  6. Hypoalbuminemia, hyperbilirubinemia, or coagulopathy not related to the liver.
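As a rough illustration of the quantitative rule in exclusion criterion 3, the snippet below checks whether a liver stiffness measurement would be considered valid (color map at least 75% filled and IQR/median ≤ 30%). The function name, the NumPy representation of the stiffness map, and the example values are assumptions.

    # Hypothetical quality check mirroring exclusion criterion 3. Illustrative only.
    import numpy as np

    def lsm_measurement_valid(stiffness_map: np.ndarray, fill_fraction: float) -> bool:
        values = stiffness_map[np.isfinite(stiffness_map)]
        if fill_fraction < 0.75 or values.size == 0:
            return False
        q1, median, q3 = np.percentile(values, [25, 50, 75])
        return (q3 - q1) / median <= 0.30

    # Example: a well-filled map with tightly clustered stiffness values passes.
    example_map = np.random.default_rng(0).normal(8.0, 0.5, size=(64, 64))
    print(lsm_measurement_valid(example_map, fill_fraction=0.95))  # True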

Contacts and Locations

Locations

Site 1: The First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, Guangdong, China, 510000

Sponsors and Collaborators

  • Maastricht University
  • First Affiliated Hospital, Sun Yat-Sen University

Investigators

  • Principal Investigator: Philippe Lambin, Maastricht University

Study Documents (Full-Text)

None provided.

More Information

Publications

None provided.
Responsible Party:
Maastricht University
ClinicalTrials.gov Identifier:
NCT06031818
Other Study ID Numbers:
  • Interpretable DL
  • 92059201
First Posted:
Sep 11, 2023
Last Update Posted:
Sep 11, 2023
Last Verified:
Aug 1, 2023
Individual Participant Data (IPD) Sharing Statement:
Undecided
Plan to Share IPD:
Undecided
Studies a U.S. FDA-regulated Drug Product:
No
Studies a U.S. FDA-regulated Device Product:
No

Study Results

No Results Posted as of Sep 11, 2023