Clinical Abstracts

Clinical abstracts are meant to be short descriptions of a real open clinical problem, a prototype, or a deployment and should have a clinician as the first author.  We also welcome demonstrations of visualizations and other user-facing tools that are outside the scope of machine learning in healthcare but related to its use. We will reject clinical abstracts that are simply a machine learning/statistical analysis of health data.  (There are other venues for that.)

Full Papers

This document is intended to provide guidance -- for both authors and reviewers -- about what MLHC is looking for in papers.  (Recall that our mission, with regard to publications, is to be a top-quality venue for work at the intersection of machine learning and healthcare, and to provide a space for technical contributions that impact real clinical problems.)

Blind your paper.

This sounds basic, but we’re seeing a large number of submissions with author names -- either in the author block or in the headers.  Going forward, this will be grounds for automatic rejection. Don’t include your info! (Note: we understand that sometimes the cohort description makes it relatively easy to guess the authors.  We’re not talking about that here; we’re talking about actually including your names as authors.)

Make the Clinical Relevance clear.   

A large number of papers failed to meet our bar because the clinical reviewer could not identify the clinical relevance of the work.  Sometimes this was because the clinical relevance was quite limited: the authors were trying to solve a problem that wouldn’t help improve healthcare, even down the road.  Much more often, it seemed to us, the authors failed to describe, in terms that a clinician could follow, how the subproblem they were solving would help solve an important clinical problem. Applying a machine learning method to a clinical data set is not sufficient to establish clinical relevance.  All research involves solving subproblems; that’s how we move science forward. However, it’s essential that the subproblem be convincingly connected to the broader clinical context.

Example: Missing data is not a clinical problem.  Making a certain kind of prediction or decision that might impact care, in the presence of missing data, might be a clinical problem!

Relatedly, the paper must include sufficient detail for the clinician to feel confident about the quality of the evaluation.  That includes details about how the cohort (including cases and controls) was chosen, pre-processing details such as how missing data and censoring were handled, and the choice of evaluation measures.  (Recall that our clinical reviewers come from all areas of medicine and may not be experts in the specialty of your work.)

You should have a subsection in the introduction labeled “Clinical Relevance.”

Make the Technical Significance clear.

Level of Technical Innovation.  Many submissions applied state-of-the-art or otherwise existing techniques to a clinical problem.  Please note that if you do this, the bar at MLHC for pure application papers is high: we expect to see some kind of prototype/qualitative pilot study, deployment, or real scientific discovery.  Being able to make slightly better predictions on an existing data set is not sufficient. Nor is it sufficient to be the first to apply a fancy existing ML method to your clinical problem without a clear connection between the better predictions and a significant clinical difference.  As an example of an application paper we loved, take a look at: http://proceedings.mlr.press/v56/Quinn16.pdf

We note that there are many excellent venues for applying ML to clinical problems (AMIA, clinical journals), and those may be a better home for such work.  Most MLHC papers contain some technical innovation: it may be modest, but something methodologically novel was required to solve the problem.

Quality of Technical Description.  Among papers that did have some technical innovation, the most common failure mode was insufficient description.  Recall that MLHC has computational as well as clinical reviewers; while it is essential that the main ideas of both the problem and the method, as well as the analysis and discussion, are presented in a way that clinicians can understand, it is also essential that there is sufficient technical detail for a computational reviewer to feel confident about the quality of the work.  Thus, include your technical details, perhaps in appendices: how high-dimensional were your feature sets?  What kinds of optimization/sampling methods were used? How were hyperparameters tuned?

It’s fine to reference other papers for details if certain parts are standard, but make it absolutely clear what you did!
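
For instance, an appendix snippet along the following lines answers those questions directly.  This is a minimal sketch assuming a scikit-learn workflow; the synthetic data, model, search grid, and metric are illustrative stand-ins, not a prescribed recipe.

    # Illustrative sketch: document the preprocessing, the hyperparameter search
    # space, the cross-validation scheme, and the selection metric.
    from sklearn.datasets import make_classification
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)  # stand-in for your cohort

    pipeline = Pipeline([
        ("impute", SimpleImputer(strategy="median")),   # how missing values were handled
        ("scale", StandardScaler()),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    param_grid = {"clf__C": [0.01, 0.1, 1.0, 10.0]}      # hyperparameter search space
    search = GridSearchCV(pipeline, param_grid, cv=5, scoring="roc_auc")
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))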

Evaluation.  Finally, there was a group of papers that seemed to be missing baselines and evaluation.  Certain areas of machine learning are fairly crowded: there has been a lot of work on making predictions with (and of) missing data and on using CNNs to interpret images.  In such cases, it is essential that the authors be aware of this earlier work, compare to it, and convince us that they are not reinventing the wheel.  A good MLHC paper will include (a) ablation studies to understand why the innovation helps, if there are multiple aspects to the innovation, (b) comparisons to strong baselines or citations of previously published numbers on the same data sets to convince us of the quantitative value-add, and (c) some combination of detailed post-hoc analyses, qualitative studies, and pilots that provide insight into how and when the innovation helps solve a real problem.  Specifically, we all know that health data is extremely messy.  Your evaluation should convince us that you are not just fitting to the noise.
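
As a concrete illustration of (b) and the concern about fitting to noise, the sketch below compares a proposed model to a simple baseline using patient-level cross-validation, so that no patient contributes to both training and test folds.  It assumes a scikit-learn setup; the synthetic data, grouping variable, models, and metric are hypothetical placeholders.

    # Illustrative sketch: baseline comparison with patient-level splits to
    # avoid leakage across encounters from the same patient.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GroupKFold, cross_val_score

    X, y = make_classification(n_samples=600, n_features=30, random_state=0)  # stand-in data
    patient_ids = np.repeat(np.arange(200), 3)  # e.g., 3 encounters per patient

    cv = GroupKFold(n_splits=5)
    for name, model in [
        ("baseline: logistic regression", LogisticRegression(max_iter=1000)),
        ("proposed: gradient boosting", GradientBoostingClassifier(random_state=0)),
    ]:
        scores = cross_val_score(model, X, y, groups=patient_ids, cv=cv, scoring="roc_auc")
        print(f"{name}: AUROC {scores.mean():.3f} +/- {scores.std():.3f}")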

You should have a subsection in the introduction labeled “Technical Significance.”

By Publication: Be fully in compliance with IRBs, DUAs, etc.

This sounds basic, but your paper should comply with the IRB approvals, data use agreements (DUAs), and any other requirements relevant to your data and your institution.