Submission Information

MLHC invites submissions to a full, archival Research Track and a non-archival Clinical Abstracts Track. Accepted submissions in both tracks will be presented at the conference, and for both tracks at least one author is required to attend, should the work be accepted.

Submissions to both the Research Track and the Clinical Abstracts Track are handled through OpenReview:
https://openreview.net/group?id=mlforhc.org/MLHC/2025/Conference

Hence, for the Research Track, all submitting authors are required to have active OpenReview profiles by the submission deadline; for the Clinical Abstracts Track, at least one author must have an active OpenReview profile.

MLHC 2026 TIMELINE

  • OpenReview account creation deadline: April 3rd, 2026 (if you do not already have an OpenReview account, please register by this date; otherwise we cannot guarantee that your account will be activated in time)

  • Full submission deadline (Research and Clinical Abstracts Tracks): April 17th, 2026

  • Review period: April 17th – May 15th, 2026

  • Author rebuttal period: June 1st – June 15th, 2026

  • Reviewer–AC discussion period: June 15th – June 26th, 2026

  • Paper decision notifications: July 3rd, 2026

  • Conference dates: August 12th–14th, 2026

  • For decades, experts in computer science and medical informatics have explored machine learning techniques to harness data in ways that could propel clinical medicine forward. The continued advancement of these techniques, including predictive and generative models for biomedical images and text (such as LLMs), together with the growth of digital health technologies (e.g., electronic health records (EHRs), wearable devices, mobile health apps) and the involvement of tech-savvy clinicians, has led to significant, ongoing progress in applying machine learning to healthcare.

    Achieving this vision, however, requires overcoming challenges related to processing complex data such as images, sensor data, and multi-modal patient records. It also requires interpreting these data to deliver actionable insights, including support for decision-making and the quantification of causal effects, as well as examining the resulting impact on clinical workflows and healthcare more broadly. 

    Maximizing the positive impact of machine learning in healthcare requires close collaboration between a diverse set of experts, including not only technical researchers and clinical staff, but also implementation scientists, ethicists, and policy experts. This collaboration is necessary to identify key problems, curate relevant datasets, and validate findings to ensure solutions work effectively in practice. While machine learning has made progress in handling complex data, much more remains to be done, especially in transitioning from predictive and generative models to practical tools that positively impact clinical decision-making and patient welfare.

    The Machine Learning for Healthcare Conference (MLHC) serves as a leading venue dedicated to this dynamic intersection. Since its inception in 2016, MLHC has brought together thousands of researchers in machine learning and clinical fields to share pioneering work (archived in the Proceedings of Machine Learning Research), and foster new partnerships.

    MLHC’s guiding principle is that accepted papers should provide important new generalizable insights about machine learning in the context of healthcare.

  • The Research Track is organized around the following main research themes:

    (i) Novel methods that tackle fundamental problems arising in healthcare data, including predictive modeling, generative AI, foundation models, multimodal data, temporal dynamics, distribution shift across populations, fairness, and causal inference.

    (ii) Implementation & Benchmarks (impact & reproducibility), including validation and real-world evaluations of ML integrated into clinical workflows; new datasets and benchmarks; works considering socio-technical and equity implications; and replication studies that provide generalizable insights to the ML-for-health community.

    MLHC is not tailored to evaluate machine learning for purely biological problems, though submissions with translational impact will still be considered; feel free to contact the program chairs at organizers@mlforhc.org if you are unsure whether your submission qualifies.

    Additional Context for Clinicians: We realize that conferences in medicine tend to be abstract-only, non-archival events. This is not the case for MLHC: to be a premier health and machine learning venue, all research-track papers submitted to MLHC will be rigorously peer-reviewed for scientific quality. To allow this, a suitably complete description of the work is necessary. We call for submissions that comprehensively describe the problem, cohort, features used, methods, results, and so on. Multiple reviewers will provide feedback on the submission. If accepted, you will have the opportunity to revise the paper before submitting the final version. If you wish to submit a shorter, non-archival paper, please see the Clinical Abstracts Track below.

    Additional Context for Computer Scientists: MLHC is a machine learning conference, and we expect submissions of the same level of quality as those that would be sent to a conference, rather than a workshop.

    RESEARCH TRACK REVIEW PROCESS

    All Research Track submissions will be rigorously peer-reviewed by both clinicians and ML researchers, with an emphasis on what generalizable insights the work provides about machine learning in the context of healthcare.

    At least one author from each submission will be required to review if requested by the program committee, similar to policies from other machine learning conferences. Reviewing for MLHC is double-blind: the reviewers will not know the authors’ identity and the authors will not know the reviewers’ identity.

    Research Track submissions go through double-blind peer review following an initial editorial screening. Desk rejections at this stage will be based on severe formatting violations (including improper use of LLMs; see the LLM Use Policy below), irrelevance to MLHC topics of interest, or the program committee deeming the quality of the contribution not on par for MLHC.

    RESEARCH TRACK FORMAT

    • Please use the full-paper LaTeX files available [here]. The example paper in the file pack contains sample sections. The margins and author block must remain unchanged, and all papers must use 11-point Times font. You must also include the generalizable insights section in the introduction. Please refer to the submission instructions, which include mandatory content and tips on what makes a great MLHC paper.

    • Papers should be between 10 and 15 pages (excluding references and appendix); 15 pages is a hard upper limit.

    • Any use of LLMs or other generative AI in manuscript development must follow the MLHC LLM use policy, which may be found at the end of this document.

    • Papers must be submitted blinded and completely anonymized. Do not include your names, your institution’s name, public GitHub repositories, or any other identifying information in the initial submission. While you should make every effort to anonymize your work — e.g., write “In Doe et al. (2011), the authors…” rather than “In our previous work (Doe et al., 2011), we…” — we realize that a reviewer may be able to deduce the authors’ identities from previous publications or technical reports on the web. This will not be considered a violation of the double-blind reviewing policy on the authors’ part.

    Violations of these policies are grounds for desk rejection.

    RESEARCH TRACK PROCEEDINGS AND PRESENTATIONS

    Accepted submissions will be published through the Proceedings of Machine Learning Research (formerly the JMLR Workshop and Proceedings Track).

    Authors of accepted papers will be invited to present a poster on their work at the conference; between 2 and 4 submissions will be selected for a spotlight oral presentation.

    At least one author of each accepted Research Track paper is required to register for and present at MLHC to confirm publication in PMLR.

    Publications through PMLR are made open access without an article processing fee.

    RESEARCH TRACK DUAL SUBMISSION POLICY

    Research that has previously been published in, or is under review for, an archival publication elsewhere may not be submitted. This prohibition concerns only archival publications/submissions and does not preclude papers accepted to or submitted to non-archival workshops, or preprints (e.g., on arXiv). It is a violation of the dual-submission policy to submit an MLHC Research Track submission to another journal or conference while it is under review at MLHC, or after its acceptance to the MLHC proceedings.

  • In addition to our main Research Track proceedings, we welcome the submission of clinical abstracts (up to 2 pages) to be presented in a non-archival abstract track.

    Clinical abstracts typically pitch clinical problems ripe for machine learning advances or describe translational achievements. Because this track is designed to engage clinicians, the first or senior author must be a clinician (MD/DO, RN, etc.; i.e., your job involves working with patients) or a clinician-in-training (i.e., currently enrolled in an MD/DO or MD/PhD program).

    Topics of interest for clinical abstracts:

    1. Methods: we seek abstracts about data sources and data analyses that resulted in new understanding and/or changes in clinical practice.

    2. Open clinical questions or interesting data sets: we encourage submissions from clinicians and clinical researchers on important directions the MLHC community should tackle together, as well as abstracts describing interesting data sources.

    3. Implementation: we seek end-to-end tools that bring data and data analysis to the clinician/bedside. This can include abstracts describing processing tools or pipelines tailored to health data, or software demos introducing a tool of interest for machine learning researchers and/or clinicians in the community to use. These are often (but not necessarily) open-source tools.

    4. Ethics and policy: research related to ethics, policy, fairness, and other aspects of applying machine learning to healthcare is also of interest.

    Abstracts will not be archived or indexed, but will have the opportunity to be presented as a poster and/or spotlight talk at MLHC. Abstracts that have previously been presented or are under review for an archival publication elsewhere may be submitted, but not if a full-length manuscript on the research has been published elsewhere (e.g., in a journal or archival conference proceedings). The Clinical Abstracts Track is not intended for work-in-progress by primarily computational researchers.

    CLINICAL ABSTRACT TRACK FORMAT

    • Clinical Abstract Track submissions should be two pages or less, using the abstract template [linked here].

    • At least one author from each submission will be required to review if requested by the program committee, similar to policies in other machine learning conferences.

    • Clinical Abstract Track submissions must be submitted blinded and completely anonymized. Do not include your names, your institution’s name, or identifying information in the initial submission. While you should make every effort to anonymize your work — e.g., write “In Doe et al. (2011), the authors…” rather than “In our previous work (Doe et al., 2011), we…” — we realize that a reviewer may be able to deduce the authors’ identities based on the previous publications or technical reports on the web. This will not be considered a violation of the double-blind reviewing policy on the author’s part.

    CLINICAL ABSTRACT TRACK REVIEW PROCESS

    Reviewing for MLHC is double-blind: the reviewers will not know the authors’ identity and the authors will not know the reviewers’ identity. All clinical abstracts will be peer-reviewed by one clinician and one computational reviewer. 

    Please include sufficient detail for a computational reviewer to assess technical correctness, and fully describe the significance of the work for healthcare.

    CLINICAL ABSTRACT TRACK PROCEEDINGS AND PRESENTATIONS

    Abstracts will not be archived. Authors of accepted abstracts will be invited to present a poster on their work at the conference; one or two clinical abstracts will be selected for a spotlight oral presentation.

    CLINICAL ABSTRACT TRACK DUAL SUBMISSION POLICY

    Work in progress, work in submission, and work under review are all welcome (as long as you follow the other venue’s rules). Research that has previously been published may not be submitted.

  • LLMs may be used to assist in the scientific and paper writing process, including to generate figures and code, draft paragraphs, identify relevant references, and edit content for accuracy and clarity. However, the authors are fully and solely responsible for ensuring all content is accurate. We reserve the right to reject a paper if we identify a hallucinated reference or other clear indication of an LLM-generated error.

    The following uses of LLMs are prohibited:

    • generating scientific content or paper content without subsequent human review

    • using prompt injection to manipulate the review system

    The following uses of LLMs are allowed, but the authors are fully responsible for all content:

    • brainstorming or feedback on methodology, experimental design, or other aspects of the scientific content

    • identifying or summarizing relevant references

    • generating code used in experiments or figure generation

    • generating images used in figures

    • drafting or editing paper content