Title here
Summary here
INSPECT releases the weights and code of a 143M-parameter foundation model with a time-to-event (TTE) pretraining objective, trained on 2.57 million deidentified EHRs from Stanford Hospital.
The foundation model is based on the MOTOR architecture (Steinberg et al. 2023), which learns patient representations with a time-to-event pretraining task: predicting when future clinical events will occur, rather than next-token prediction.
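Concretely, a time-to-event objective maximizes the likelihood of each patient's observed (possibly censored) event time under a predicted hazard. The sketch below is an illustrative simplification using a single exponential hazard, not MOTOR's actual implementation (which models many event types jointly); the function name and parameters are hypothetical.

```python
import math

def tte_nll(rate, time, event):
    """Negative log-likelihood of an exponential time-to-event model.

    rate:  predicted hazard (events per unit time), must be > 0
    time:  observed follow-up time
    event: 1 if the event occurred at `time`, 0 if the patient was censored
    """
    # Exponential log-likelihood: event * log(rate) - rate * time.
    # Censored patients (event=0) contribute only the survival term,
    # so the model is not penalized for events that were never observed.
    return -(event * math.log(rate) - rate * time)

# An observed event rewards a higher predicted hazard;
# a censored record rewards a lower one.
loss_event = tte_nll(rate=0.5, time=2.0, event=1)
loss_censored = tte_nll(rate=0.5, time=2.0, event=0)
```

Pretraining then amounts to minimizing this loss, summed over events and patients, with the hazard produced by the transformer from the patient's record.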
These assets will be publicly shared in the coming months under a standard research Data Usage Agreement. We will update this page once the model is available.
For more information, please read the original INSPECT paper.
For questions and feedback, please open an issue on GitHub.