A two-stream deep model is proposed for automated ICD-9 code prediction in an intensive care unit.
• Multi-stream network boost: employing a multi-stream network enhances ICD code prediction performance.
• Tailored stream differentiation: customizing streams for each ICD code optimizes predictio...
The attention weights from its top layer give hints on how the model allocates importance to input tokens. Specifically, the feature vector for code prediction is the weighted average of all embeddings of input tokens from the top layer; higher attention weights indicate more important roles for the corresponding tokens.
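The weighted-average feature described above can be sketched as follows. This is a minimal illustration with hypothetical shapes and random values, not the actual model's code:

```python
import numpy as np

rng = np.random.default_rng(0)
num_tokens, dim = 5, 8
embeddings = rng.normal(size=(num_tokens, dim))  # top-layer token embeddings (hypothetical)
scores = rng.normal(size=num_tokens)             # unnormalized attention scores (hypothetical)

# Softmax turns scores into attention weights that sum to 1.
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Feature vector for code prediction: attention-weighted average over tokens.
feature = weights @ embeddings                   # shape (dim,)
```

Tokens with larger weights contribute more to `feature`, which is what makes the top-layer attention weights interpretable as token importance.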
Matúš Falis, Maciej Pajak, Aneta Lisowska, Patrick Schrempf, Lucas Deckers, Shadia Mikhael, Sotirios A. Tsaftaris, Alison O'Neil. Ontological attention ensembles for capturing semantic concepts in ICD code prediction from clinical text. Association for Computational Linguistics, Empirical Methods in ... doi:10.18653/v1/D19-6220.
(MLM) and Next Sentence Prediction (NSP), mimicking the tasks used in the initial pre-training of BERT [12]. For MLM training, 15% of the words within a given clinical narrative were masked randomly across the entire training dataset. The model was tasked to substitute the masked word with a...
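The 15% random masking step can be sketched as below. This is a simplified illustration only: standard BERT pre-training additionally replaces some selected positions with random tokens or leaves them unchanged, and masks subword pieces rather than whole words.

```python
import random

MASK_TOKEN = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=None):
    """Randomly mask ~mask_prob of tokens for MLM training (simplified sketch)."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(MASK_TOKEN)
            labels.append(tok)    # model must recover the original token here
        else:
            masked.append(tok)
            labels.append(None)   # position not predicted
    return masked, labels
```

During training, the loss is computed only at the positions where `labels` holds an original token.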
ICISS is rapidly becoming the trauma score of choice for mortality prediction and quality improvement processes, and this trend will likely continue as ICD-10-CM becomes available [37].
We designed coding tasks with different code frequency thresholds. For each coding task, we performed code prediction with both feature-based methods and fine-tuned BERT variants, and evaluated coding performance on the test data. Details of the feature extraction methods are given below. Fig...
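Building tasks at different code frequency thresholds amounts to restricting the label set to codes seen often enough in training. A minimal sketch, assuming the data is a list of per-note ICD code lists (the sample codes are hypothetical):

```python
from collections import Counter

def build_label_set(records, threshold):
    """Keep only ICD codes whose training frequency meets the threshold.

    records: list of lists of ICD codes, one list per clinical note.
    """
    counts = Counter(code for codes in records for code in codes)
    return {code for code, n in counts.items() if n >= threshold}

# Hypothetical toy data: three notes with their assigned ICD-9 codes.
records = [["428.0", "401.9"], ["401.9", "250.00"], ["401.9"]]
frequent_codes = build_label_set(records, threshold=2)  # codes seen at least twice
```

Raising the threshold shrinks the label space, which typically makes the prediction task easier but covers fewer codes.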
The XLM model was fine-tuned on English training data and tested for ICD-10 code prediction on an Italian test set, showing very promising results; it could be very useful for low-resource languages. Yu, et al. (2019) [18] used the BioBERT model for automatic annotation of...
Figure 10(a6–c6) shows the effect of using positive clicks so that houses that were not fully predicted obtain the full prediction, demonstrating that a positive click can effectively fill in a prediction vacancy; Figure 10(b6,d6) shows the effect of partial negative-click segmentation on the glued ...