Elodie Germani1, Nikhil Baghwat2, Mathieu Dugre3, Remi Gau2, Albert Montillo4, Kevin Nguyen4, Andrzej Sokołowski3, Madeleine Sharp2, Jean-Baptiste Poline2,+, Tristan Glatard3,+

1 - Univ Rennes, Inria, CNRS, Inserm, France
2 - Department of Neurology and Neurosurgery, McGill University, Montreal, Canada
3 - Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada
4 - Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, USA
+ Equal contributions

elodie.germani@irisa.fr

Abstract

Parkinson's disease (PD) is a common neurodegenerative disorder with a poorly understood physiopathology. In clinical practice, the diagnosis of early stages and the prediction of disease progression remain challenging due to the absence of established biomarkers. Several biomarkers obtained using neuroimaging techniques such as functional Magnetic Resonance Imaging (fMRI) have been studied recently. However, the reliability and generalizability of neuroimaging-based measurements are susceptible to several sources of variability, including those introduced by different analysis methods or population sampling. In this context, an evaluation of the robustness of such biomarkers is essential. This study is part of a larger project investigating the replicability of potential neuroimaging biomarkers of PD. Here, we attempt to reproduce (same data, same method) and replicate (different data or method) the models described in [1] to predict an individual's current PD state and progression using demographic, clinical and neuroimaging features (fALFF and ReHo extracted from resting-state fMRI). We used the Parkinson's Progression Markers Initiative dataset (PPMI, ppmi-info.org), as in [1], and tried to reproduce the original cohort, imaging features and machine learning models as closely as possible using the information available in the paper and the code.
We also investigated methodological variations in cohort selection, feature extraction pipelines and sets of input features. Using the reproduction workflow, we obtained better-than-chance performance for all our models (R2 > 0), but this performance remained very different from that reported in the original study. The challenges encountered while reproducing and replicating the original work are likely explained by the complexity of neuroimaging studies, in particular in clinical settings. We provide recommendations to facilitate the reproducibility of such studies in the future, for instance the use of version control tools, the standardization of pipelines, and the publication of analysis code and derived data.
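The R2 > 0 criterion above has a concrete interpretation: a model scores R2 = 0 when it does no better than always predicting the mean of the observed outcomes, so any positive value indicates better-than-chance performance. A minimal sketch of the coefficient of determination, using plain Python and hypothetical scores (this is not the original study's evaluation code):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical clinical scores for 5 subjects (illustrative values only).
observed  = [20.0, 35.0, 27.0, 42.0, 31.0]
baseline  = [31.0] * 5              # always predict the mean: chance level
predicted = [22.0, 33.0, 29.0, 40.0, 30.0]

print(r_squared(observed, baseline))   # 0.0: no better than chance
print(r_squared(observed, predicted))  # positive: better than chance
```

A constant mean prediction makes SS_res equal to SS_tot, which is exactly why R2 = 0 marks the chance-level baseline used in the abstract.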
# Search
curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "TFwQuNHhEXMtkG1dE6C_RbIlBrIigtExMQh7kuijP5k", "query": "What is alexanDRIA library?"}'
# Query
curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "TFwQuNHhEXMtkG1dE6C_RbIlBrIigtExMQh7kuijP5k", "level": 2}'
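The same two calls can be issued from Python. Below is a minimal standard-library sketch: the endpoint URLs, the `x-api-key` header, and the JSON fields are copied from the curl commands above, while `<YOUR_API_KEY>` remains a placeholder. The requests are only constructed here, not sent (sending requires a valid key):

```python
import json
import urllib.request

API_KEY = "<YOUR_API_KEY>"  # placeholder, as in the curl examples above
CONTRACT_ID = "TFwQuNHhEXMtkG1dE6C_RbIlBrIigtExMQh7kuijP5k"

def build_request(endpoint: str, payload: dict) -> urllib.request.Request:
    """Construct a POST request mirroring the curl commands above."""
    return urllib.request.Request(
        url=f"https://search.dria.co/hnsw/{endpoint}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

# Text search (the /search endpoint)
search_req = build_request("search", {
    "rerank": True,
    "top_n": 10,
    "contract_id": CONTRACT_ID,
    "query": "What is alexanDRIA library?",
})

# Vector query (the /query endpoint)
query_req = build_request("query", {
    "vector": [0.123, 0.5236],
    "top_n": 10,
    "contract_id": CONTRACT_ID,
    "level": 2,
})

# To actually send one: urllib.request.urlopen(search_req), with a valid key.
print(search_req.full_url)
```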