By Stephanie Sargent, Commercial Manager
It’s often the first question asked when discussing data from wearables, and rightly so – ‘is it validated?’
This is vital for several reasons. Study teams need to consider regulators, patient population groups, therapeutic areas and traceability when generating the most accurate and reliable data possible within a clinical trial. So, what is ‘validation’?
The Digital Medicine Society (DiME) provides a useful definition and tool, known as the V3 framework – which encompasses verification, analytical validation, and clinical validation (V3). The three-component V3 framework combines established practices from both software and clinical development to establish the shared foundation for evaluating whether digital clinical measures are fit-for-purpose.
At Activinsights, we are proud of over a decade’s heritage in public health research across 200+ global institutions, reaching 40+ countries. Over the last couple of years, our focus has been to bring learnings from the research sector into the clinical trials landscape, delivering relevant and meaningful digital measures for both patient and clinician – Activinsights’ technologies are now used in 100+ clinical trials across 20+ therapeutic areas.
A breadth of knowledge around validation in different population groups and digital measures, built on peer-reviewed public health research, can be translated into digital clinical measures. One example is Activinsights' supporting metrics for the objective monitoring of fidgeting behaviours, where early work has been done with children with Autism Spectrum Disorder. This knowledge in algorithm development may also be relevant for dementia trials, as an increase in night-time wandering behaviours has been identified as a biomarker for disease progression. This is where continuous remote monitoring comes into its own, providing valuable insights on objective lifestyle and disease progression alongside drug efficacy, outside of clinic visits.
Throughout the trial, it remains key that collecting objective data is unobtrusive and of low participant burden. Another example of the benefit of algorithm development from peer-reviewed, open-source raw data formats is understanding steps in more detail for specific therapeutic areas. Steps appear to be a commonly requested measure within clinical trials, likely because they are easily interpretable across both clinical and patient population groups.
So, the client says: ‘Do you have a step measure available, and is it validated?’
Activinsights' reply: ‘Yes…’
However, it’s not a one-size-fits-all approach. We have validated step measures, but ultimately it depends on the population group and therapeutic area in question. Not all steps are measured the same. A healthy child may take the same number of total steps as a Parkinson’s patient, but how often and when those steps occur, along with the gait information around them, could be dramatically different. This demonstrates how inextricably linked algorithm development and validation are: it is not as simple as selecting an ‘off-the-shelf’ algorithm validated for a one-size-fits-all approach within clinical trials.
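To illustrate why a step algorithm cannot simply be lifted off the shelf, here is a minimal, purely illustrative sketch of threshold-based peak counting over raw accelerometer magnitude. The function name, thresholds and sample values are all hypothetical, not Activinsights' actual method; the point is that parameters such as the peak threshold and minimum step spacing would need re-tuning and re-validation for each population group and gait profile.

```python
# Illustrative only: a naive threshold-based step counter over a series of
# accelerometer magnitudes (in g). All parameter values are hypothetical;
# real step algorithms must be tuned and validated per population group.

def count_steps(magnitudes, threshold=1.2, min_gap=3):
    """Count local peaks above `threshold`, at least `min_gap` samples apart.

    A Parkinson's gait, for example, might call for a different threshold
    and a wider min_gap (slower cadence) than a healthy child's gait.
    """
    steps = 0
    last_step = -min_gap  # allow a step near the start of the series
    for i in range(1, len(magnitudes) - 1):
        is_peak = (magnitudes[i] > magnitudes[i - 1]
                   and magnitudes[i] >= magnitudes[i + 1])
        if is_peak and magnitudes[i] > threshold and i - last_step >= min_gap:
            steps += 1
            last_step = i
    return steps

# A short synthetic walking signal: resting near 1 g with three peaks.
signal = [1.0, 1.5, 1.0, 1.0, 1.6, 1.0, 1.0, 1.4, 1.0]
print(count_steps(signal))  # → 3
```

Changing `threshold` or `min_gap` changes the count on the very same raw data, which is exactly why validation must be tied to the algorithm, the device and the population together rather than to a generic ‘steps’ label.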
DiME has recently highlighted the challenge that differing measures present across the clinical trials industry, and the need for a core set of digital clinical measures, accepted with evidence, as a starting point. This highlights the importance of removing vendor biases associated with black-box algorithms, hence the trend of working in an open-source environment with reproducible raw data sets, such as those from the GENEActiv, to drive the sector forward.