An expert panel found that 16 freely accessible online tests for Alzheimer's disease scored poorly on scales of overall scientific validity, reliability and ethical factors, according to new data reported today at the Alzheimer's Association International Conference® 2013 (AAIC® 2013) in Boston.

"As many as 80 percent of Internet users, including a growing proportion of older adults, seek health information and diagnoses online," said Julie Robillard, Ph.D., a postdoctoral fellow at the National Core for Neuroethics at the University of British Columbia in Vancouver, British Columbia, Canada, who presented the data at AAIC 2013.

"Self-diagnosis behavior in particular is increasingly popular online, and freely accessible quizzes that call themselves 'tests' for Alzheimer's are available on the Internet. However, little is known about the scientific validity and reliability of these offerings and ethics-related factors including research and commercial conflict of interest, confidentiality and consent. Frankly, what we found online was distressing and potentially harmful," Robillard added.

According to the Alzheimer's Association 2013 Alzheimer's Disease Facts and Figures report, more than 5 million Americans are living with Alzheimer's disease. By 2050, the number of people with Alzheimer's could reach 13.8 million. Other estimates suggest that number could be as high as 16 million.

"The number of people with Alzheimer's is projected to rise significantly as more and more people age into greater risk for developing the disease," said Maria Carrillo, Ph.D., Alzheimer's Association vice president of medical and scientific Relations. "Especially in that context, active promotion of healthy aging is a priority for the Alzheimer's Association, as is the delivery of accurate, reliable and ethical information and services."

Alzheimer's disease is the sixth-leading cause of death in the United States and the only one among the leading causes of death that cannot be prevented, cured or even slowed.

Robillard and colleagues at the University of British Columbia used information-mining techniques to retrieve 16 online tests for Alzheimer's disease. Unique monthly visitors for the parent sites hosting the online tests ranged from 800 to 8.8 million.

A panel of experts including geriatricians, human-computer interaction specialists, neuropsychologists and neuroethicists reviewed the tests, specifically evaluating the scientific validity and reliability of the assessments, their human-computer interaction features and ethics-related factors. The tests were evaluated on a scale from 1 (very poor) to 10 (excellent).

The researchers found that most of the tests (12 of 16) scored "poor" or "very poor" for overall scientific validity and reliability. These tests "are not useful for the diagnosis of Alzheimer's disease," Robillard said.

All 16 tests scored "poor" or "very poor" on the evaluation criteria for ethical factors. According to Robillard, ethical issues with the tests included overly dense or absent confidentiality and privacy policies, failure to disclose commercial conflicts of interests, failure to meet the stated scope of the test and failure to word the test outcomes in an appropriate and ethical manner.

The majority of tests (10 of the 16) scored "fair" for appropriateness of human-computer interface for an older adult population. According to the researchers, this suggests that the visual aspects of the tests and the motor tasks required would be suitable for older users.

"Freely accessible diagnostic tests that lack scientific validity and conform poorly to guidelines around consent, conflict of interest and other ethical considerations have the potential to harm a vulnerable population and negatively impact their health," Robillard said. "Further evidence and informed policy are needed to promote the greatest benefits from tools and information available on the Internet."

Some examples of evaluation criteria used by the expert panel were:

  • For validity and reliability:
      • Are the content and breadth appropriate to achieve the test's claims?
      • Are test questions based on current, peer-reviewed evidence?
      • Would the test have test-retest reliability?
  • For user interface:
      • Are the instructions clear and easy to understand?
      • Is the test visually adequate (font size, contrast, etc.)?
      • Does the test take varying levels of computer knowledge into consideration?
  • For ethics:
      • Are issues of privacy and data collection discussed?
      • Are conflicts of interest clearly stated?
      • Is the wording of the outcomes ethically appropriate?