
Do it yourself? When the researcher becomes the subject

Stanford psychologist Russell Poldrack, shown here in his 105th MRI scanning session during an 18-month experiment, is one of a number of researchers who are enlisting as subjects in their own studies. TIM LAUMANN

By Esther Landhuis, Dec. 5, 2016, 12:30 PM
Source: Science Magazine
Photos Source: Science Magazine

Some scientists analyze fruit flies. Others use zebrafish. Many conduct studies with mice. But occasionally, researchers choose to experiment on a different animal: themselves. Consider the medical officer who in the early 1800s fed himself spoiled sausage to determine the source of foodborne botulism. Or the physician who in 1929 performed the world’s first cardiac catheterization on himself, and the young doctor who in 1984 guzzled Helicobacter pylori broth to prove that the bacterium causes ulcers. The latter two went on to win Nobel Prizes, but others haven’t been as fortunate. During the Spanish–American War, when yellow fever was killing thousands of U.S. soldiers, physician Jesse Lazear died after intentionally exposing himself to infected mosquitoes.

Medical martyrdom is rarer these days, in part due to increased regulation of human subject research after World War II, and fewer researchers dying for their work can only be a good thing. Nonetheless, autoexperimentation continues. Access to the subject is matchless, and the allure of big data and personalized medicine seems to be nudging some self-experimenters toward new types of studies. However, the regulatory environment remains somewhat vague, leaving it up to researchers to weigh practicality against ethical considerations. But if care and diligence accompany the appetite for adventure, scientists can responsibly conduct self-experimentation studies that help advance science—and potentially offer some fun and personal benefit to boot.

Balancing ease with ethics

For scientists whose work isn’t particularly risky, it’s hard to beat a prime motivation for self-experimentation: convenience. “It’s easy to draw your own blood and analyze it,” says Laura Stark, a bioethics historian at Vanderbilt University in Nashville. “You don’t have to worry about someone suing you or deciding you can’t use their sample.”

That was a key factor when Lawrence David, a Ph.D. student at the Massachusetts Institute of Technology at the time, and his adviser, bioengineer Eric Alm, sought to monitor how daily activities influenced the human gut and oral microbiomes over the course of a year. They needed to determine feasibility limits—for example, how frequently samples could be collected and how many variables could be measured. When the researchers couldn’t immediately find participants, they decided to enroll themselves. “We thought that by participating, we’d gain firsthand understanding of those limits,” says David, now an assistant professor of molecular genetics and microbiology at Duke University in Durham, North Carolina.

Each day, the two researchers saved spit samples and pooped into sterile bags. They used an iPad app to log their weight and everything they did and ate. Several months into the study, David went to Bangkok for a few weeks but stuck with the regimen, shipping home 3 to 5 pounds of stool on dry ice. That commitment eventually paid off when the results were published.

Russell Poldrack, a psychologist at Stanford University in Palo Alto, California, also had an ambitious study plan that required more than what the average participant would tolerate. That’s what led him to climb into an MRI machine every Tuesday and Thursday morning for 18 months to get his brain scanned. The idea started simmering years before, when Poldrack’s studies to understand psychiatric disorders stalled because they lacked a good control for normal brain function variability over time. At some point, he recalls, while he was directing the Imaging Research Center at the University of Texas (UT) at Austin, artist-in-residence Laurie Frick “really started pushing me, saying, ‘You’ve got this MRI scanner. Why aren’t you getting in there and scanning yourself?’”

While Poldrack was mulling over this possibility, Stanford geneticist Michael Snyder published a 2012 paper describing an “integrative Personal Omics Profile” of a 54-year-old male volunteer—himself. Snyder’s genome was sequenced and analyzed, and over 14 months, the research team made more than 3 billion measurements of his blood, saliva, mucus, urine, and feces. During the study—conducted as a proof of principle and to learn what a baseline “healthy” state looks like—Snyder discovered that he was genetically at risk for type 2 diabetes. With that information and the accompanying data, he was able to investigate biological pathways that kicked in as he developed signs of disease, which could have implications beyond Snyder’s individual health. Seeing Snyder’s work made Poldrack think that his crazy brain study might “not just be a goofy boutique project; it could actually have some scientific impact.”

He was right: His 18-month ordeal produced the most detailed map of functional brain connectivity in a single person to date.

Despite the potential advantages of using oneself as a subject, scientists contemplating this approach should consider research ethics guidelines. In the United States, the National Institutes of Health enacted policies in 1954 that restrict the use of employees as research subjects. The National Research Act, passed by Congress in 1974, requires research involving human subjects to be vetted by an institutional review board (IRB). Current rules, which date from 1981, outline additional protections for vulnerable groups, including pregnant women, children, and prisoners. U.S. federal law does not, however, explicitly address self-experimentation by a scientist or physician, says Jonathan Moreno, a bioethicist at the University of Pennsylvania. As Stark explains, it is “a blind spot in the current human subjects regulations.” That means that, at least for now, it is up to researchers to decide whether they’re comfortable experimenting on themselves and whether they need to seek IRB approval.

Conducting research in this vague regulatory environment can create confusion, even when researchers do everything they can to make sure they’re proceeding according to regulations and requirements. Before Poldrack started his brain study, for example, he submitted a proposal to the IRB at UT Austin, where he worked at the time. The board said that it did not consider his project to be human subjects research and therefore it did not require approval, so Poldrack got started collecting his scans without worrying about any further paperwork.

About 6 months after Poldrack started collecting data, however, the situation became more complicated. Researchers at Washington University School of Medicine in St. Louis learned of the study and wanted to use some of Poldrack’s data. When they checked with their IRB to see whether a formal protocol was required, they hoped the IRB would say it was unnecessary. After all, it was data being collected at a different institution that hadn’t required IRB approval—“essentially just a data transfer from our point of view,” says M.D.-Ph.D. student Tim Laumann, one of the researchers interested in accessing the data. However, the Washington University IRB did require a protocol to be written and approved—a process that took about a month even when expedited, Laumann says.

Looking back, Poldrack suspects that things could have gone more smoothly if he had gotten IRB approval from his institution to begin with. “It would have made data sharing much easier because the data would not have been living in an ethical gray zone”—although, he adds, other aspects of the study, such as the fact that the data cannot be de-identified, “might also have raised issues even with IRB approval.” In the absence of hard-and-fast rules for self-experimentation, researchers wishing to study themselves should trust their best judgment while allowing for hiccups that could arise in this less-charted realm.

The power of doing it yourself

Beyond administrative challenges, self-experimentation studies can raise questions about whether analyses of just a few individuals are scientifically valid. Self-monitoring experiments are not randomized or blinded like traditional human studies, and the experimenter’s personal involvement and motivations could make the research seem less objective.

Despite these concerns and caveats, there are scenarios where self-experimentation may be not only acceptable but optimal. Studies such as Poldrack’s, which aim to correlate hard-to-describe personal experiences such as mood or emotion with concrete measurements, for example, are among them because the researchers have particular expertise that makes them ideal subjects. Researchers “know the categories used to describe feelings and side effects and can articulate them in a way that translates easily into scientific language,” Stark says. Self-experimentation, therefore, can offer a way to calibrate tools and technologies that are otherwise hampered by relying on an individual’s subjective experience.

And for University of California, San Francisco, neuroscientist Adam Gazzaley, who develops video games to help improve brain function, the small sample size is exactly what he wants. The video games automatically adjust their difficulty based on the user’s performance, creating a personalized digital therapy, which is a key part of his lab’s effort to shift “away from just focusing on large populations and focusing more on the individual, the n of 1,” Gazzaley says. “We’re looking to understand more about how to make meaningful statements about data from a single person.”

Adam Gazzaley underwent various measurements, including EEGs, as a participant in his own studies. JO GAZZALEY

Every once in a while, when Gazzaley gave talks about the project, someone in the audience would ask him whether he played the games himself. His answer was “no” until the summer of 2015, when Gazzaley decided to put his time where his mouth was and became a research participant. For 2 months he played an hour of Body Brain Trainer, a physical and cognitive fitness game, three mornings a week. He also did 30 minutes of a meditation game called Meditrain on weeknight evenings, and for 3 weeks he played the newest game, Neurodrummer, which aims to improve cognition through rhythm training. He also had numerous measurements taken via saliva and blood samples, MRIs, EEGs, sleep tracking, heart rate monitoring, and more.

“Playing games I helped invent and being in studies I helped design and validate, but doing it from the perspective of a participant, was really helpful,” Gazzaley says. Experiencing firsthand the challenges of compliance, especially for something “not as quick as a pill,” has inspired Gazzaley to develop ways to not only push people to work harder during the game but also to sustain motivation over the long haul.

As for whether he plans to publish the data collected on himself, he says he might play the games again, perhaps annually, “to get a more longitudinal view.” For now, though, the personal reasons for self-experimentation could be just as strong as the scientific motivation. Now in his late 40s, Gazzaley says he is “approaching the age range of the adults we treat in some of our older studies. We know middle-aged folks have declining cognitive control. This seemed a great way for me to try and get out in front of it.”

Regardless of why scientists engage in self-experimentation, they should be transparent, making a public statement—perhaps a paragraph in the manuscript—explaining why they’re doing a study on themselves and what they hope to learn by conducting the research this way, Moreno says. “It says the researcher isn’t just using patients as guinea pigs.” Time will tell whether these types of studies establish worth that goes beyond provocative one-offs. Then again, with certain research questions, he adds, “if you don’t give it a shot, you may never know.”

doi:10.1126/science.caredit.a1600160

Esther Landhuis

Esther Landhuis is a freelance science journalist based in the San Francisco Bay area.

Why don’t more doctors and scientists use themselves as subjects since they have confidence in their trials? Why don’t patients know about informed consent? Why don’t doctors and scientists get it? Why are clinical trials infamous for their fines? Are unethical medical experiments really a thing of the past? Why are there so many modern-day instances of unethical medical experiments?

Share your comments with the community by posting them below. Share the wealth of health with your friends and family by sharing this article with 3 people today. As always you are the best part of what we do. Keep sharing!

If these articles have been helpful to you and yours, give a donation to Shidonna Raven Garden and Cook Ezine today.


THE NUREMBERG CODE AND ITS IMPACT ON CLINICAL RESEARCH

Posted by Natalie Jarmusik on Tue, Apr 9, 2019
Source: IMARC Research

Created more than 70 years ago following the notorious World War II experiments, this written document established 10 ethical principles for protecting human subjects. 

What Is the Nuremberg Code?

When World War II ended in 1945, the victorious Allied powers established the International Military Tribunal on November 19th, 1945. As part of the Tribunal, a series of trials were held against major war criminals and Nazi sympathizers who held leadership positions in political, military, and economic areas. The first trial conducted under the Nuremberg Military Tribunals in 1947 became known as The Doctors’ Trial, in which 23 physicians from the German Nazi Party were tried for crimes against humanity for the atrocious experiments they carried out on unwilling prisoners of war. Many of the grotesque medical experiments took place at the Auschwitz concentration camp, where Jewish prisoners were tattooed with dehumanizing numbers on their arms, numbers that would later be used to identify their bodies after death.

The Doctors’ Trial is officially titled “The United States of America v. Karl Brandt, et al.,” and it was conducted at the Palace of Justice in Nuremberg, Bavaria, Germany. The trial was held there because the Palace of Justice was one of the few buildings that remained largely intact after the extensive Allied bombing during the war. It is also said to have been chosen symbolically because Nuremberg was the ceremonial birthplace of the Nazi Party. Of the 23 defendants, 16 were found guilty; seven received death sentences and nine received prison sentences ranging from 10 years to life imprisonment. The other seven defendants were acquitted.

The verdict also resulted in the creation of the Nuremberg Code, a set of ten ethical principles for human experimentation. 

What Are The Nuremberg Code’s Ethical Guidelines For Research?

The Nuremberg Code aimed to protect human subjects from enduring the kind of cruelty and exploitation the prisoners endured at concentration camps. The 10 elements of the code are: 

  1. Voluntary consent is essential
  2. The results of any experiment must be for the greater good of society
  3. Human experiments should be based on previous animal experimentation
  4. Experiments should be conducted by avoiding physical/mental suffering and injury
  5. No experiments should be conducted if it is believed to cause death/disability
  6. The risks should never exceed the benefits
  7. Adequate facilities should be used to protect subjects
  8. Experiments should be conducted only by qualified scientists
  9. Subjects should be able to end their participation at any time
  10. The scientist in charge must be prepared to terminate the experiment when injury, disability, or death is likely to occur

Want to learn more about the history of clinical research? Explore IMARC Research’s History of Clinical Research timeline for more detail.

The History of Clinical Research Timeline by IMARC Research

Shidonna Raven Garden and Cook
Source: IMARC Research

The Significance Of The Nuremberg Code

The Nuremberg Code is one of several foundational documents that influenced the principles of Good Clinical Practice (GCP).

Good Clinical Practice is an attitude of excellence in research that provides a standard for study design, implementation, conduct and analysis. More than a single document, it is a compilation of many thoughts, ideas and lessons learned throughout the history of clinical research worldwide.

Several other documents further expanded upon the principles outlined in the Nuremberg Code, including the Declaration of Helsinki, the Belmont Report and the Common Rule. 

Although there has been updated guidance to Good Clinical Practice to reflect new trends and technologies, such as electronic signatures, these basic principles remain the same. The goal has always been—and always will be—to conduct ethical clinical trials and protect human subjects. 

Share your comments with the community by posting them below. Share the wealth of health with your friends and family by sharing this article with 3 people today. As always you are the best part of what we do. Keep sharing!

If these articles have been helpful to you and yours, give a donation to Shidonna Raven Garden and Cook Ezine today.


You Might Be in a Medical Experiment and Not Even Know It


By Alice Dreger, January 31, 2017, 1:34 PM
Source: Discover Magazine
Feature Photo Source: Unsplash, Hush Naidoo 

In the long view, modern history is the story of increasing rights of control over your body – for instance, in matters of reproduction, sex, where you live and whom you marry. Medical experimentation is supposed to be following the same historical trend – increasing rights of autonomy for those whose bodies are used for research.

Indeed, the Nuremberg Code, the founding document of modern medical research ethics developed after the Second World War in response to Nazi medical experiments, stated unequivocally that the voluntary, informed consent of the human subject is essential. Every research ethics code since then has incorporated this most fundamental principle. Exceptions to this rule are supposed to be truly exceptional.

Yet today, more and more medical experimenters in the United States appear to circumvent getting the voluntary, informed consent of those whose bodies are being used for research. What’s more, rather than fighting this retrograde trend, some of the most powerful actors in medical research are defending it as necessary to medical progress.

A few years ago, I fell in with a growing group of professionals in medicine and allied fields such as bioethics who have mobilised to defend the right to informed consent in medical experimentation. As a historian of medicine, I had worked since 1996 with intersex rights activists on improving care for children born with bodies in between the male and female types. In 2009, colleagues alerted me that a group of parents judged ‘at risk’ of having a child born with a particular genetic intersex condition appeared to be unwitting subjects in a medical experiment.

A major researcher and physician was promoting the prenatal use of a drug (dexamethasone) aimed at preventing intersex development. Targeting would-be parents who knew they had this condition running in their families, the researcher told them that the ‘treatment’ had been ‘found safe for mother and child’.

In fact, the US Food and Drug Administration (FDA) has not approved dexamethasone for preventing intersex development, much less found it ‘safe’ for this use. Indeed, the FDA has noted dexamethasone causes harm in foetal animals exposed to it. No one seems to have told the parents that this ‘treatment’ had not gone through anything like the normal route of drug approval: there has been no animal modelling of this use, no blinded control trial for effectiveness, and no long-term prospective safety trials in the US, where thousands of foetuses appear to have been exposed.

Shockingly, at the same time that this researcher was pushing the ‘treatment’ as ‘safe’, she was obtaining grants from the US National Institutes of Health (NIH) to use the same families in retrospective studies to see if it had been safe. A Swedish research group has recently confirmed – through fully consented, prospective studies – that this drug use can cause brain damage in the children exposed prenatally.

As I sought allies in defending the rights of these families, I discovered that, while this was an especially egregious case of failure to obtain informed consent to what amounted to a medical experiment, the lapse was not unique. Public Citizen’s Health Research Group, a Washington-based NGO, has been leading the work in tracking cases where medical researchers fail in their obligations to obtain informed consent.

Recently, Public Citizen, together with the American Medical Student Association, sounded an alarm about two clinical trials, one called iCOMPARE, the other FIRST. In these studies, researchers extended the working hours of newly trained physicians to see if these physicians and their patients were better or worse off with the most inexperienced doctors working longer, more tiring shifts.

The young doctors used in these studies were not given the option of not participating. If their residency programmes participated, they were in. More concerning, their patients were never informed that they were experimental subjects, even though a primary research goal was to see if patients treated by residents working longer shifts would experience higher rates of harm.

Some studies tracked by Public Citizen reveal downright bizarre ethical mistakes. A recent study funded by the US Department of Health and Human Services, led by a US Department of Veterans Affairs researcher, sought to determine whether, if brain-dead kidney donors’ bodies were cooled after brain death, living recipients of the transplanted kidneys did better. The researchers decided they didn’t need to get voluntary consent to the experiment from the living kidney recipients. They simply maintained the dead donors were the experimental subjects.

The largest contemporary fight over failure to obtain informed consent has been over the Surfactant Positive Airway Pressure and Pulse Oximetry Trial (or SUPPORT). This was a large NIH-funded study meant to determine, in part, whether higher or lower levels of oxygen after birth provided very premature babies with benefit or harm. The consent forms for this study did not inform the parents that the experiment’s purpose was to see if, by being randomly assigned to one of two experimental oxygen ranges, babies end up more likely to be blind, neurologically damaged or die.

Most parents also weren’t informed that the researchers would use experimental measuring devices meant to ‘blind’ professional caregivers to the babies’ real oxygen levels to try to make the study more rigorous. Researchers told many parents that the study involved no special risks because all the procedures in the research were supposedly standard of care. This was a demonstrably untrue claim.

In this case, the US Office for Human Research Protections (OHRP) – an agency meant to protect the rights of people in federally funded research – agreed with Public Citizen and an allied group of more than 40 of us in medicine and bioethics that the informed consent for this trial was seriously inadequate. But in a series of emails meant to stay private, top NIH officials pressured the OHRP to back off its criticisms. OHRP is supposed to oversee NIH’s work, not the other way around!

NIH leaders also partnered with the editor of The New England Journal of Medicine to publicly defend this study. The journal’s editor-in-chief tried actively to limit the ability of us critics to respond. Meanwhile, the parents were never officially informed of what happened to their babies.

Those defending these troubling studies often argue that elaborate consent procedures can get in the way of obtaining important scientific results. They say that subjects might encounter the risks of the experiment even in ‘normal’ patient care, so we might as well engage them in studies without scaring them off through frightening research consent forms.

It is true that the current research ethics system in the US is cumbersome, inefficient and dysfunctional. Researchers often find themselves confused and frustrated by the bureaucracies of research ethics systems.

But that is no excuse not to vigorously maintain the first principle of the Nuremberg Code: the voluntary consent of the subject is essential. We can’t afford the risk to medical research that sloppy ethics entail; when the public finds out about the circumvention of informed consent – as in the case of the infamous US Public Health Service syphilis study at Tuskegee – the damage to the integrity and authority of the medical research community is inevitably significant and long-lasting.

The tenets of the Nuremberg Code were not meant only for Nazis. If Nazis presented the only danger to people being used for medical experiments, eliminating the Nazis would have solved our problems. The Nuremberg Code was written to guide all of us, because good intentions are not enough.

This article was originally published at Aeon and has been republished under Creative Commons.

Share your comments with the community by posting them below. Share the wealth of health with your friends and family by sharing this article with 3 people today. As always you are the best part of what we do. Keep sharing!

If these articles have been helpful to you and yours, give a donation to Shidonna Raven Garden and Cook Ezine today.


Department of Computer Science Research


Advancing Robotics Technology for Societal Impact (ARTSI) Alliance

PI: Dr. Chutima Boonthum
Co-PI: Mr. Solomon Isekeje (Department of Fine and Performing Arts)
Source: School of Science, Hampton University
Feature Photo Source: Unsplash, Possessed-Photography

The Department of Computer Science, School of Science, Hampton University received a $125,667 grant (2007-2010) from the National Science Foundation (NSF) to enhance the robotics programs for undergraduate students and to create outreach events for local K-12 students. The award is part of a $2 million grant awarded to the Advancing Robotics Technology for Societal Impact (ARTSI) Alliance, a collaboration of institutions including eight Historically Black Colleges and Universities (HBCUs) and seven Carnegie Research I Institutions.

How do you think robots impact our society? How do you think robots impact other societies? Why?

Share your comments with the community by posting them below. Share the wealth of health with your friends and family by sharing this article with 3 people today. As always you are the best part of what we do. Keep sharing!

If these articles have been helpful to you and yours, give a donation to Shidonna Raven Garden and Cook Ezine today.


Eliot Coleman – The New Organic Grower

Eliot Coleman, The New Organic Grower
Shidonna Raven Garden and Cook

Start something new and you will discover just how much you do not know. This was true for us when we started our Organic Garden. We knew the basics that we learned in science class at school, but that was about it. Well, we had learned some things along the way and knew enough to be dangerous. So, we decided to start at the library for our research. We have found the library to be a wonderful resource and wealth of information, as libraries always are. We were tremendously lucky to have come across this book: The New Organic Grower by Eliot Coleman. We not only found Eliot Coleman to be talented and successful at gardening and farming; he is also a thoughtful and well-read student of the discipline and a Master at his craft. He deals in paradigms and concepts as well as detailed applications of his craft. He is someone we definitely hope to meet one day.

Sharon weeding in the garden
Shidonna Raven Garden and Cook

As we stated, we had much room to grow in the gardening department when it came to learning. The New Organic Grower by Eliot Coleman answered many questions for us as people very interested in but new to organic gardening. He helped dispel myths and common misunderstandings. He gave us a practical and authentic definition of what organic really means. It is one thing to preach organics and something different entirely to live it. Eliot not only practices what he preaches, he helped us implement it in our own garden with some practical and realistic things we could do in a garden of our size. I have referred to many of Eliot Coleman’s suggestions in his book and shared them with you because we think they are that valuable. He introduced us to Soil Blocks, and so far they have been a tremendous game changer for our garden and disease prevention. This book has by far been the most valuable book we checked out of the library and the most valuable resource we have found on our organic journey thus far.

What have you found to be the most valuable resource for you thus far? How has this helped you become a better consumer of food not just gardener? Do you have an organic resource that has been helpful to you? Share it with us by leaving a comment and sending us pictures. As always you are the best part of what we do.

The Garden before we Broke Ground
Shidonna Raven Garden and Cook