Sydney Brenner, a Decipherer of the Genetic Code, Is Dead at 92

Sydney Brenner, a South African-born biologist who helped determine the nature of the genetic code and shared a Nobel Prize in 2002 for developing a tiny transparent worm into a test bed for biological discoveries, died on Friday in Singapore. He was 92.

He had lived and worked in Singapore in recent years, affiliated with the government-sponsored Agency for Science, Technology and Research, which confirmed his death.

A witty, wide-ranging scientist, Dr. Brenner was a central player in the golden age of molecular biology, which extended from the discovery of the structure of DNA in 1953 to the mid-1960s. He then showed, in experiments with a roundworm known as C. elegans, how it might be possible to decode the human genome. That work laid the basis for the genomic phase of biology.

Later, in a project still coming to fruition, he focused on understanding the functioning of the brain.

“I think my real skills are in getting things started,” he said in his autobiography, “My Life in Science” (2001). “In fact, that’s what I enjoy most, the opening game. And I’m afraid that once it gets past that point, I get rather bored and want to do other things.”

As a young South African studying at Oxford University, he was one of the first people to view the model of DNA that had been constructed in Cambridge, England, by Francis H. C. Crick and James D. Watson. He was 22 at the time and would call it the most exciting day of his life.

“The double helix was a revelatory experience; for me everything fell into place, and my future scientific life was decided there and then,” Dr. Brenner wrote.

Impressed by Dr. Brenner’s insights and ready humor, Dr. Crick recruited him to Cambridge a few years later. Dr. Crick, a theoretical biologist, liked to have with him someone he could bounce ideas off. Dr. Watson had played this role in the discovery of DNA, and Dr. Brenner became his successor, sharing an office with Dr. Crick for 20 years at the Medical Research Council Laboratory of Molecular Biology at Cambridge.

The fundamental elements of molecular biology were uncovered during this period, many of them by Dr. Crick or Dr. Brenner. Their chief pursuit for 15 years was to understand the nature of the genetic code.

Dr. Brenner, left, received the Nobel Prize from King Carl Gustaf of Sweden at the Concert Hall in Stockholm in December 2002. Dr. Brenner shared it with two other scientists and believed he had deserved a second Nobel, for his work on the decoding of DNA. Credit: Henrik Montgomery, via Associated Press

Dr. Brenner made a decisive contribution to solving the code with an ingenious series of experiments in which he altered the DNA of a virus that attacks bacteria.

He showed that by making a series of three mutations, the virus would first lose, then regain, its ability to make a certain protein, as if the cell’s reading of the DNA “tape” had come back into correct phase. The experiment showed that DNA is a triplet code, with each group of three DNA letters specifying one of the 20 kinds of amino acids of which proteins are composed. Dr. Brenner gave these triplets the name codon.
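The reading-frame logic of that experiment can be sketched in a few lines of Python. This is a toy illustration, not Brenner's actual analysis: a single inserted base scrambles every downstream codon, while three insertions bring the original codons back into frame.

```python
# Reading DNA three letters at a time: one insertion shifts the frame,
# three insertions restore it, just as in Brenner's mutation series.
def codons(dna):
    """Split a DNA string into consecutive three-letter codons."""
    return [dna[i:i+3] for i in range(0, len(dna) - len(dna) % 3, 3)]

original = "ATGGCTGACGAA"
one_insert = "A" + original       # +1 base: frame shifts
three_inserts = "AAA" + original  # +3 bases: original frame recovered

print(codons(original))       # ['ATG', 'GCT', 'GAC', 'GAA']
print(codons(one_insert))     # ['AAT', 'GGC', 'TGA', 'CGA'] - every codon changed
print(codons(three_inserts))  # ['AAA', 'ATG', 'GCT', 'GAC', 'GAA'] - codons reappear
```

A second insertion after the first would shift the frame again; only multiples of three restore it, which is what showed the code reads in triplets.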

Other researchers were then able to figure out which codon specified each of the 20 amino acids. It fell to Dr. Brenner to identify two of the three triplets that signal “Stop” to the cell’s protein-making machinery.

Dr. Brenner was also the first to conclude that there must be some means for copying the information in DNA and conveying it to the cellular organelles that manufacture proteins. That intermediary, now known as messenger RNA, was discovered in 1960 in an experiment devised by Dr. Brenner and others.

With the fundamental problems of molecular biology solved, as they saw it, Dr. Brenner and Dr. Crick looked for new areas of inquiry. Dr. Brenner decided to approach the brain, but he realized he needed a simpler animal to study than the fruit fly, a standard organism used in laboratories.

He settled on Caenorhabditis elegans, or C. elegans, a tiny, transparent roundworm that dwells in the soil, eats bacteria and completes its life cycle in three weeks. That worm has spun off many developments, starting with the decoding of the human genome.

Using the worm, Dr. Brenner and his colleagues first worked out methods for breaking a genome into fragments, multiplying each fragment in a colony of bacteria, and then decoding each cloned fragment with DNA sequencing machines. His colleagues John Sulston and Robert Waterston completed the worm’s genome in 1998, and they and others used the same methods to decode the human genome in 2003.

Another major project, made possible because of the worm’s transparency, was to track the lineage of all 959 cells in the adult worm’s body, starting from the single egg cell. This feat, accomplished so far for no other animal, made clear that many cells are programmatically killed during development, leading to the discovery by H. Robert Horvitz of the phenomenon of programmed cell death.

The topic assumed an importance that transcended worm biology when it emerged that programmed cell death is supposed to occur in damaged human cells, and that cancer can thwart this process.

For their work on programmed cell death, Dr. Brenner, Dr. Sulston (who died last year) and Dr. Horvitz were awarded the Nobel Prize in Physiology or Medicine in 2002. But many people, including Dr. Brenner himself, believed he should have been awarded a Nobel much earlier for his and Dr. Crick’s work on the genetic code.

Dr. Brenner, seated second from right, with other 1971 winners of the prestigious Lasker Award in medical science. The others, from left, are the medical researcher Edward D. Freis and the geneticists Seymour Benzer and Charles Yanofsky. At rear are the heart surgeon Dr. Michael E. DeBakey, who was chairman of the Lasker jury, and Mary Lasker, president of the Albert and Mary Lasker Foundation. Credit: Eddie Hausner/The New York Times

“On more than one occasion, in fact, he has claimed that he is delighted to have been awarded two Nobel Prizes — the first he never received!” his biographer, Errol C. Friedberg, wrote.

Sydney Brenner was born to Jewish immigrants in Germiston, a small town near Johannesburg, on Jan. 13, 1927. His father, Morris, a cobbler who could not read or write, had fled Lithuania to escape conscription in the czar’s army. His mother, Leah (Blecher) Brenner, was an émigré from Latvia.

Sydney was taught to read by a neighbor. When a customer at his father’s shop learned that Sydney, at age 4, could read English fluently but that his father could not afford to send him to school, the customer paid the boy’s tuition.

At 15, Sydney won a scholarship to study medicine at the University of the Witwatersrand in Johannesburg. The scholarship covered only his fees, but he managed to afford university life by earning the equivalent of five cents a day by attending synagogue to help form a minyan, the quorum of 10 men required for public prayer.

During his medical training he became interested in scientific research while growing disenchanted with clinical medicine. After finishing medical school in 1951, he won a scholarship to Oxford to work on bacteriophages, the viruses that attack bacteria.

The scholarship required him to return to South Africa. In 1952, he married a fellow South African, May (Covitz) Balkind, who was divorced and had a son by an earlier marriage. She went on to a career as an educational psychologist, and she and Dr. Brenner had three children of their own.

Dr. Crick was eventually able to find Dr. Brenner a post in Cambridge, and in 1956 he returned with his family to England for good.

Dr. Crick, a physicist by training, was a theoretician, but Dr. Brenner was deeply interested in the practice of biology as well. He loved the laboratory, and he loved designing elegant experiments. As a student in South Africa, he had built his own centrifuge. If he had wanted to stain a cell, he first had to synthesize the dye.

At Oxford, “he threw himself into bacteriophage research with the energy of a man digging a tunnel out of prison,” Horace Freeland Judson wrote in “The Eighth Day of Creation” (1979), a history of molecular biology.

Dr. Brenner’s most ambitious project after the genetic code — understanding the brain of the worm — was in a formal sense a failure. His colleague John White, after a decade peering through a microscope, established that the worm’s brain consists of 302 neurons, with more than 7,000 connections between them. But the job of then computing the worm’s behavior, which was Dr. Brenner’s goal, has so far proved too daunting.

Dr. Brenner at home in the La Jolla section of San Diego in 2003. For many years he divided his time between California and Cambridge, England, before taking up permanent residence in Singapore. Credit: Robert Burroughs

Dr. Brenner’s wife died in 2010. His survivors include their three children, Belinda, Carla and Stefan. His stepson, Jonathan Balkind, died last year.

In the early 1990s, Dr. Brenner went to work at the Scripps Research Institute in San Diego on a fellowship. In 1996, with a multimillion-dollar grant from the Philip Morris Company, he established and directed the nonprofit Molecular Sciences Institute in Berkeley, with a mission, in part, to track research in various genome sequencing projects.

From 1994 to 2000 he wrote an opinion column for the journal Current Biology. He originally called it Loose Ends but later changed the name to False Starts when it was moved to the front of the publication.

Among his many honors, besides the Nobel, was the prestigious Lasker Award in medical science, given to him in 1971.

Dr. Brenner held positions at Cambridge and at the Salk Institute in San Diego, where he was appointed, as he termed it, “extinguished professor.”

He had divided his time between Cambridge and California until, with his health declining, he took up permanent residence in Singapore while working as an adviser to the research agency. In 2003 he was named an honorary citizen of Singapore. He had been advising the Singapore government on science policy since the 1980s and was instrumental in the founding of the Molecular Engineering Laboratory there.

British and Singaporean news organizations said that Dr. Brenner, a former heavy smoker, had been treated for lung disease in recent years.

Known for his wit, Dr. Brenner boasted that aside from science, “the other thing I’m rather good at is talking.”

It was hard for any listener not to fall under his spell. He spoke slowly and precisely in a lingering South African accent, his sentences long and perfectly constructed and often ending with a joke. Insights into the nature of the cell would alternate with his playful inventions, like Occam’s broom — “to sweep under the carpet what you must to leave your hypotheses consistent” — or Avocado’s number, “the number of atoms in a guacamole.”

For a short time he had been director of the Cambridge Laboratory of Molecular Biology, but he did not much enjoy working as an administrator.

“You become a mediator between two impossible groups,” he said, “the monsters above and the idiots below.”

This article was originally published in The New York Times.

Revealing the wheat genome could lead to hypoallergenic bread

One of the most surprising things about the announcement this week that the genome of wheat has been fully mapped is how long it has taken. As well as the human genome, a draft of which was completed in 2000, scientists have tackled everything from rice to the clearhead icefish and the black cottonwood tree. The world’s most widely cultivated crop has taken all this time because it was genuinely difficult: the “Mount Everest” of plant genetics, according to some.

That difficulty arises from the fact that wheat is not one genome but three overlapping and similar ones, the result of natural hybridisation. It is more than five times the size of the human genome and comprises some 107,000 genes (humans have about 24,000). Genomes are generally figured out by breaking them into smaller pieces, sequencing those pieces and then working out how they fit together. With so many similar-looking sequences, the international team of researchers, whose findings were reported in Science and whose efforts focused on a variety of bread wheat called Chinese Spring, had a huge job on their hands.
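The assembly step, and why near-identical sequences make it hard, can be sketched with a toy overlap-merge in Python. This is an illustration of the general principle, not the consortium's actual pipeline: reads are stitched together wherever the end of one matches the start of the next, so three similar sub-genomes produce many ambiguous matches.

```python
# Toy genome assembly: merge short reads by their overlaps.
# With repetitive or near-identical sequences (as in wheat's three
# sub-genomes), many reads share the same overlap, making the true
# ordering ambiguous - hence the difficulty.
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is a prefix of b (>= min_len), else 0."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

reads = ["ATGGCT", "GCTGAC", "GACGAA"]  # invented fragments for illustration
assembled = reads[0]
for r in reads[1:]:
    k = overlap(assembled, r)
    assembled += r[k:]

print(assembled)  # ATGGCTGACGAA
```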

Their achievement comes at an opportune time. Humans have been tinkering with wheat for almost 10,000 years, but new tools are becoming available for the precise manipulation of genomes. Gene-editing using a technique called CRISPR, along with a fully annotated genetic sequence, promises a new era in wheat cultivation, introducing traits to improve yields, provide greater pest resistance and to develop hardier varieties.

Of particular interest will be how decoding the genes might contribute to understanding, and perhaps even mitigating, various immune diseases and allergies associated with eating bread. This possibility is explored by Angela Juhász of Murdoch University, in Western Australia, and her colleagues in an associated paper in Science Advances.

Coeliac disease, for instance, is an immune reaction to eating gluten; the related genes are the glutenins and gliadins that are expressed in the starchy endosperm of the wheat grain. A different set of allergens, including amylase trypsin inhibitors found in a thin layer of cells that surround the endosperm, are implicated in an illness called baker’s asthma; these could be of concern to people who suffer non-coeliac wheat sensitivity.

One possibility is using diagnostic techniques to identify wheat varieties that contain gluten which is easier to digest, says Rudi Appels, another of the associated paper’s authors. Normally the gut can break down the large proteins found in gluten, but when this process fails and those proteins arrive in the lower gut they interact with the gut’s membrane and cause immune problems. In the future, says Dr Appels, wheat might also be fine-tuned to be less allergenic. This might be done by editing the wheat genome so that it contains more digestible proteins.

Those who have trouble with gluten may find, however, that the source of their problem lies more in the processing of bread rather than the genetics of wheat. Dr Appels says that many commercial bakers use processes that eliminate part of the traditional fermentation stage in making bread. And his guess is that this fermentation would have broken up the problematic proteins into smaller and more digestible pieces. This might explain why some coeliacs (and, indeed, some others who complain that bread is indigestible) can happily eat sourdough bread, which is still made using traditional methods.

It also seems that non-coeliac wheat sensitivity might not be due to gluten at all, but to poorly absorbed carbohydrate components of wheat (fructans and galacto-oligosaccharides), along with another allergen, the amylase trypsin inhibitors, which are implicated in activating the innate immune system. Again, fermented bread may have fewer of these hard-to-digest components. Now that scientists have the genome, such theories should be easier to test.



This article was originally published in The Economist.

Large Study Identifies Genetic Variants Linked to Risk Tolerance and Risky Behaviors

An international group that includes researchers at University of California San Diego School of Medicine has identified 124 genetic variants associated with a person’s willingness to take risks, as reported in a study published January 14 in Nature Genetics.

The researchers emphasize that no variant on its own meaningfully affects a particular person’s risk tolerance or penchant for making risky decisions — such as drinking, smoking, speeding — and non-genetic factors matter more for risk tolerance than genetic factors. The study shows evidence of shared genetic influences across both an overall measure of risk tolerance and many specific risky behaviors.

The genetic variants identified in the study open a new avenue of research on the biological mechanisms that influence a person’s willingness to take risks.

“Being willing to take risks is essential to success in the modern world,” said study co-author Abraham Palmer, PhD, professor of psychiatry and vice chair for basic research at UC San Diego School of Medicine. “But we also know that taking too many risks, or not giving enough weight to the consequences of risky decisions, confers vulnerability to smoking, alcoholism and other forms of drug addiction.”

Palmer’s lab, which includes co-author Sandra Sanchez-Roige, PhD, is working to understand the genetic basis of individual differences in impulsive and risky decision-making styles. They want to understand the fundamental molecular and cellular processes that shape human behavior, and learn how to prevent and treat drug abuse.

“Risk-taking is thought to play a role in many psychiatric disorders,” said co-author Murray Stein, MD, MPH, Distinguished Professor in the departments of Psychiatry and Family Medicine and Public Health, and vice-chair for clinical research in psychiatry at UC San Diego School of Medicine. “For example, patients with anxiety disorders may perceive increased risk in certain situations and therefore avoid them unnecessarily. Understanding the genetic basis for risk tolerance is critical to understanding these disorders and developing better treatments.”

The team measured participants’ overall risk tolerance based on self-reports. They found that genetic variants associated with overall risk tolerance tend to also be associated with more risky behaviors, such as speeding, drinking, tobacco and cannabis consumption, and with riskier investments and sexual behaviors. They also found shared genetic influences on overall risk tolerance and several personality traits and neuropsychiatric traits, including ADHD, bipolar disorder, and schizophrenia.

The effects of each of the 124 genetic variants on an individual basis are all very small, but the researchers found their combined impact can be significant.

“The most important variant explains only 0.02 percent of the variation in overall risk tolerance across individuals,” said senior author Jonathan Beauchamp, PhD, assistant professor of economics at the University of Toronto. “However, the variants’ effects can be combined to account for greater variation in risk tolerance.”

The researchers created a polygenic score, which captures the combined effects of 1 million genetic variants and statistically accounts for approximately 1.6 percent of the variation in general risk tolerance across individuals. They say the score could be used to study how genetic factors interact with environmental variables to affect risk tolerance and risky behaviors, but they caution that the score cannot meaningfully predict a particular person’s risk tolerance or risk taking behavior.
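Mechanically, a polygenic score is just a weighted sum: each variant's estimated per-allele effect times the number of effect alleles a person carries, summed across variants. The sketch below uses three invented variants and effect sizes purely for illustration; the study's actual score combined roughly a million variants and still accounted for only about 1.6 percent of variation.

```python
# Hedged sketch of a polygenic score: sum of (effect size x allele count).
# Variant names and effect sizes here are hypothetical, for illustration only.
effect_sizes = {"rs_a": 0.02, "rs_b": -0.01, "rs_c": 0.005}  # per-allele effects
genotypes = {"rs_a": 2, "rs_b": 1, "rs_c": 0}  # copies (0, 1, or 2) of each effect allele

score = sum(effect_sizes[v] * genotypes[v] for v in effect_sizes)
print(round(score, 3))  # 0.03
```

Because each individual effect is tiny, such a score ranks groups usefully in aggregate but, as the authors caution, says little about any one person.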

The 124 genetic variants associated with risk tolerance are located in 99 separate regions of the genome. The study found no evidence to support previously reported associations between risk tolerance and genes related to the neurochemicals dopamine or serotonin, which are involved in the processing of rewards and mood regulation.

Instead, the findings suggest that the neurochemicals glutamate and GABA contribute to variation in risk tolerance across individuals. Both are important regulators of brain activity in humans and animals — glutamate is the most abundant neurotransmitter in the body and boosts communication between neurons, whereas GABA inhibits it.

“Our results point to the role of specific brain regions — notably the prefrontal cortex, basal ganglia and midbrain — that have previously been identified in neuroscientific studies on decision-making,” Beauchamp said. “They conform with the expectation that variation in risk tolerance is influenced by thousands, if not millions, of genetic variants.”

The data for this study were from the UK Biobank, the personal genomics company 23andMe, and 10 other, smaller genetic datasets.

The study was led by 96 researchers in the Social Science Genetic Association Consortium, which investigates the influence of genetics on human behavior, well-being and social science-related outcomes through large-scale studies of human genomes.

The Biological Roots of Intelligence

In 1987, political scientist James Flynn of the University of Otago in New Zealand documented a curious phenomenon: broad intelligence gains in multiple human populations over time. Across 14 countries where decades’ worth of average IQ scores of large swaths of the population were available, all had upward swings—some of them dramatic. Children in Japan, for example, gained an average of 20 points on a test known as the Wechsler Intelligence Scale for Children between 1951 and 1975. In France, the average 18-year-old man performed 25 points better on a reasoning test in 1974 than did his 1949 counterpart.1

Flynn initially suspected the trend reflected faulty tests. Yet in the ensuing years, more data and analyses supported the idea that human intelligence was increasing over time. Proposed explanations for the phenomenon, now known as the Flynn effect, include increasing education, better nutrition, greater use of technology, and reduced lead exposure, to name but four. Beginning with people born in the 1970s, the trend has reversed in some Western European countries, deepening the mystery of what’s behind the generational fluctuations. But no consensus has emerged on the underlying cause of these trends.

A fundamental challenge in understanding the Flynn effect is defining intelligence. At the dawn of the 20th century, English psychologist Charles Spearman first observed that people’s average performance on a variety of seemingly unrelated mental tasks—judging whether one weight is heavier than another, for example, or pushing a button quickly after a light comes on—predicts our average performance on a completely different set of tasks. Spearman proposed that a single measure of general intelligence, g, was responsible for that commonality.

Scientists have proposed biological mechanisms for variations among individuals’ g levels ranging from brain size and density to the synchrony of neural activity to overall connectivity within the cortex. But the precise physiological origin of g is far from settled, and a simple explanation for differences in intelligence between individuals continues to elude researchers. A recent study of 1,475 adolescents across Europe reported that intelligence, as measured by a cognitive test, was associated with a panoply of biological features, including known genetic markers, epigenetic modifications of a gene involved in dopamine signaling, gray matter density in the striatum (a major player in motor control and reward response), and the striatum’s activation in response to a surprising reward cue.2

Understanding human smarts has been made even more challenging by the efforts of some inside and outside the field to introduce pseudoscientific concepts into the mix. The study of intelligence has at times been tainted by eugenics, “scientific” racism, and sexism, for example. As recently as 2014, former New York Times science writer Nicholas Wade drew fire for what critics characterized as misinterpreting genetics studies to suggest race could correlate with average differences in intelligence and other traits. The legitimacy of such analyses aside, for today’s intelligence researchers, categorization isn’t the end goal.

“The reason I’m interested in fluid intelligence tests”—which home in on problem-solving ability rather than learned knowledge—“is not really because I want to know what makes one person do better than another,” says University of Cambridge neuroscientist John Duncan. “It’s important for everybody because these functions are there in everybody’s mind, and it would be very nice to know how they work.”

In search of g
G, and the IQ (or intelligence quotient) tests that aim to measure it, have proven remarkably durable since Spearman’s time. Multiple studies have backed his finding of a measurable correlation among an individual’s performances on disparate cognitive tests. And g interests researchers because its effects extend far beyond academic and work performance. In study after study, higher IQ is tied to outcomes such as greater income and educational attainment, as well as to lower risks of chronic disease, disability, and early death.

Early studies of people with brain injuries posited the frontal lobes as vital to problem solving. In the late 1980s, Richard Haier of the University of California, Irvine, and colleagues imaged the brains of people as they solved abstract reasoning puzzles, which revved up specific areas in the frontal, parietal, and occipital lobes of the brain, as well as communication between them. The frontal lobes are associated with planning and attention; the parietal lobes interpret sensory information; and the occipital lobe processes visual information—all abilities useful in puzzle solving. But more activity didn’t mean greater cognitive prowess, notes Haier. “The people with the highest test scores actually showed the lowest brain activity, suggesting that it wasn’t how hard your brain was working that made you smart, but how efficiently your brain was working.”

In 2007, based on this and other neuroimaging studies, Haier and the University of New Mexico’s Rex Jung proposed the parieto-frontal integration theory, arguing that the brain areas identified in Haier’s and others’ studies are central to intelligence.3 (See infographic.) But Haier and other researchers have since found that patterns of activation vary, even between people of similar intelligence, when performing the same mental tasks. This suggests, he says, that there are different pathways that the brain can use to reach the same end point.

The people with the highest test scores actually showed the lowest brain activity, suggest­ing that it wasn’t how hard your brain was working that made you smart, but how effi­ciently your brain was working.

—Richard Haier, University of California, Irvine
Another problem with locating the seat of g via brain imaging, some argue, is that our instruments are still simply too crude to yield satisfying answers. Haier’s PET scans in the 1980s, for instance, tracked radiolabeled glucose through the brain to get a picture of metabolic activity during a 30-minute window in an organ whose cells communicate with one another on the order of milliseconds. And modern fMRI scans, while more temporally precise, merely track blood flow through the brain, not the actual activity of individual neurons. “It’s like if you’re trying to understand the principles of human speech and all you could listen to is the volume of noise coming out of a whole city,” Duncan says.

Models of intelligence
Beyond simply not having sharp-enough tools, some researchers are beginning to question the premise that the key to intelligence can be seen in the anatomical features of the brain. “The dominant view of the brain in the 20th century was anatomy is destiny,” says neurophysiologist Earl Miller of MIT’s Picower Institute for Learning and Memory; but it’s become clear over the past 10 to 15 years that this view is too simplistic.

Researchers have begun to propose alternative properties of the brain that might undergird intelligence. Miller, for example, has been tracking the behavior of brain waves, which arise when multiple neurons fire in synchrony, for clues about IQ. In one recent study, he and colleagues hooked up EEG electrodes to the heads of monkeys that had been taught to release a bar if they saw the same sequence of objects they’d seen a moment before. The task relied on working memory, the ability to access and store bits of relevant information, and it caused bursts of high-frequency γ and lower-frequency β waves. When the bursts weren’t synchronized at the usual points during the task, the animals made errors.4

Parsing Smartness
The biological basis for variations in human intelligence is not well understood, but research in neuroscience, psychology, and other fields has begun to yield insights into what may undergird such differences. One well-known hypothesis, backed by evidence from brain scans and studies of people with brain lesions, proposes that intelligence is seated in particular clusters of neurons in the brain, many of them located in the prefrontal and parietal cortices. Known as parieto-frontal integration theory, the hypothesis holds that the structure of these areas, their activity, and the connections between them vary among individuals and correlate with performance on cognitive tasks.


Researchers have also proposed a slew of other hypotheses to explain individual variation in human intelligence. The variety of proposed mechanisms underlines the scientific uncertainty about just how intelligence arises. Below are three of these hypotheses, each backed by experimental evidence and computational modeling:

Miller suspects that these waves “direct traffic” in the brain, ensuring that neural signals reach the appropriate neurons when they need to. “Gamma is bottom-up—it carries the contents of what you’re thinking about. And beta is top-down—it carries the control signals that determine what you think about,” he says. “If your beta isn’t strong enough to control the gamma, you get a brain that can’t filter out distractions.”

The overall pattern of brain communications is another candidate to explain intelligence. Earlier this year, Aron Barbey, a psychology researcher at the University of Illinois at Urbana-Champaign, proposed this idea, which he calls the network neuroscience theory,5 citing studies that used techniques such as diffusion tensor MRI to trace the connections among brain regions. Barbey is far from the first to suggest that the ability of different parts of the brain to communicate with one another is central to intelligence, but the whole-brain nature of network neuroscience theory contrasts with more established models, such as parieto-frontal integration theory, that focus on specific regions. “General intelligence originates from individual differences in the system-wide topology and dynamics of the human brain,” Barbey tells The Scientist.

Emiliano Santarnecchi of Harvard University and Simone Rossi of the University of Siena in Italy also argue that intelligence is a property of the whole brain, but they see overall plasticity as the key to smarts. Plasticity, the brain’s ability to reorganize, can be measured via the nature of the brain activity generated in response to transcranial magnetic or electrical stimulation, Santarnecchi says. “There are individuals that generate a response that is only with the other nodes of the same network that we target,” he says. And then there are people in whose brains “the signal starts propagating everywhere.” His group has found that higher intelligence, as measured by IQ tests, corresponds with a more network-specific response, which Santarnecchi hypothesizes “reflects some sort of . . . higher efficiency in more-intelligent brains.”

Despite the hints uncovered about how intelligence comes about, Santarnecchi finds himself frustrated that research has not yielded more-concrete answers about what he considers one of neuroscience’s central problems. To address that shortcoming, he’s now spearheading a consortium of cognitive neuroscientists, engineers, evolutionary biologists, and researchers from other disciplines to discuss approaches for getting at the biological basis of intelligence. Santarnecchi would like to see manipulations of the brain—through noninvasive stimulation, for example—to get at causal relationships between brain activity and cognitive performance. “We know a lot now about intelligence,” he says. “But I think it’s time to try to answer the question in a different way.”

Putting the g in genes
As neuroscientists interrogate the brain for how its structure and activity relate to intelligence, geneticists have approached intelligence from a different angle. Based on what they’ve found so far, psychology researcher Sophie von Stumm of the London School of Economics estimates that about 25 percent of individual variation in intelligence will turn out to be explained by single nucleotide polymorphisms in the genome.

To find genes at play in intelligence, researchers have scanned the genomes of thousands of people. Earlier this year, for example, economist Daniel Benjamin of the University of Southern California and colleagues crunched data on upwards of 1.1 million subjects of European descent and identified more than 1,200 sites in the genome associated with educational attainment, a common proxy for intelligence.7 Because subjects in many types of medical studies in which DNA is sequenced are asked about their educational status to help control for socioeconomic factors in later analyses, such data are plentiful. And while the correlation between education and intelligence is imperfect, “intelligence and school achievement are highly correlated, and genetically very highly correlated,” says von Stumm, who recently coauthored a review on the genetics of intelligence.8 Altogether, the genes identified so far accounted for about 11 percent of individual variation in education level in Benjamin’s study; household income, by comparison, explained 7 percent.

Such genome-wide association studies (GWAS) have been limited in what they reveal about the biology at work in intelligence and educational attainment, as much remains to be learned about the genes thus far identified. But there have been hints, says Benjamin. For example, the genes with known functions that turned up in his recent study “seem to be involved in pretty much all aspects of brain development and neuron-to-neuron communication, but not glial cells,” Benjamin says. Because glial cells affect how quickly neurons transmit signals to one another, this suggests that firing speed is not a factor in differences in educational attainment.

Other genes seem to link intelligence to various brain diseases. For example, in a preprint GWAS published last year, Danielle Posthuma of VU University Amsterdam and colleagues identified associations between cognitive test scores and variants that are negatively correlated with depression, ADHD, and schizophrenia, indicating a possible mechanism for known correlations between intelligence and lower risk for mental disorders. The researchers also found intelligence-associated variants that are positively correlated with autism.9

Von Stumm is skeptical that genetic data will yield useful information in the near term about how intelligence results from the brain’s structure or function. But GWAS can yield insights into intelligence in less direct ways. Based on their results, Benjamin and colleagues devised a polygenic score that correlates with education level. Although it’s not strong enough to be used to predict individuals’ abilities, Benjamin says the score should prove useful for researchers, as it enables them to control for genetics in analyses that aim to identify environmental factors that influence intelligence. “Our research will allow for better answers to questions about what kinds of environmental interventions improve student outcomes,” he says.
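
The mechanics of such a score are simple in outline: it is a weighted sum of a person's effect-allele counts, with the weights taken from GWAS effect-size estimates. The sketch below illustrates the idea; the SNP IDs and weights are invented for illustration, and a real score such as Benjamin's sums over thousands of variants.

```python
# Hypothetical sketch of a polygenic score: sum each SNP's GWAS effect size
# times the number of effect-allele copies (0, 1, or 2) a person carries.
# All SNP IDs and weights here are invented, not taken from any real study.

def polygenic_score(genotype, effect_sizes):
    """Weighted sum of effect-allele counts across SNPs."""
    return sum(effect_sizes[snp] * copies for snp, copies in genotype.items())

# Invented per-allele effect sizes.
effect_sizes = {"rs0001": 0.021, "rs0002": -0.013, "rs0003": 0.008}

# One individual's genotype: copies of the effect allele at each SNP.
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

score = polygenic_score(genotype, effect_sizes)  # ≈ 0.029
```

Because the score is a single number per person, it can be entered as a covariate in a regression, which is what lets researchers hold genetics roughly constant while testing environmental effects.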

Von Stumm plans to use Benjamin’s polygenic score to piece together how genes and environment interact. “We can test directly for the first time,” says von Stumm, “if children who grow up in impoverished families. . . with fewer resources, if their genetic differences are as predictive of their school achievement as children who grow up in wealthier families, who have all the possibilities in the world to grab onto learning opportunities that suit their genetic predispositions.”

Upping IQ

The idea of manipulating intelligence is enticing, and there has been no shortage of efforts to do just that. One tactic that once seemed to hold some promise for increasing intelligence is the use of brain-training games. With practice, players improve their performance on these simple video games, which rely on skills such as quick reaction time or short-term memorization. But reviews of numerous studies found no good evidence that such games bolster overall cognitive abilities, and brain training of this kind is now generally considered a disappointment.

Transcranial brain stimulation, which sends mild electrical or magnetic pulses through the skull, has shown some potential for enhancing intelligence. In 2015, for example, Santarnecchi and colleagues found that subjects solved puzzles faster with one type of transcranial alternating current stimulation, while a 2015 meta-analysis found “significant and reliable effects” of another type of electrical stimulation, transcranial direct current stimulation (Curr Biol, 23:1449–53).

While magnetic stimulation has yielded similarly enticing results, studies of both electrical and magnetic stimulation have also raised doubts about the effectiveness of these techniques, and even researchers who believe they can improve cognitive performance admit that we’re a long way from using them clinically.

One proven way researchers know to increase intelligence is good old-fashioned education. In a meta-analysis published earlier this year, a team led by then University of Edinburgh neuropsychologist Stuart Ritchie (now at King’s College London) sifted out confounding factors from data reported in multiple studies and found that schooling—regardless of age or level of education—raises IQ by an average of one to five points per year (Psychol Sci, 29:1358–69). Researchers, including University of British Columbia developmental cognitive neuroscientist Adele Diamond, are working to understand what elements of education are most beneficial to brains.

“Intelligence is predictive of a whole host of important things,” such as educational attainment, career success, and physical and mental health, Ritchie writes in an email to The Scientist, “so it would be extremely useful if we had reliable ways of raising it.”

Thinking about thinking
It’s not just the biology of intelligence that remains a black box; researchers are still trying to wrap their minds around the concept itself. Indeed, the idea that g represents a singular property of the brain has been challenged. While g’s usefulness and predictive power as an index are widely accepted, proponents of alternative models see it as an average or summation of cognitive abilities, not a cause.

Last year, University of Cambridge neuroscientist Rogier Kievit and colleagues published a study that suggests IQ is an index of the collective strength of more-specialized cognitive skills that reinforce one another. The results were based on vocabulary and visual reasoning test scores collected from hundreds of UK residents in their late teens and early 20s, and again from the same subjects about a year and a half later. With data on the same people at two time points, Kievit says, the researchers could examine whether performance on one cognitive skill, such as vocabulary or reasoning, could predict the rate of improvement in another domain. Using algorithms to predict what changes should have occurred under various models of intelligence, the researchers concluded that the best fit was mutualism, the idea that different cognitive abilities support one another in positive feedback loops.10
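
The mutualism dynamic can be sketched with a toy simulation (every parameter below is invented for illustration, not taken from Kievit's paper): each ability grows logistically on its own, and a coupling term gives it an extra boost proportional to the other ability's current level, producing the positive feedback loop the model describes.

```python
# Toy sketch of mutualism: two cognitive abilities each grow logistically,
# and each gets an extra boost proportional to the other's current level.
# All parameters are invented; this is not the published model's fit.

def simulate(coupling, steps=4000, dt=0.01, rate=1.0, ceiling=1.0):
    vocab, reasoning = 0.1, 0.2  # arbitrary starting ability levels
    for _ in range(steps):
        dv = rate * vocab * (1 - vocab / ceiling) + coupling * vocab * reasoning
        dr = rate * reasoning * (1 - reasoning / ceiling) + coupling * reasoning * vocab
        vocab += dt * dv
        reasoning += dt * dr
    return vocab, reasoning

alone = simulate(coupling=0.0)    # no mutualism: plain logistic growth
mutual = simulate(coupling=0.05)  # abilities reinforce each other
# With coupling, both abilities settle above the ceiling each reaches alone.
```

The qualitative point is the one the model makes: with any positive coupling, a strength in one domain raises the equilibrium level of the other, so measured abilities end up correlated even without a single underlying g.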

In 2016, Andrew Conway of Claremont Graduate University in California and Kristóf Kovács, now of Eötvös Loránd University in Hungary, made a different argument for the involvement of multiple cognitive processes in intelligence.11 In their model, application-specific neural networks—those needed for doing simple math or navigating an environment, for example—and high-level, general-purpose executive processes, such as breaking down a problem into a series of small, manageable blocks, each play a role in helping a person complete cognitive tasks. It’s the fact that a variety of tasks tap into the same executive processes that explains why individuals’ performance on disparate tasks correlates, and it’s the average strength of these higher-order processes, not a singular ability, that’s measured by g, the researchers argue. Neuroscientists might make more progress in understanding intelligence by looking for the features of the brain that carry out particular executive processes, rather than for the seat of a single g factor, Kovács says.

As researchers grapple with the intractable phenomenon of intelligence, a philosophical question arises: Is our species smart enough to understand the basis of our own intelligence? While those in the field generally agree that science has a long way to go to make sense of how we think, most express cautious optimism that the coming decades will yield major insights.

“We see now the development, not only of mapping brain connections in human beings . . . we’re also beginning to see synapse mapping,” Haier says. “This will take our understanding of the basic biological mechanisms of things like intelligence . . . to a whole new level.”

This article was originally published in The Scientist. Read the original article.

How can you eat dairy if you lack the gene for digesting it? Fermented milk may be key, ancient Mongolian study suggests

More than 3000 years ago, herds of horses, sheep, and cows or yaks dotted the steppes of Mongolia. Their human caretakers ate the livestock and honored them by burying animal bones with their own. Now, a cutting-edge analysis of deposits on ancient teeth shows that early Mongolians milked their animals as well. That may not seem surprising. But DNA analysis of the same ancient individuals shows that as adults they lacked the ability to digest lactose, a key sugar in milk.

The findings present a puzzle, challenging an oft-told tale of how lactose tolerance evolved. From other studies, “We know now dairying was practiced 4000 years before we see lactase persistence,” says Christina Warinner of the Max Planck Institute for the Science of Human History (MPI-SHH) in Jena, Germany. “Mongolia shows us how.”

As University of Copenhagen paleoproteomicist Matthew Collins, who was not on the team, puts it, “We thought we understood everything, but then we got more data and see how naïve we were.”

Most people in the world lose the ability to digest lactose after childhood. But in pastoralist populations, the story went, culture and DNA changed hand in hand. Mutations that allowed people to digest milk as adults—an ability known as lactase persistence—would have given their carriers an advantage, enabling them to access a rich, year-round source of fat and protein. Dairying spread along with the adaptation, explaining why it is so common in herding populations in Europe, east and north Africa, and the Middle East.

But a closer look at cultural practices around the world has challenged that picture. In modern Mongolia, for example, traditional herders get more than a third of their calories from dairy products. They milk seven kinds of mammals, yielding diverse cheeses, yogurts, and other fermented milk products, including alcohol made from mare’s milk. “If you can milk it, they do in Mongolia,” Warinner says. And yet 95% of those people are lactose intolerant.

Warinner wondered whether dairying was a recent development in Mongolia or whether early Mongolians had lactase persistence and then lost it in a population turnover. Ancient people in the region might have picked up such mutations from the Yamnaya herders—about a third of whom were lactase persistent—who swept east and west from the steppes of central Eurasia 5000 years ago.

To find answers, she and her team analyzed human remains from six sites in northern Mongolia that belonged to the Deer Stone-Khirigsuur Complex, a culture that between 1300 and 900 B.C.E. built burial mounds marked with standing stones. Because those nomads rarely built permanent structures, and constant winds strip away the soil along with remains such as pot fragments and trash pits, archaeological evidence for diet is scarce. So MPI-SHH researcher Shevan Wilkin took dental calculus—the hard plaque that builds up on teeth—from nine skeletons and tested it for key proteins. “Proteomics on calculus is one of the few ways you can get at diet without middens or hearths,” Warinner says.

The calculus yielded milk proteins from sheep, goats, and bovines such as yak or cow. Yet analysis of DNA from teeth and leg bones showed the herders were lactose intolerant. And they carried only a trace of DNA from the Yamnaya, as the team reports in a paper published this week in the Proceedings of the National Academy of Sciences (PNAS). “They’re exploiting these animals for dairying even though they’re not lactase persistent,” Collins says.

That disconnect between dairy and DNA isn’t limited to Mongolia. Jessica Hendy, a co-author of the PNAS paper, recently found milk proteins on pots at Çatalhöyük in Turkey, which at 9000 years old dates to the beginnings of domestication, 4 millennia before lactase persistence appears. “There seem to be milk proteins popping up all over the place, and the wonderful cultural evolution we expected to see isn’t happening,” Collins says.

Modern Mongolians digest dairy by letting bacteria break down the lactose for them, turning milk into yogurt and cheese, along with a rich suite of dairy products unknown in the Western diet. Ancient pastoralists may have adopted similar strategies. “Control and manipulation of microbes is core to this whole transformation,” Warinner says. “There’s an intense control of microbes inside and outside their bodies that enables them to have a dairying culture.”

Geneticists who once regarded lactase persistence and dairying as closely linked are going back to the drawing board to understand why the adaptation is common—and apparently selected for—in some dairying populations but totally absent in others. “Why is there a signal of natural selection at all if there was already a cultural solution?” asks Joachim Burger, a geneticist at Johannes Gutenberg University in Mainz, Germany, who was not part of the study.

How dairying reached Mongolia is also a puzzle. The Yamnaya’s widespread genetic signature shows they replaced many European and Asian populations in the Bronze Age. But they seem to have stopped at the Altai Mountains, to the west of Mongolia. “Culturally, it’s a really dynamic period, but the people themselves don’t seem to be changing,” Warinner says. She thinks even though the Yamnaya didn’t contribute their genes to East Asia, they did spread their culture, including dairying. “It’s a local population that has adopted the steppe way of life.”

The study’s surprising results have given Warinner her next goal: to understand how Mongolians and other traditional dairying cultures harnessed microbes to digest milk and render lactose tolerance irrelevant—and to figure out which of hundreds of kinds of bacteria make the difference.

This article was originally published in Science. Read the original article.

Time-restricted eating can overcome the bad effects of faulty genes and unhealthy diet

Everything in our body is programmed to run on a 24-hour, or circadian, timetable that repeats every day. Nearly a dozen different genes work together to produce this 24-hour circadian cycle. These clocks are present in all of our organs, tissues and even in every cell. These internal clocks tell us when to sleep, eat, be physically active and fight diseases. As long as this internal timing system works well and we obey it, we stay healthy.

But what happens when our clocks are broken or begin to malfunction?

Mice that lack critical clock genes are clueless about when to do their daily tasks; they eat randomly during day and night and succumb to obesity, metabolic disease, chronic inflammation and many more diseases.

Even in humans, genetic studies point to several gene mutations that compromise our circadian clocks and make us prone to an array of diseases from obesity to cancer. When these faulty clock genes are combined with an unhealthy diet, the risks and severity of these diseases skyrocket.

My lab studies how circadian clocks work and how they readjust when we fly from one time zone to another or when we switch between day and night shift. We knew that the first meal of the day synchronizes our circadian clock to our daily routine. So, we wanted to learn more about timing of meals and the implications for health.

Time-restricted eating

Eating within an eight- to 12-hour window could diminish the impact of a bad diet and a broken body clock.
Credit: amornchaijj/

A few years ago, we made a surprising discovery: when mice are allowed to eat within a consistent eight- to 12-hour period without reducing their daily caloric intake, they remain healthy and do not succumb to disease, even when they are fed unhealthy food rich in sugar or fat.

The benefit surpasses any modern medicine. Such an eating pattern – popularly called time restricted eating – also helps overweight and obese humans reduce body weight and lower their risk for many chronic diseases.

Decades of research had taught us that what and how much we eat matters. But the new discovery that when we eat also matters raised many questions.

How does simply restricting your eating times alter so many elements of personal health? The timing of eating is like an external time cue that signals the internal circadian clock to keep a balance between nourishment and repair. During the eating period, metabolism was geared toward nourishment. The gut and liver better absorbed nutrients from food, and used some for fueling the body while storing the rest.

During the fasting period, metabolism switched to rejuvenation. Unwanted chemicals were broken down, stored fat was burned and damaged cells were repaired. The next day, after the first bite, the switch flipped from rejuvenation to nourishment. This rhythm continued every day. We thought that timing of eating and fasting was giving cues to the internal clock and the clock was flipping the switch between nourishment and rejuvenation every day. However, it was not clear if a normal circadian clock was necessary to mediate the benefits of time restricted eating or whether just restricted eating times alone could flip the daily switch.

Eating late at night can disrupt circadian rhythms and raise the risk of chronic diseases including obesity.
Credit: Ulza/

What if you have a broken internal clock?

In a new study, we took genetically engineered animals that lacked a functioning circadian clock either in the liver or in every cell of the body.

These mice, with their faulty clocks, don’t know when to eat and when to stay away from food. So, they eat randomly and develop multiple diseases. The disease severity increases if they are fed an unhealthy diet.

To test if time restricted eating works with a damaged or dysfunctional clock, we simply divided these mutant mice into two groups – one group got to eat whenever it wanted, and the other was given access to food only during restricted times. Both groups ate the same number of calories, but the restricted eaters finished their daily ration within nine to 10 hours.

We thought that even though these mice had restricted eating times, having the bad clock gene would doom them to obesity and many metabolic diseases. But to our utter surprise, the restricted eating times trumped the bad effects of the faulty clock genes. The mice without a functioning clock, which were destined to be morbidly sick, were as healthy as normal mice when they consumed food within a set period.

The results have many implications for human health.

The good news

First of all, it raises a big question: What is the connection between our genetically encoded circadian clock timing system and external time of eating? Do these two different timing systems work together like co-pilots in a plane, so that even if one is incapacitated, the other one can still fly the plane?

Deep analyses of mice in our experiment revealed that time restricted eating triggers many internal programs that improve our body’s resilience — enabling us to fight off any unhealthy consequences of bad nutrition or any other stress. This boost in internal resilience may be the key to these surprising health benefits.

As we age, our body clocks become less accurate, and we become more prone to chronic diseases. Keeping regular, restricted eating times can keep us healthy longer.
Credit: LightField Studios/

For human health the message is simple, as I say in my new book “The Circadian Code.” Even if we have faulty circadian genes as in many congenital diseases, such as Prader-Willi syndrome or Smith-Magenis syndrome, or carry a malfunctioning copy of nearly a dozen different clock genes, as long as we have some discipline and restrict eating times, we can still fend off the bad effects of bad genes.

Similarly, other researchers have shown as we get older our circadian clock system weakens. The genes don’t function correctly so our sleep-wake cycles are disrupted — just as if we had a faulty clock. So, lifestyle becomes more important for older people who are at higher risk for many chronic diseases such as diabetes, heart disease, high cholesterol, fatty liver disease and cancer.

As a potential translation to human health, we have created a website where anyone from anywhere in the world can sign up for an academic study and download a free app called MyCircadianClock and start self-monitoring the timing of eating and sleeping.

Research has shown that our daily eating, sleeping and activity patterns can affect health and determine our long-term risk for various diseases. This app is part of a research project that uses smartphones to advance research into biological rhythms in the real world, while also helping you understand your body’s rhythms.
Credit: BY-SA

The app provides tips and guidance on how to adopt a time restricted eating lifestyle to improve health and prevent or manage chronic diseases. By collecting data from people with varying risk for disease, we can explore how eating times can help to increase our healthy lifespan.

We understand everyone’s lifestyle around home, work and other responsibilities is unique and one size may not fit all. So, we hope people can use the app and some tips to build their personalized circadian routine. By selecting their own time window of eight to 12 hours for eating that best fits their lifestyle, they may reap many health benefits.



This article was originally published in The Conversation. Read the original article.

This broken gene may have turned our ancestors into marathoners—and helped humans conquer the world

Despite our couch potato lifestyles, long-distance running is in our genes. A new study in mice pinpoints how a stretch of DNA likely turned our ancestors into marathoners, giving us the endurance to conquer territory, evade predators, and eventually dominate the planet.

“This is very convincing evidence,” says Daniel Lieberman, a human evolutionary biologist at Harvard University who was not involved with the work. “It’s a nice piece of the puzzle about how humans came to be so successful.”

Human ancestors first distinguished themselves from other primates by their unusual way of hunting prey. Instead of depending on a quick spurt of energy—like a cheetah—they simply outlasted antelopes and other escaping animals, chasing them until they were too exhausted to keep running. This ability would have become especially useful as the climate changed 3 million years ago, and forested areas of Africa dried up and became savannas. Lieberman and others have identified skeletal changes that helped make such long-distance running possible, like longer legs. Others have also proposed that our ancestors’ loss of fur and expansion of sweat glands helped keep these runners cool.

Still, scientists don’t know much about the cellular changes that gave us better endurance, says Herman Pontzer, an evolutionary anthropologist at Duke University in Durham, North Carolina, who was not involved with the work.

Some clues came 20 years ago, when Ajit Varki, a physician-scientist at the University of California, San Diego (UCSD), and colleagues unearthed one of the first genetic differences between humans and chimps: a gene called CMP-Neu5Ac Hydroxylase (CMAH). Other primates have this gene, which helps build a sugar molecule called sialic acid that sits on cell surfaces. But humans have a broken version of CMAH, so they don’t make this sugar, the team reported. Since then, Varki has implicated sialic acid in inflammation and resistance to malaria.

In the new study, Varki’s team explored whether CMAH has any impact on muscles and running ability, in part because mice bred with a muscular dystrophy–like syndrome get worse when they don’t have this gene. UCSD graduate student Jonathan Okerblom put mice carrying either a normal or a broken version of CMAH (the latter akin to the human version) on small treadmills. UCSD physiologist Ellen Breen closely examined their leg muscles before and after running different distances, some after 2 weeks and some after 1 month.

After training, the mice with the human version of the CMAH gene ran 12% faster and 20% longer than the other mice, the team reports today in the Proceedings of the Royal Society B. “Nike would pay a lot of money” for that kind of increase in performance in their sponsored athletes, Lieberman says.

The team discovered that the “humanized” mice had more tiny blood vessels branching into their leg muscles, and—even when isolated in a dish—the muscles kept contracting much longer than those from the other mice. The humanlike mouse muscles used oxygen more efficiently as well. But the researchers still have no idea how the sugar molecule affects endurance, as it serves many functions in a cell.

Similar improvements probably benefitted our human ancestors, says Andrew Best, a biological anthropology graduate student at the University of Massachusetts (UMass) in Amherst, who was not involved with the work. Varki’s team calculated that this genetic change happened 2 million to 3 million years ago, based on the genetic differences among primates and other animals.

That’s “slightly earlier than I’d have expected for such a large shift in [endurance],” says Best, as it predates some of the skeletal modifications, which don’t show up in the fossil record until much later. But to Pontzer, the date makes sense, as these ancestors needed endurance for walking and for digging up food. “Maybe it’s more than about running,” he notes.

However, “Mice are not humans or primates,” says Best’s adviser at UMass, Jason Kamilar, a biological anthropologist also not involved with the new work. “The genetic mechanisms in mice may not necessarily translate to humans or other primates.”

Either way, says Pontzer, the study is exciting because it gets researchers looking beyond fossils and into what might actually have gone on in the bodies of ancient animals. “This is really energizing work; it tells us how much is out there to do.”



This article was originally published in Science. Read the original article.

Language Gene Dethroned

The gene FOXP2, known to be important to language ability, has not undergone strong selection in humans over the past few hundred thousand years, a new genomic analysis finds. The results, published yesterday (August 2) in Cell, overturn those of a 2002 study that found evidence of a rapid and recent spread of a FOXP2 variant through human populations.

Defects in FOXP2 were discovered in a family with multiple members who had speech impairments, and the gene became known for its importance in language ability. By appearing to show that speech-friendly variants of the gene had swept through the human population relatively recently, the 2002 study fed a popular idea that the gene was key to the evolution of language and for setting Homo sapiens apart from other animals. Subsequent studies have found “human” FOXP2 variants in Neanderthals and Denisovans, however.

With the release of the new results, “It’s good that it is now clear there is actually no sweep signal at FOXP2,” Wolfgang Enard of Ludwig Maximilian University of Munich in Germany, a coauthor of the 2002 study, tells Nature.

Enard and his coauthors based their original study on just 20 people, Nature notes, a small number of whom had African ancestry. The new analysis used much larger, more diverse datasets, and found no evidence of recent selection pressure on FOXP2. (The results were inconclusive for signs of selection prior to 200,000 years ago.) The authors of the new paper suggest that the 2002 study’s contradictory results may be explained by its sample’s small size and lack of diversity.

“If you’re asking a question about the evolution of humans as a species,” Elizabeth Atkinson, a population geneticist at the Broad Institute of Harvard and MIT and a coauthor of the new paper, tells Nature, “you really do need to include a diverse set of people.”

Do elite ‘power sport’ athletes have a genetic advantage?

A specific gene variant is more frequent among elite athletes in power sports, reports a study in the October issue of The Journal of Strength and Conditioning Research, official research journal of the National Strength and Conditioning Association (NSCA). The journal is published by Lippincott Williams & Wilkins, a part of Wolters Kluwer Health.

A “functional polymorphism” of the angiotensinogen (AGT) gene is two to three times more common in elite power athletes, compared to nonathletes or even elite endurance athletes, according to the new research by Paweł Cięszczyk, PhD, of the University of Szczecin, Poland, and colleagues. They write, “[T]he M235T variant in the AGT may be one of the genetic markers to investigate when an assessment of predisposition to power sports is made.”

Gene Variant More Common in Elite Power Athletes

The researchers analyzed DNA samples from two groups of elite Polish athletes: 100 power-oriented athletes from sports such as power-lifting, short-distance running, and jumping; and 123 endurance athletes, such as long-distance runners, swimmers, and rowers. All athletes competed at the international level — e.g., World and European Championships, World Cups, or Olympic Games. A group of 344 nonathletes was studied for comparison.

The analysis focused on the genotype of the M235T polymorphism of the AGT gene. “Polymorphisms” are sites in a gene that can appear in two different forms (alleles). A previous study found that the “C” allele of the AGT gene (as opposed to the “T” allele) was more frequent among elite athletes in power sports.

The genetic tests found that elite power athletes were more likely to have two copies of the C allele — in other words, they inherited the C allele from both parents. This “CC” genotype was found in 40 percent of the power athletes, compared to 13 percent of endurance athletes and 18 percent of nonathletes.

Power athletes were three times more likely to have the CC genotype compared to endurance athletes, and twice as likely compared to nonathletes. At least one copy of the C allele was present in 55.5 percent of power athletes, compared to about 40 percent of endurance athletes and nonathletes.
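The “three times” and “twice as likely” figures can be checked against the prevalences quoted above; a minimal sketch, treating the comparisons as simple prevalence ratios (the study itself may report odds ratios or other adjusted measures):

```python
# CC-genotype prevalences reported in the article
cc_power = 0.40       # elite power athletes
cc_endurance = 0.13   # elite endurance athletes
cc_nonathlete = 0.18  # nonathlete controls

# Prevalence ratios implied by the reported percentages
print(round(cc_power / cc_endurance, 1))   # roughly 3x vs. endurance athletes
print(round(cc_power / cc_nonathlete, 1))  # roughly 2x vs. nonathletes
```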

But Functional Significance Not Yet Clear

In a further analysis, the researchers found no differences in genotype between “top-elite” athletes who had won medals in international-level competition and elite-level athletes who were not medalists.

The new study is the first to replicate previous, independent research showing an increased rate of the CC genotype of the AGT gene among power athletes on Spanish national teams. That study also found about a 40 percent prevalence of the CC genotype among elite power athletes.

The AGT gene is part of the renin-angiotensin system, which plays essential roles in regulating blood pressure, body salt, and fluid balance. There are several possible ways in which the CC genotype might predispose to improved power and strength capacity — including increased production of angiotensin II, which is crucial for muscle performance. However, the researchers emphasize that the “functional consequences” of the M235T polymorphism remain to be determined.

The study contributes to the rapidly evolving body of research on genetic factors related to exercise, fitness, and performance — which may one day have implications for identification and training of potential elite-level athletes. Dr Cięszczyk and coauthors conclude, “Identifying genetic characteristics related to athletic excellence or individual predisposition to types of sports with different demands (power or endurance oriented) or even sport specialty may be decisive in recognizing athletic talent and probably will allow for greater specificity in steering of sports training programs.”

This article was originally published by NIH. Read the original article.

Coffee Drinkers Are More Likely To Live Longer. Decaf May Do The Trick, Too

Coffee is far from a vice.

There’s now lots of evidence pointing to its health benefits, including a possible longevity boost for those of us with a daily coffee habit.

The latest findings come from a study published Monday in JAMA Internal Medicine that included about a half-million people in England, Scotland and Wales. Participants ranged in age from 38 to 73.

“We found that people who drank two to three cups per day had about a 12 percent lower risk of death compared to non-coffee drinkers” during the decade-long study, says Erikka Loftfield, a research fellow at the National Cancer Institute.

This was true among all coffee drinkers — even those who were determined to be slow metabolizers of caffeine. (These are people who tend to be more sensitive to the effects of caffeine.) And the association held up among drinkers of decaffeinated coffee, too.

In the U.S., there are similar findings linking higher consumption of coffee to a lower risk of early death in African-Americans, Japanese-Americans, Latinos and white adults, both men and women. A daily coffee habit is also linked to a decreased risk of stroke and Type 2 diabetes.

What is it about coffee that may be protective? It’s not likely to be the caffeine. While studies don’t prove that coffee extends life, several studies have suggested a longevity boost among drinkers of decaf as well as regular coffee.

So, researchers have turned their attention to the bean.

“The coffee bean itself is loaded with many different nutrients and phytochemicals,” nutrition researcher Walter Willett of the Harvard School of Public Health told us in 2015. These compounds include lignans, quinides and magnesium, some of which may help reduce insulin resistance and inflammation. “My guess is that they’re working together to have some of these benefits,” Willett said. (He’s the author of a study that points to a 15 percent lower risk of early death among men and women who drink coffee, compared with those who do not consume it.)

“Coffee, with its thousand chemicals, includes a number of polyphenol-like, antioxidant-rich compounds,” says Christopher Gardner, who directs nutrition studies at the Stanford Prevention Research Center. He says there’s so much evidence supporting the idea that coffee can be a healthy part of your diet, it’s now included in the U.S. Dietary Guidelines. In 2015, the experts behind the guidelines concluded that a daily coffee habit may help protect against Type 2 diabetes and cardiovascular disease.

Gardner says part of the benefit of coffee may be linked to something profoundly simple: It brings people joy.

“Think about when you’re drinking coffee — aren’t you stopping and relaxing a little bit?” Gardner asks.

He says it’s such a simple pleasure. “I just love holding that hot beverage in my hand. It’s the morning ritual,” he says. He drinks at least three cups a day.

So, how did coffee achieve such an image makeover? It wasn’t too long ago that coffee was considered a vice.

Gardner says the bad rap goes back to a time when people who drank coffee were also very likely to smoke cigarettes.

So, when earlier epidemiological studies suggested that coffee consumption was associated with health risks, researchers were thrown off. It wasn’t until they separated these two habits that a completely different picture emerged.

“Smoking was the cause of the association,” Gardner says. “Ever since they disentangled smoking, coffee wasn’t just null, it was [shown to be] beneficial.”

This article was originally published by NPR. Read the original article.