The Difference Between Night and Day in Cognition by Sonny


With few exceptions, every cell in the body has its own internal clock that responds to the Earth’s 24-hour day-night cycle; these are our circadian rhythms. Cells whose activity determines an organ’s function are often timed differently, but they all respond like clockwork to the arrival and departure of sunlight, as directed by the brain. It’s an amazing feat of chemical coordination, first discovered in cyanobacteria, tiny bacteria that rely on plant-like photosynthesis. The chemical and genetic details are quite well known for such a complicated system, but they boil down to molecular events synchronized with Earth’s rotation on its axis (see Dierickx, 2018, for a review). Those details are beyond the scope of this commentary, but suffice it to say that circadian rhythms also affect behavior. For instance, humans are diurnal and don’t normally eat at night because the release of glucose from the liver and its “offsetting” hormone, pancreatic insulin, are coordinated with solar cycles; this balance (or imbalance) is transmitted to brain regions directing the drive for food. The brain’s central pacemaker, the “master conductor” that directs this cycling, is the suprachiasmatic nucleus (SCN), a cluster of neurons with direct connections to the retina’s light-sensitive cells. The SCN’s output reaches virtually every part of the brain and body.

Importantly, gene expression, the first step in producing the proteins that guide every cell’s function, also follows circadian rhythms (see Kim, 2018, to learn how this is done). As a result, medical science pays attention to circadian rhythms when judging drug effectiveness; it can matter whether a drug is taken in the morning or at night. Most drugs known to be affected by circadian rhythms are directly related to metabolism (e.g., statins and heart medications of various sorts); it’s still unclear whether the efficacy of “behavior-managing” drugs like selective serotonin reuptake inhibitors (SSRIs) is similarly affected (De Giorgi, 2013). I suspect it is, because “82.2% of genes coding for proteins that have been identified as druggable targets by the U.S. Food and Drug Administration show cyclic changes in transcription in at least one tissue” (Mure, 2018). Additionally, drugs affecting melatonin production are sensitive to circadian rhythms (Simko, 2009), melatonin is produced by gut bacteria (Anderson & Maes, 2015), and melatonin is a major factor in the gut-brain axis (what you eat impacts your behavior). Thus, the evidence favors my suspicion that drugs affecting behavior should, like statins and heart drugs, be timed to circadian rhythms.

Our understanding of the chemical and genetic foundations of circadian rhythms comes from studies in cyanobacteria, and the core machinery is conserved throughout the animal kingdom, but most knowledge of the SCN comes from experiments on mice. Mice are nocturnal, so we’ve assumed that the body’s pacemaker activities in diurnal animals like humans are simply shifted by 12 hours. We are probably wrong, or so conclude Mure and colleagues, who performed their experiments on diurnal baboons. Their research suggests not only that this “12-hour shift” assumption about gene expression is incorrect, but also that the baboon differs from the mouse on a region-by-region basis. What struck me as especially important to the evolution of human cognition was what they observed about circadian gene expression in the cerebellum, one of the regions differing between nocturnal mice and diurnal baboons.

I conceptualize the cerebellum as a region specialized in “simple” computations utilized by all other brain regions, acting analogously to a “parallel computer” that expands the computational power of functionally specialized regions. The cerebellum’s computational power is probably accounted for by the sheer number of its small, uncomplicated granule neurons, comprising about 75% of the brain’s total neurons. Even small neurons are voracious energy consumers, so the cerebellum is probably more sensitive to solar cycles than other brain regions. Not surprisingly, Mure and colleagues observed a “quiescent period” in the genes transcribed (active) in the cerebellum. The set of genes active in a cell is its “transcriptome,” and the same type of neuron in different brain regions has a different transcriptome, bespeaking each region’s specialized function. Mure and colleagues observed that cerebellar gene expression slows down in the first half of the night, suggesting the cerebellum is less capable of performing its most powerful calculations during this time. Metaphorically, the cerebellum announces, “Sorry, other brain regions, I’m offline now.” This reconciles with the belief that memory consolidation occurs during sleep, a process that presumably requires less energy. Thus, I suspect the brain’s computational power, and consequently its cognitive capacity, is suboptimal during the first half of the night, which is important information for late-hour work and for ancient humans alike.


Mure, L. S., Le, H. D., Benegiamo, G., Chang, M. W., Rios, L., Jillani, N., … & Panda, S. (2018). Diurnal transcriptome atlas of a primate across major neural and peripheral tissues. Science, eaao0318.

De Giorgi, A., Menegatti, A. M., Fabbian, F., Portaluppi, F., & Manfredini, R. (2013). Circadian rhythms and medical diseases: does it matter when drugs are taken? European Journal of Internal Medicine, 24(8), 698-706.

Simko, F., & Pechanova, O. (2009). Potential roles of melatonin and chronotherapy among the new trends in hypertension treatment. Journal of Pineal Research, 47(2), 127-133.

Anderson, G., & Maes, M. (2015). The gut–brain axis: The role of melatonin in linking psychiatric, inflammatory and neurodegenerative conditions. Advances in Integrative Medicine, 2(1), 31-37.

Kim, Y. H., Marhon, S. A., Zhang, Y., Steger, D. J., Won, K. J., & Lazar, M. A. (2018). Rev-erbα dynamically modulates chromatin looping to control circadian gene transcription. Science, eaao6891.

Dierickx, P., Van Laake, L. W., & Geijsen, N. (2018). Circadian clocks: from stem cells to tissue homeostasis and regeneration. EMBO reports, 19(1), 18-28.



Social scientists claim to have demonstrated that bullying in early adolescence causes behavioral problems in late adolescence and adulthood, like depression, anxiety, and anti-social behavior (see Juvonen & Graham, 2014 and Arseneault, 2017) and, according to a recent paper (Earnshaw, 2017), substance abuse. In response, “anti-bullying” campaigns have sprung up across the nation. I contend that “bullying research” is fatally flawed and that “anti-bullying” campaigns are excessive and hurt our children by creating unreasonable expectations and feelings of entitlement. I’ll focus on Earnshaw and colleagues’ paper because examples are usually the best way to make a general point.

Earnshaw and colleagues measure one variable, “peer victimization in early adolescence,” and determine its relationship to two other variables, depression and substance abuse. Structural equation modeling is used to evaluate the relationships among variables in support of a hypothesis that an explanatory (independent) variable, bullying, causes the response (dependent) variables, depression and substance abuse. Of course, causation cannot be assumed, because establishing it requires manipulation of variables, which is obviously impossible here. However, a strong statistical correlation between an explanatory and a response variable is usually taken as suggestive of causation. The overarching purpose is laudable, as all parents and behavioral scientists want to determine why some, but not all, young adults are depressed substance abusers. Is it early adolescent peer victimization, or bullying? That is, if we prevent adolescent bullying, will young and older adults be less likely to suffer depression, anxiety, substance abuse, and so on?
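The danger of reading causation into such correlations can be shown with a short simulation. This is my own sketch, not Earnshaw’s data or model: an unmeasured predisposition (here, baseline anxiety) drives both reported victimization and later depressive symptoms, producing a sizable correlation even though neither variable causes the other.

```python
# Toy simulation (not Earnshaw's data): a latent confounder can produce a
# strong victimization -> depression correlation with zero direct effect.
import random

random.seed(0)

n = 5000
victimization, depression = [], []
for _ in range(n):
    anxiety = random.gauss(0, 1)            # unmeasured predisposition
    v = 0.8 * anxiety + random.gauss(0, 1)  # reported victimization
    d = 0.8 * anxiety + random.gauss(0, 1)  # later depressive symptoms
    victimization.append(v)                 # note: d does NOT depend on v
    depression.append(d)

def pearson_r(xs, ys):
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson_r(victimization, depression)
print(round(r, 2))  # ~0.4: sizable correlation despite no causal path
```

A model that omits the confounder, as bullying studies omit preexisting behavioral states, would report this correlation as if victimization explained depression.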

Most definitions of bullying paraphrase the CDC: “any unwanted aggressive behavior(s) by another youth or group of youths, who are not siblings or current dating partners, that involves an observed or perceived power imbalance. These behaviors are repeated multiple times or are highly likely to be repeated” (CDC, 2018).

With that background, I jump into the fatal flaw in bullying research: the collected data have low external reliability, yet results are usually reported with high confidence. Earnshaw concludes, “our results from 3 US metropolitan areas show that youth who experience more frequent peer victimization in the fifth grade are more likely to engage in alcohol, marijuana, and tobacco use in the tenth grade.” Most bullying research suffers these problems: 1) self-reports, especially when querying youngsters, have very low external reliability, yet data gleaned from self-reports are entered as cardinal or ordinal values; 2) baseline behavioral states are seldom measured, such as preexisting conditions affecting the response variable; 3) sample populations consist of volunteers, i.e., convenience samples; and 4) attrition in longitudinal studies is very high (16.5% of Earnshaw’s sample dropped out).

The third and fourth problems should be self-evident, but bullying research usually downplays or ignores them. As to the first problem, it boils down to the adage, “garbage in, garbage out.” Take another look at the CDC’s definition of bullying. The questions Earnshaw posed to fifth graders are typical: “How often do kids kick or push you in a mean way?” and “How often do kids tell nasty things about you to others?” The authors claim “adequate reliability,” by which they mean internal consistency. That may be, but what about external reliability? A reasonable person would expect almost every kid in Western cultures to answer, “Yes, I’ve experienced unwanted and repeated aggressive behavior from other youths.” Converting answers to an ordinal scale (as Earnshaw does) does not solve the problem, as one kid’s definition of “unwanted” and “aggressive behavior” will often differ from another kid’s.

The second problem is even more critical. All humans vary in their reaction to the same social encounters. Earnshaw controlled for a host of variables, but not for predispositions. Some kids are born more anxious than others, which is why their “unwanted” or “aggressive” experience with others differs from another kid’s. More importantly, errors during development or inherited genotype predispose many humans to anxiety-related disorders. If you don’t even measure a participant’s preexisting behavioral states, correlating later factors (e.g., bullying) with an ending behavioral state (e.g., depression, substance abuse) is meaningless. Arguments that we cannot eliminate these problems in human studies are sound, but that necessarily means high confidence cannot be achieved.

Finally, the adolescent period is one of learning to “navigate” a complex social network. Learning only takes place if the child has “good” and “bad” social encounters from which to compare reward values for use in future social decisions. Creating an environment in which “bad” social encounters are eliminated (e.g., an “entitlement” to only pleasant social encounters) denies children a valuable component of learning. Many argue that this perpetuates “violence” (nonsense; that’s a straw man) or point to a report from the National Academies of Sciences, Engineering, and Medicine (NASEM, 2016) that opens, “Bullying has long been tolerated by many as a rite of passage among children and adolescents. There is an implication that individuals who are bullied must have ‘asked for’ this type of treatment or deserve it. Sometimes, even the child who is bullied begins to internalize this idea.” That rejects legitimate, contrary evidence out of hand, and such preconceived rejection is contrary to good scientific method.

For these reasons, bullying research as currently practiced is not scientific research.

Earnshaw, V. A., Elliott, M. N., Reisner, S. L., Mrug, S., Windle, M., Emery, S. T., … & Schuster, M. A. (2017). Peer victimization, depressive symptoms, and substance use: a longitudinal analysis. Pediatrics, e20163426.

Sex Is a Biological Variable

Many suggest that learning about and eliminating “gender bias” can erase any sex differences in non-reproductive behavioral traits. I disagree, believing instead that the brain is sexually dimorphic: there are significant differences between the sexes in cognition, the “processes involved in acquiring, storing, and using information from the environment” (Shettleworth, 2013; Williams, 2017). The National Institutes of Health joins me in this assumption, issuing notice NOT-OD-15-102, which reads in part, “NIH expects that sex as a biological variable will be factored into research designs, analyses, and reporting in vertebrate animal and human studies” (NIH, 2015).

Everybody champions gender equity (or should), a social goal striving for equal treatment of males and females. I propose that we can reach this lofty goal by assuming brains are sexually dimorphic and weighting selection and reward criteria so that the total weights of male- and female-dominated traits are equal.

Before continuing, let’s understand what “women exhibit X behavioral trait” and “predisposition” mean. The former means the trait is a population characteristic; any single woman or man may “violate” the population’s characteristic. Moreover, most behavioral traits are complex and continuous rather than discrete, so plotting individual variation produces a bell curve. The curve might be smooth and balanced, like a chart of variation in height, with roughly equal numbers of shorter- and taller-than-average humans. Or the curve might be skewed to one side or the other. Thus, for any behavioral trait there are two bell curves, one for women and one for men (it’s okay to disagree, because whether the trait is sexually dimorphic or not, my proposal ensures equal treatment). When the two curves describing the same behavior are overlaid, the areas under them overlap. Thus, men are on average taller than women, but some women are much taller than some men.
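The overlap of two such bell curves can be computed directly. The sketch below assumes equal-variance normal distributions, and the height parameters are purely illustrative values I chose for the example, not measured population statistics:

```python
# Sketch: shared area under two equal-variance normal curves, using
# illustrative (not measured) parameters for adult height in cm.
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def overlap_coefficient(mu1, mu2, sigma):
    """With equal variances the two curves cross midway between the means,
    so the shared area is OVL = 2 * Phi(-|mu1 - mu2| / (2 * sigma))."""
    d = abs(mu1 - mu2)
    return 2.0 * normal_cdf(-d / (2.0 * sigma))

# Hypothetical means/SD chosen only to illustrate the point:
ovl = overlap_coefficient(175.0, 162.0, 7.0)
print(f"{ovl:.0%}")  # ~35%: a large shared area despite different means
```

Even with a mean difference of nearly two standard deviations, roughly a third of the area is shared, which is exactly why a population-level difference says little about any individual.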

Now, what does “predisposed” mean? Most humans (like most animals) exhibit consistent behavioral patterns in response to an environmental context (their “temperament”); they are “predisposed to X.” They don’t always show this behavior, but the chances are greater they will display X rather than Y behavior. People can change and learn new behavior, but we can put this aside. We only need to accept the common-sense observation that learning to change behavioral predispositions is not an easy task.

All organizations—businesses, schools, non-profits, and government agencies—establish behavioral criteria for hiring, salary, promotion, and other types of rewards. For instance, sales or “customer counts” are vital to most organizations, directly or indirectly. Organizations seek and reward individuals who successfully “get customers,” and “getting a customer” means competing with others. Thus, organizations hire and reward people who perform well in a competitive environment. Men (probably) thrive in competition whereas women eschew it. That is, men want to “win” whereas women want to create mutually rewarding relationships. If you assess people on competitiveness, you favor the population of all men over the population of all women. Inequity results.

That’s the case in America. Like “is competitive,” most selection criteria favor men. As a result, more men than women occupy sales-oriented positions, including leadership and management. Leaders are good salespeople; they must “sell” the organization’s goals. Embracing the sexually dimorphic brain means organizations should balance “competitiveness” with an assessment of an individual’s skill at maintaining relationships, which favors women because they are (purportedly) better than men at affiliative behavior.

Thus, embracing the sexually dimorphic brain means the list of rewarded traits now contains at least two criteria: “is competitive” and “demonstrates affiliative behavior.” There’s no need to train women to be more competitive or men to exhibit more affiliative behavior (although it’s desirable). We simply need to ensure that the total weight of all (purportedly) male-dominated traits equals that of all female-dominated traits in any decision matrix for hiring and rewarding people. The list of sexually dimorphic traits that David Schmidt compiled is helpful (Schmidt, 2017).
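The proposal can be sketched as a tiny decision-matrix calculation. The criteria, candidate scores, and 50/50 split below are hypothetical, chosen only to illustrate the weighting idea:

```python
# Sketch of the proposal: give (purportedly) male- and female-dominated
# criteria equal TOTAL weight in a hiring/reward decision matrix.
def weighted_score(scores, weights):
    assert set(scores) == set(weights)
    return sum(scores[c] * weights[c] for c in scores)

# Hypothetical trait groups (illustrative, not an endorsed taxonomy):
male_dominated = ["is competitive"]
female_dominated = ["demonstrates affiliative behavior"]

# Split total weight 50/50 between the two groups, then evenly within each:
weights = {}
for group in (male_dominated, female_dominated):
    for criterion in group:
        weights[criterion] = 0.5 / len(group)

candidate = {"is competitive": 9, "demonstrates affiliative behavior": 6}
print(weighted_score(candidate, weights))  # 0.5*9 + 0.5*6 = 7.5
```

The point of the construction is that adding more criteria to one group never inflates that group’s total influence, since each group’s weights always sum to 0.5.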

In conclusion, let’s stop arguing that males and females are equal or can be trained to be so in any behavioral trait. Instead, let’s embrace the sexually dimorphic brain, balance the weight given to male- and female-dominated traits, and finally achieve gender equity.


NIH (2015). National Institutes of Health Notice Number NOT-OD-15-102: Consideration of Sex as a Biological Variable in NIH-funded Research. Release date: June 9, 2015.

Schmidt, D. (2017). Sculpted by Evolution. Psychology Today, November, 2017.

Shettleworth, S. J. (2013). Fundamentals of Comparative Cognition. New York, NY: Oxford University Press.

Williams, C. A. (2017). What is Cognition? CogQuiz blog.

Football and the Brain

In 2016, HBO’s Real Sports with Bryant Gumbel reported on the occurrence of brain damage in pee-wee through high school football players. The bottom line from the show: you don’t have to be a professional football player, or even good enough to make the NFL, before the brain damage associated with playing football begins. If that were not enough, the wife of perhaps the greatest quarterback in the history of football, Tom Brady, reported that he had suffered a concussion in the 2016-2017 season; she hoped he would retire. Then, in a recent New York Times article, we learned that an examination of Aaron Hernandez’s brain showed a severe case of chronic traumatic encephalopathy (CTE). Brain damage associated with CTE does not make one a killer. However, football takes large, strong men and teaches them to perform in a violent and aggressive game. There is now overwhelming evidence that football is a sport in which frequent blows to the head are associated with brain damage, and it is possible that the damage associated with CTE exacerbates impulsive and aggressive behavior. The July 25, 2017 issue of the New York Times provided a summary of an article published in The Journal of the American Medical Association by Dr. Ann McKee and colleagues (Mez et al., 2017) on CTE in 202 football players, of whom 111 had played in the National Football League (NFL).

Mez and colleagues set out to determine the incidence of neuropathology and clinical pathology in deceased American football players. They examined the brains of 202 deceased players ranging from pre-high school to NFL players. In general, as the level of play increased, the level of neuropathology and the rate of CTE diagnosis increased. For players who died in 2014 or later, informant reports on behavior, mood, and cognitive symptoms of dementia were obtained. In general, as the neuropathological signs of CTE increased, behavioral and mood impairments increased, as did the likelihood of dementia. In considering these findings, it is necessary to note that this was a sample of convenience: the sample was not random but was made up of brains that individuals or family members had volunteered for examination. Regardless, the high incidence of neuropathology and behavioral/dementia symptoms in this nonrandom sample strongly suggests an association with playing football.

To paraphrase Waylon and Willie, “Mammas, don’t let your babies grow up to be football players.”

Mez, J., et al. (2017). Clinicopathological evaluation of chronic traumatic encephalopathy in players of American football. JAMA. 318(4):360-370.

Poverty and Brain Development

Poverty (low socioeconomic status; SES) is clearly associated with poor academic and career success as well as higher rates of obesity and anxiety-related behaviors. Poor neighborhoods seem to predispose children to anti-social behavior, with crime correspondingly increasing. Finally, evidence increasingly reveals that the brains of children raised in poverty develop differently, which appears to contribute to the noted associations. However, these are correlations. Doing something about the problem requires causal evidence. Does poverty cause these costly problems?

The answer is complicated, in that we cannot experimentally manipulate growing up in poverty, which is the only way to establish causality. However, the answer, “No, poverty is not causal,” seems obvious. Lowering the official poverty line tomorrow is unlikely to do anything for children suddenly finding themselves “technically” above it. Moreover, many poor children are perfectly fine; low SES apparently had no effect on their brain development or health, and scores of children in the poorest nations on earth are also fine.

Clearly, “something” more closely associated with poverty than wealth is causal. That “something” must be an environmental factor because the brain and body “building block genes” are probably the same among poor and wealthy populations (although research into this morally unappealing possibility continues; it is theoretically possible that genetic variation largely accounts for SES). However, a gene-by-environment (G x E) interaction could be involved. That is, “something” in the environment alters genetic “products” via epigenetic processes, and, perhaps, only some children have those “susceptibility genes.” Diet deficiencies due to food availability or poor choice as well as poor schools are obvious environmental candidates (and easily if not cheaply solved), but is that all?

There is growing evidence that stress is a causal factor linking poverty with poor cognitive capabilities and poor physical health. We’ve known for years that stress damages brain development and health (see McEwen, 2013), but what’s the source of stress among the impoverished? After all, rich and poor kids face roughly the same kinds of stress emanating from their social circles: bullying, feelings of inadequacy, and other stresses resulting from social status and natural hierarchies. Violence is more prevalent in poor communities, but that’s probably because poverty predisposes children to anti-social behavior…which simply brings us back full circle to the problem of poverty and brain development.

The most encouraging evidence comes from a growing body of research into stress caused by perceptions of inequality; in this sense, poverty is a state of mind. Excellent animal research, which can explore causation, supports a link between perceived social inequality and the body’s stress response. Even if the poverty line were lowered tomorrow, or all poor people were given money, the perception of social inequality would remain. The study cited below makes an outstanding contribution to that research. In their cross-sectional study, Parker and colleagues observed that low-SES children had lower cortical thickness, which they further determined was attributable to genes involved in the hypothalamic-pituitary-adrenal (HPA) axis, the brain’s stress-response circuit. The G x E interactions noted above could account for variation among the poor.

Parker, N., Wong, A. P. Y., Leonard, G., Perron, M., Pike, B., Richer, L., … & Paus, T. (2017). Income inequality, gene expression, and brain maturation during adolescence. Scientific Reports, 7.

Brain and Environment

Over half a century ago, Mark R. Rosenzweig at UC Berkeley was asking whether experience could produce observable changes in the brain. The question wasn’t new, but in conjunction with work from his laboratory it was a harbinger of many of the developments in psychology and the neurosciences that we see today. We’ll address current research on experience-induced alterations in the brain (e.g., from exercise, socialization, and cognitive activity) in future blog postings, but in this report we describe some of the early work from Rosenzweig’s laboratory, summarizing the findings described in a February 1972 Scientific American article by Rosenzweig and colleagues.

In the Rosenzweig laboratory, three male rat littermates were assigned to a baseline standard cage condition (typically 2-3 rats to a cage with bedding and a water bottle), an enriched condition (typically 12 rats in a large cage containing an assortment of ‘rat toys’ that were regularly changed), or an impoverished condition (a single rat in a cage with a view of a blank wall in a dimly lit, quiet room). Rats were assigned to these differential environments shortly after weaning and remained in them from a few days to months. After a predetermined duration, effects on the rats’ neurochemistry, neuroanatomy, and behavior were assessed. The conclusions of this early popular report (1972) were somewhat qualified. Rats in the enriched environment showed a thicker cortex and greater total acetylcholinesterase activity than rats in the impoverished condition. Similarly, studies of learning showed consistently superior learning by the enriched rats relative to rats reared in an impoverished environment. The most consistent finding was a greater ratio of cortex weight to subcortex weight in the enriched rats.

One question that arose from these studies was what happens when the animal’s natural environment serves as the baseline condition. Rats placed in a large outdoor enclosure at the Berkeley field station for 30 days showed greater brain changes than littermates maintained in a laboratory enriched condition. Similar findings have been obtained in deer mice and Belding’s ground squirrels.

These early studies may have you thinking about the possible effects of different environments on the human brain and behavior; specifically, what are the effects of poverty on brain and behavior? Perhaps you recall the reports on Genie Wiley, who was isolated by her father until 13 years of age. She could not walk, speak, or socialize when found and remains profoundly impaired. Similar but lesser impairment is reported for Romanian children placed in orphanages.

Ana Levy and Hasker Davis 08/23/2017

Susceptible versus Resilient

Post-traumatic stress disorder (PTSD) is an example of the brain “gone wrong” after trauma. We once called it “shell shock,” but it afflicts more than just soldiers. Most people experience a traumatic event at some time during their life, an event perceived as life threatening (e.g., witnessing a deadly car crash or seeing a fellow soldier killed in battle). Most experience temporary distress, like bad dreams or bouts of crying; they eventually “get over it,” and it never impairs their ability to carry on with life. Some, however, experience worse symptoms. We don’t know for sure, but 8 to 18% of the population have sudden “flashbacks” that dissociate them from whatever is going on now; a car backfires, and they are suddenly terrified; they freeze. They have frequent nightmares and are often moody and “distant,” even from loved ones. They have PTSD…and it lasts a lifetime.

We all respond to trauma in the same way. The brain goes into “alert mode,” our “stress response,” activating the hypothalamic-pituitary-adrenal (HPA) axis. The hypothalamus releases corticotropin-releasing hormone (CRH) and arginine vasopressin (AVP), which together cause the pituitary to release adrenocorticotropic hormone (ACTH). Circulating hormones also reach the amygdala, the brain’s emotional “memory center.” Finally (though all within a matter of seconds), ACTH causes the adrenal cortex to release glucocorticoids, the most common being cortisol. That heightens sympathetic nervous system activity and sharpens the senses needed to respond quickly to a perceived threat. We’re ready for “fight or flight.” However, cortisol “loops back” and dampens the release of CRH, eventually returning the stress response system to its normal state (this is “negative feedback”).
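The feedback loop just described can be caricatured with a toy difference-equation model. This is purely illustrative, not a physiological model: the rate constants and “feedback gain” are arbitrary values I chose, ACTH is collapsed into a single CRH-to-cortisol step, and the units are meaningless. The sketch shows only the qualitative logic: strong cortisol-to-CRH feedback holds cortisol in check under a sustained stressor, while a weakened feedback gain leaves it elevated.

```python
# Toy model of HPA negative feedback: a stressor drives CRH, CRH drives
# cortisol (ACTH collapsed into one step), and cortisol suppresses CRH.
# All constants are arbitrary illustrative values, not physiological ones.
def steady_cortisol(feedback_gain, steps=600, dt=0.1, stressor=1.0):
    crh, cortisol = 0.0, 0.0
    for _ in range(steps):
        dcrh = stressor - feedback_gain * cortisol - 0.5 * crh
        dcortisol = crh - 0.3 * cortisol
        crh += dt * dcrh
        cortisol += dt * dcortisol
    return cortisol

intact = steady_cortisol(feedback_gain=2.0)  # strong cortisol -> CRH feedback
weak = steady_cortisol(feedback_gain=0.1)    # blunted feedback
print(intact < weak)  # True: weak feedback leaves cortisol elevated
```

In this caricature, the steady-state cortisol level is inversely related to the feedback gain, which is one way to picture a stress response system that fails to return to normal.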

The stress response system is adaptive; that is, it evolved via natural selection and is “good for us.” We are resilient to stress. But if it’s good for us, why is it bad for some people? That is, why are some people resilient to trauma while others are susceptible to PTSD?

We don’t have a complete answer, but we can “see” differences in the brains of resilient versus susceptible people (“seeing” here means post mortem assays, self-report diaries, brain scans, and animal studies). The HPA axis of PTSD sufferers is dysfunctional, producing too little of the hormones needed for “negative feedback” and a return to normal. Moreover, the amygdala and its memory functions are abnormal, less able to form new memories while being overwhelmed by memories of the trauma.

Going deeper into the brain, we find that most PTSD sufferers have single nucleotide polymorphisms (SNPs) in several genes key to HPA axis function: FKBP5, NR3C1, CRHR1, and CRHR2. That is, their gene variants (“alleles”) differ. However, some people have these SNPs but don’t suffer PTSD after trauma, and many with PTSD experienced childhood abuse. Thus, we have a strong case that PTSD represents a gene-by-environment (G x E) interaction: early childhood abuse “triggers” those alleles, altering the HPA axis so that it is dysfunctional when adult trauma is experienced.

Clarence A. “Sonny” Williams

Marian C. Diamond

I was saddened to receive, in my New York Times notifications on the brain, a notice of the death of Marian C. Diamond, reported in the paper’s Science section.

I was a psychology graduate student in the laboratory of Mark Rosenzweig during the 1970s. Most of my work was with Mark Rosenzweig and Edward Bennett on the role of brain protein synthesis in long-term memory formation. Other work pioneered by Rosenzweig, Bennett, and Diamond examined brain changes induced by the environment. The brain changes induced by differential laboratory environments (enriched, impoverished, and standard) ran counter to the dogma of the first half of the 20th century, which presented the brain as a fixed structure. I asked to work in this paradigm and was given the assignment of looking at brain changes in young, middle-aged, and old rats placed in differential environments for differing amounts of time. Dr. Diamond provided guidance on which features of the brain I should examine and taught me Golgi staining as well as several other basic techniques I needed for my research. After 18 months of changing the toys and mazes in the enrichment cages daily, and of daily feeding, watering, and cleaning, a respiratory virus struck the rat colony and I lost 90% of my animals. I fell back on the protein synthesis studies I had done for my dissertation. I was disappointed, and Marian Diamond invited me to dinner at her home with her and her husband. They reminded me how much I’d learned, the wonderful people I’d worked with, the great institution I was at, and that the last 18 months were certainly not a loss. I don’t remember much about neuronal staining techniques, but I do remember that Marian Diamond was an exceptional mentor and teacher.

By Hasker Davis