In the nineteenth century, with the emergence of interest in science and scientific principles, there was a focus on linking society’s structures and functions to biological processes, as characterised by early forms of social Darwinism. Among the intellectuals of the day, there were continuing concerns about the ‘woman question’, the increasing demands from women for rights to education, property and political power. This feminist wave served as a rallying call for scientists to provide evidence in favour of the status quo, and to demonstrate how harmful it would be to give power to women – not only for the women themselves but for the whole framework of society. Even Darwin himself weighed in, expressing his concern that such changes would derail mankind’s evolutionary journey. Biology was destiny and the different ‘essences’ of men and women determined their rightful (and different) places in society.
The views expressed by other scientists indicated that they were likely to be less than objective in their approach to this issue. A favourite quote of mine comes from Gustave Le Bon, a Parisian interested in anthropology and psychology. His main focus was on demonstrating the inferiority of non-European races, but he clearly had a special place in his heart for women:
Without a doubt there exist some distinguished women, very superior to the average man but they are as exceptional as the birth of any monstrosity, as, for example, of a gorilla with two heads; consequently, we may neglect them entirely.
Brain size was an early focus in this campaign to prove the inferiority of women and their biology. The fact that the only brains that researchers had access to were dead ones did not stand in the way of trenchant brain-based observations on women’s lesser mental capacities (and, while they were at it, on those referred to at the time as ‘coloured people, criminals and the lower classes’). In the absence of direct access to brains inside the skull, head size was initially adopted as a stand-in for brain size. Le Bon again was an eager exponent of this ‘research’, developing a portable cephalometer which he took around with him to measure the heads of those whose ‘mental constitutions’ would be more or less likely to stand up to the rigours of independence and education. Here we have another example of his penchant for ape comparisons: ‘There are a large number of women whose brains are closer in size to those of gorillas than to the most developed male brains . . . This inferiority is so obvious that no-one can contest it for a moment.’
Skull capacity was another eagerly adopted index in the hunt for ways of proving the link between brain size and intellect. Bird seed or buckshot was poured into empty skulls and the amount required to fill them was weighed. An early finding that, on average, women’s brains were five ounces lighter than men’s by this measure was enthusiastically seized upon as all the proof that was needed. Clearly, Nature had awarded men five extra ounces of brain matter, and this was the secret of their superior abilities and their right to positions of power and influence.
However, there was a flaw in this argument, as the philosopher John Stuart Mill pointed out: ‘a tall and large-boned man must on this showing be wonderfully superior in intelligence to a small man and an elephant and a whale must prodigiously excel mankind’. Various contortions followed, including a brain size–body size calculation, but that didn’t come up with the ‘right’ answer either. This is known in the business as the Chihuahua paradox: if you take the brain/body weight ratio as a measure of intelligence, then Chihuahuas should be the most intelligent dogs of all.
Perhaps more details about the brain’s container, the skull itself, might help to produce the ‘right’ answer? This is where the science of craniology, or skull measurement, stepped in. Based on detailed measurements of every possible angle, height, ratio, forehead perpendicularity and jaw juttedness, craniology seemed to offer a suitable answer. The twists and turns of craniology and its measurements were complex and varied. Facial angles were particularly popular, calculated by looking at the angle in profile between a line drawn horizontally from the nostril to the ear and one from the chin to the forehead. A nice big angle, with the forehead pretty much in line with the chin, was a measure of what was termed ‘orthognathism’; a small acute angle, with a jutting chin way in advance of a receding forehead, was a measure of ‘prognathism’. By devising a scale from orangutans through central Africans to European males, craniologists produced the satisfying finding that orthognathism was characteristic of the evolutionarily superior, higher races. However, with respect to fitting women on this scale, a problem emerged: women, on average, turned out to be more orthognathic than men. Fortunately, help was at hand.
The German anatomist Alexander Ecker, whose paper reported this disturbing observation, noted that advanced orthognathism was also characteristic of children, so on this basis women could be characterized as infantile (and, thus, inferior). These suggestions were backed up by the findings of one John Cleland who, writing in 1870, reported on his painstaking catalogue of thirty-nine different measurements of ninety-six different skulls, which were all either ‘civilised’ or ‘uncivilised’, some male, some female, one a ‘Hottentot chief’, some ‘cretins and idiots’, another a ‘savage Spanish pirate’, and one the skull of a Fife man named Edmunds executed for the murder of his wife. (We are told that Edmunds was from Fife and that he carried out the murder ‘under circumstances of provocation’. We are not told whether either of these two facts earned him a ‘civilised’ or an ‘uncivilised’ classification.) One particular measure in Cleland’s catalogue, the ratio of the arch of the skull to its baseline, neatly ensured that adult females were distinct from adult males, and (mainly) distinguishable from members of ‘uncivilised’ nations.
There was to be no stone unturned (or skull unexamined) in the hunt for the proof of women’s inferiority. One paper used over 5,000 measurements on a single skull. There were seemingly infinite ways of measuring the skull, with the focus on those that not only best differentiated men from women, but also ensured that women were reliably characterised as inferior, either childlike or similar to reviled ‘lower’ races.
A group of mathematicians at University College London soon got involved in the great measuring game, and their findings would end up leaving craniology in disrepute. This group of researchers, headed by Karl Pearson, the father of statistics, also included Alice Lee, one of the first women to graduate from London University. Lee created a mathematically based volumetric formula to work out skull capacity, which she intended to correlate with intelligence. She used this measurement on a group of thirty women students from Bedford College, twenty-five male staff at UCL and (a good move, this) a group of thirty-five leading anatomists who attended a meeting of the Anatomical Society in Dublin in 1898.
The results of her study were the nail in the coffin for craniology; she found that one of the most eminent of these anatomists had one of the smallest heads and, indeed, that one of her future examiners, a Sir William Turner, was eighth from the bottom. The discovery that these eminent men’s heads were on the smaller side magically created a large number of instant converts to the conclusion that linking skull capacity to intelligence was obviously ludicrous (especially as some of the Bedford students had greater cranial capacities than the anatomists). A series of other such studies followed and in a 1906 paper Pearson declared that measures of head size were not an effective indicator of intelligence.
So craniology had had its day, but there were plenty of other sex difference explainers waiting in the wings. Another technique soon evolved out of craniology, which focussed on the mapping of different ‘skill areas’ onto the brain (though, again, without access to the means of directly measuring these). Moving from buckshot to bumps, scientists now focussed on the surfaces of skulls, scrutinising them for evidence of different-sized protuberances, which were taken to reflect the different landscapes of the underlying brains. This led to the infamous ‘science’ of phrenology, developed by Franz Joseph Gall, a German physiologist, who claimed that personality characteristics such as ‘benevolence’, ‘cautiousness’ or even the capacity to produce children could be assessed by measuring the relevant bit of a person’s skull. This technique was popularised by Johann Spurzheim, a German physician who was initially a student of Gall’s but, after a disagreement with him, established his own career as an exponent of phrenology. The claim of this system was that the different-sized bumps on the skull reflected the different sizes of the many different ‘organs’ of the brain, and that these organs controlled different individual characteristics such as combativeness, philoprogenitiveness or cautiousness. Again, there was, perhaps unsurprisingly, a neat matching of the bigger bumps on male skulls with superior faculties.
Phrenology became particularly popular in the United States and, in some circles, was enthusiastically adopted by women. In an odd sort of early self-help movement, women were encouraged to ‘know thyself’ by getting their phrenological profile read. One strange outcome was the simpering claim that this ‘science’ provided proof that ‘we women’ were indeed lower down a social hierarchy than our differently bumped male counterparts and that we should, with relief, acknowledge our place in the pecking order.
Phrenology eventually fell into disrepute by the middle of the nineteenth century, partly because of the unreliability of the measurements and the lack of any systematic testing of its theories. But the notion that specific psychological processes could be localised to discrete areas of the brain lived on, partly supported by the emergence of neuropsychology, matching parts of the brain to specific aspects of behaviour. Scientists began to study patients who had suffered significant injuries to specific parts of the brain in the hope that their ‘before and after’ behaviour would reveal the exact function of those parts. In the mid-nineteenth century, the French physician Paul Broca established a link between localised damage in the left frontal lobe and speech production. His first clue came from the post-mortem examination of the brain of a patient called ‘Tan’, thus named because that was all he could say, although it was clear he could understand speech. The area of damage that was discovered, on the left-hand side of Tan’s frontal lobe, is still called Broca’s area.
More powerful evidence of the links between brain and behaviour was shown by the reported changes in behaviour of one Phineas Gage, an American railway worker who, in 1848, while preparing to blast rocks by tamping down blasting powder with an iron rod, set off an explosion which blew the rod through his left cheek and out of the top of his head, taking a substantial chunk of his frontal lobes with it. He was treated and subsequently studied by the physician John Harlow, who wrote up his observations in two papers with the informative titles of ‘Passage of an Iron Rod through the Head’ (1848) and ‘Recovery from the Passage of an Iron Bar through the Head’ (1868). The reported changes in Gage’s behaviour – sober and industrious before the accident; surly, impulsive, uninhibited and unpredictable after – were interpreted as showing that the frontal lobes were the seat of ‘higher intellect’ and civilised conduct. Forming as they do some thirty per cent of the human brain, as compared to about seventeen per cent in chimpanzees, the suggestion that within these lobes lay the higher powers that make us human made intuitive sense.
Enthusiastic bouts of cortical map making followed, with a focus on pinpointing where in the brain things were happening, more than when or how. Early models of the brain thought of it as a collection of specialised units or modules, each almost solely responsible for some particular skill. So if you wanted to find out where a skill was localised in the brain, you usually studied someone who had lost that skill following a brain injury. Broca’s and Harlow’s patients are probably the best-known examples of this. The loss of a particular part of language by Tan and the change in personality shown by Gage ‘localised’ these aspects of human behaviour to the frontal lobes.
In looking for sex differences, neurologists cheerily matched their assumptions about which bits of the brain were the most important to their findings about which bits of the brain were largest in males, even if it meant reversing earlier conclusions. For example, a paper in 1854 reported that women often had more extensive parietal lobes than men, whose brains were characterised by larger frontal lobes, thus earning the former the generic title of Homo parietalis and the latter Homo frontalis. However, during a brief fashion for identifying the parietal lobes as the seat of human intellect, neurologists had to quickly back-pedal and report that female parietal lobes had in fact been mismeasured and women actually had larger frontal areas than had previously been thought. It was not scientific research’s finest hour.
As the turn of the century approached, declarations of inferiority gave way to references to the ‘complementary’ nature of women’s alternative attributes (as defined, of course, by men). This was a concept that had its roots in eighteenth-century philosophy and ideas that justified the unequal distribution of citizens’ rights. As Londa Schiebinger summarises:
Henceforth, women were not to be viewed merely as inferior to men but as fundamentally different from, and thus incomparable to, men. The private, caring woman emerged as a foil to the public, rational man. As such, women were thought to have their own part to play in the new democracies – as mothers and nurturers.
The ‘complementary roles’ set aside for women ensured their inferior position in (if not, indeed, their absence from) most spheres of influence. A classic example of this approach is Jean-Jacques Rousseau’s enthusiasm for the ‘domestication’ of woman, her weaker constitution and unique mothering skills rendering her unfit for any kind of education or political activism. This was reflected in the opinions of other leading intellectuals such as anthropologist J. McGrigor Allan, who claimed when talking to the Royal Anthropological Institute in 1869:
In reflective power, woman is utterly unable to compete with man; but she possesses a compensating gift in her marvellous faculty of intuition. A woman will (by a power similar to that sort of semi-reason by which animals avoid what is hurtful, and seek what is necessary to their existence) arrive instantaneously at a correct opinion on a subject to which a man cannot attain, save by a long and complicated process of reasoning.
As well as only being blessed with animal-like semi-reason, women’s inferior biology was also identified as further justification for exclusion from the corridors of power. The vulnerability caused by the demands of their reproductive system was a constant thread in the assertions. McGrigor Allan again, also apparently an expert on the effects of menstruation, declared:
At such times, women are unfit for any great mental or physical labour. They suffer under a languor and depression which disqualify them for thought or action, and render it extremely doubtful how far they can be considered responsible beings while the crisis lasts . . . Much of the inconsequent conduct of women, their petulance, caprice, and irritability, may be traced directly to this cause . . . Imagine a woman, at such a time, having it in her power to sign the death-warrant of a rival or a faithless lover!
The contention of a direct link between biology and brain meant that overtaxing one could damage the other. In 1886, William Withers Moore, then president of the British Medical Association, warned of the dangers of overeducating women, asserting that their reproductive systems would be affected and they would succumb to the disorder ‘anorexia scholastica’, becoming more or less sexless and certainly unmarriageable. Although the importance of ‘mate choice’, a keystone of Darwin’s theory of sexual selection, was not much in vogue at this time, a woman’s status was certainly closely determined by who she was married to, so diminishing your chances on the marriage market was a significant social threat.
The century came to a close with brain differences still a given, with the added acknowledgement of the fragility and vulnerability of the female. This was helpfully illustrated by the many ‘mad, bad or sad’ heroines in the literature of the time; women like Charlotte Brontë’s Lucy Snowe, the heroine of Villette, George Eliot’s Maggie Tulliver, from The Mill on the Floss, or Catherine Earnshaw, the heroine of Emily Brontë’s Wuthering Heights, were all doomed by their wilful attempts to overturn the natural order of things.
This is an extract from The Gendered Brain by Gina Rippon.