Category Archives: Sport

Testosterone And Its Androgenic Anabolic Derivatives – One Small Drop Of Liquid Hormone That Can A Man Make And Can A Man Break

I watched a great FA Cup football final last night, and was amused as always when players confronted each other after tackles with aggressive postures and pouting, anger-filled stares – all in front of a huge crowd and under the eyes of the referee there to protect them. On Twitter yesterday and this morning I was engaged in a fun scientific debate with some male colleagues, and noted that each time the arguments became ‘ad hominem’ the protagonists became aggressive and challenging in their responses, and only calmed down and became civil again when they realized it was banter. I have over many years watched my wonderful son grow up daily, and now that he is ten I have observed some changes occurring in him that are related to the increasing development of ‘maleness’ which occurs in all boys of his age. In my twenties, while completing my medical and PhD training, I worked part time as a bouncer, and it was always fascinating to see how the behaviour of the males in the bars and clubs I worked in changed when around females ‘dressed to kill’ and out for the evening. With the addition of alcohol this became a dangerous ‘cocktail’ late in the evenings, with violence often breaking out as the young men tried to establish their dominance and ‘turf’, or responded to perceived slights which ‘honour’ demanded they answer, all of which created a lot of work for me in my bouncer role. All this got me thinking of the male hormone testosterone and its effect on males through their lifetime, both good and bad.

Testosterone is the principal male sex hormone that ‘creates’ the male body and mind from the genetic chromosomal template supplied at conception. It is mostly secreted by the testicles in men, and to a lesser degree by the ovaries in women, with some secretion also from the adrenal glands. Testosterone concentrations are approximately 7-8 times higher in males than in females, but females are also susceptible to (and may even be more sensitive to) its actions. Testosterone is a steroid-type hormone, derived from cholesterol-related substances which are converted into testosterone through a complex pathway of intermediate products. Its output from the testes (or ovaries) is stimulated by a complex cascade of neuro-hormonal signals arising from brain structures: gonadotrophin-releasing hormone is released by the hypothalamus and travels to the pituitary gland, which in turn releases luteinizing hormone and follicle stimulating hormone, which travel in the blood to the testicles and cause the release of testosterone into the bloodstream, all in response to a variety of external and internal stimuli (though what controls this cyclical release over many years is almost completely unknown). The nature of ‘maleness’ has been debated as a concept since antiquity, but it was in the 1800’s that real breakthroughs occurred in the understanding that there is a biological basis to ‘maleness’, with hormones being identified as chemical substances in the blood, and scientists such as Charles Brown-Sequard doing astonishing things like injecting extracts of crushed animal testicles into their own bodies to demonstrate the ‘rejuvenating’ effect of the ‘male elixir’.
Eventually, in the 1930’s, the ‘golden age’ of steroid chemistry, testosterone was isolated as the male hormone – its name is a conglomerate derivative of the words testicle, sterol and ketone – its structure was identified, and synthetic versions of testosterone were produced as medical treatments for folk suffering from low testosterone production due to hypogonadism (reduced production of testosterone due to testicular dysfunction) or hypogonadotropism (reduced production of testosterone due to dysfunction of the ‘higher’ level testosterone release control pathways in the brain described above).

Testosterone acts in both an anabolic (muscle and other body tissue building) and androgenic (male sex characteristic development) manner, and one of the most fascinating things about it is that it acts in a ‘pulsatile’ manner during life – increasing dramatically at very specific times in a person’s life to effect changes that are absolutely essential for both the development and maintenance of ‘maleness’. For example, in the first few weeks after conception in males there is a spike in testosterone concentration in the foetus that results in the development of the genitals and prostate gland. Again, in the first few weeks after birth testosterone concentrations rise dramatically, before attenuating in childhood, after which a further increase occurs in the pre-pubertal and pubertal phases, when it is responsible for increases in muscle and bone mass, the appearance of pubic and axillary hair, adult-type body odour and oily skin, increased facial hair, deepening of the voice, and all of the other features associated with (but not all exclusive to) ‘maleness’. If one of these phases is ‘missed’, normal male development does not occur. As males age, the effects of the continuously raised testosterone associated with adulthood become evident as loss of scalp hair (male pattern baldness) and increased body hair, amongst other changes. From around the age of 55 testosterone levels decrease significantly, and remain low in old age.
Raised testosterone levels have been related to a number of clinical conditions that in the past have been more common in males than females, such as heart attacks, strokes and lipid profile abnormalities, along with increased risk of prostate cancer (unsurprisingly a male-specific disorder) and other cancers, although not all studies support these findings. The differences in the gender-specific risk of cardiovascular disorders in particular are decreasing as society has ‘equalized’ and women’s work and social lives have become more similar to those of males, in comparison to the more patriarchal societies of the past.

More interesting than the perhaps ‘obvious’ physical effects are the psychological effects of testosterone on ‘male type’ behaviour, though of course the ‘borders’ between what is male or female type behaviour are difficult to clearly delineate. Across most species testosterone levels have been shown to be strongly correlated with sexual arousal, and in animal studies, when an ‘in heat’ female is introduced to a group of males, their testosterone levels and sex ‘drive’ increase dramatically. Testosterone has also been correlated with ‘dominance’ behaviour. One of the most interesting studies I have ever read about examined the effect of testosterone on monkey troop behaviour. Monkey troops have strict social hierarchies, with a dominant male who leads the troop, submissive males who do not challenge him, and females who are ‘serviced’ only by the dominant male and do not challenge his authority. When synthetic testosterone was injected into the males, the dominant male became increasingly ‘dominant’ and aggressive, and showed ‘challenge’ behaviour (standing tall with taut muscles in a ‘fight’ posture, angry facial expressions, and angry calls, amongst others) more often than usual; in contrast, the testosterone injections had no effect on the non-dominant male monkeys. When the females were injected with testosterone, most of them became aggressive, and challenged the dominant male and fought with him. In some cases the females beat the dominant male in fighting challenges, and became the leader of the troop. Most interestingly, when the testosterone injections were discontinued, these newly dominant females did not revert back to their prior submissive status, but remained troop leader and maintained their dominant behaviour even with ‘usual’ female levels of testosterone.
This fascinating study showed that there is not only a biological effect of testosterone in social dominance and hierarchy structures, but that there is also ‘learned’ behaviour, and when one’s role in society is established, it is not challenged whatever the testosterone level.

Raised testosterone levels have also been linked with levels of aggression, alcoholism, and criminality (being higher in all of these conditions), though this is controversial and not all studies support these links, and it is not clear from the ‘chicken and egg’ perspective whether increased aggression and antisocial behaviour is a cause of increased testosterone levels, or a result of it. It has also been found that athletes (both male and female) have higher levels of testosterone during sport participation, as do folk watching sporting events. In contrast, both being ‘in love’ and fatherhood appear to decrease levels of testosterone in males, and this may be a ‘protective’ mechanism to attenuate the chance of a male ‘turning against’ or being aggressive towards his own partner or children. Whether this is true or not requires further work, but clearly there is a large psychological and sociological component to both the functionality and requirements of testosterone, beyond its biological effects. One of the most interesting research projects I have been involved with was at the University of Cape Town in the 1990’s, where, along with Professor Mike Lambert and Mike Hislop, we studied the effect of testosterone ingestion (and reduction of testosterone / medical castration) on male and female study participants. We found not only changes in muscle size and mass in those taking testosterone supplements, but also that participants ingesting or injecting testosterone had to control their aggression levels and be ‘careful’ of their behaviour in social situations, while women participants described that their sex drive increased dramatically when ingesting synthetic testosterone.
In contrast, men who were medically castrated described that their libido was decreased during the study time period when their testosterone levels were reduced by testosterone antagonist drugs to very low levels (interestingly they only realized this ‘absence’ of libido after being asked about it). All these study results confirm that testosterone concentration changes induce both psychological and social outcomes and not just physical effects.

Given in particular its anabolic effects, testosterone and its synthetic chemical derivatives, known commonly as anabolic steroids, became attractive to athletes as performance enhancing drugs in the late 1950’s and 1960’s, as a result of testosterone being mass produced synthetically from the 1930’s, and as athletes became aware of its muscle (and therefore strength) building capacity after its use in clinical populations. Until the 1980’s, when testing for it as a banned substance made it risky to use, anabolic steroids were used by a large number of athletes, particularly in the strength and speed based sporting disciplines. Most folk over 40 years old will remember Ben Johnson, the 1988 Olympic 100m sprint champion, being stripped of his winner’s medal for testing positive for an anabolic steroid hormone during a routine within-competition drug test. Testosterone is still routinely used by body-builders, and worryingly, a growing number of school level athletes are suggested to be using anabolic steroids, alongside growing use of them as a ‘designer drug’ in gyms to increase muscle mass in those who have body image concerns. An interesting study / article pointed out that boys’ toys have grown much more ‘muscular’ since the 1950’s, and that this is perhaps a sign that society places more ‘value’ on increased muscle development and size in contemporary males; in a circular manner this probably puts more pressure on adolescent males to increase their muscle size and strength due to perceived societal demands, and thereby increases the pressure on them to take anabolic steroids. There is also suggested to be an increase in the psychological disorder known as ‘muscle dysmorphia’ or ‘reverse anorexia’ in males, where (mostly) young men believe that no matter how big they are muscle-size wise, they are actually thin and ‘weedy’, and they ‘see’ their body shape incorrectly when looking in the mirror.
This muscle dysmorphia population is obviously highly prone as a group to the use (perhaps one should say abuse) of anabolic steroids. There also appears to be an increase in anabolic steroid use in the older male population group, perhaps due to a combination of concerns about diminishing ‘male’ function with increasing age, a desire to maintain sporting prowess and dominance, and a perception that a muscular ‘body beautiful’ is still desirable to society even in old age – which is a concern due to the increased cardiovascular and prostate cancer risks taking anabolic steroids can create in an already at-risk population group. There is also a growth in the number of women taking anabolic steroids / synthetic testosterone, both for their anabolic effects and for their (generally) positive effects on sex drive, and a number of women body builders use anabolic steroids for competitive reasons, despite the risk of clitoral enlargement, deepening of the voice, and male-type hair growth, amongst other side effects. Anabolic steroid use therefore remains an ongoing societal issue that needs addressing and further research, to understand both its incidence and prevalence, and to determine why specific population groups choose to use them.

It has always been amazing to me that a tiny biological molecule / hormone, which testosterone is, can have such major effects not only on developing male physical characteristics, but also on behavioural and social activity and interactions with other folk, and in potentially setting hierarchical structures in society, though surely this ‘overt’ effect has been attenuated in modern society, where there are checks and balances on male aggression and dominance, and females now have opportunities equal to men’s in both the workplace and leadership role selection. Testosterone clearly has a hugely important role in creating a successfully functioning male, both personally and from a societal perspective, but it can also be every male’s ‘worst enemy’ without social and personal ‘higher level’ restraints on its potentially unfettered actions. There is a magic in its function when its effects are seen in my young son as he approaches puberty and suddenly his body and way of thinking change, or when its effects are seen (from its diminishment) in the changes of a man in love or in a new father. Perhaps there is magic also in the reduction of testosterone that occurs in old age, as this is likely to be important in allowing the ‘regeneration’ of social structures by allowing new younger leaders to take over from previously dominant males. This attenuation of testosterone levels perhaps makes older males ‘realize’, or more easily accept, that their physical and other capacities are diminished enough to ‘walk away’ gracefully from their life roles, without the surges of competitive and aggressive ‘feelings’ and desires a continuously high level of testosterone might engender in them if it remained high into old age.
But testosterone has an ugliness in its actions too, which was evident in my time working as a bouncer in bars and clubs, when young men became violent with other young men as a way of demonstrating their ‘maleness’ to the young females who happened to be in the same club and were the (usually) unwitting co-actors in this male mating ritual drama, which enacted itself routinely on most Friday and Saturday nights, usually fuelled by too much alcohol. Its ugliness is also evident on the sporting field, when males kick other men lying helpless on the ground in a surge of anger at losing the game or at a previous slight, despite doing so within view of a referee, spectators and TV cameras. Its ugliness is also evident in the violence one sees after a soccer game, when fans whose testosterone levels have been raised by watching the game prey on rival fans, and in a myriad of other social situations where males try to become dominant to lever the best possible situation or attract the best possible mate for themselves, at the expense of all those around them – whether in a social or work situation, or a Twitter discussion, or even a political or an academic debate – the ‘male posturing’ is evident for all to see in each situation, whether it is physical or psychological. Perhaps it was not for the sake of a horseshoe that the battle was lost, but rather because of too little, or too much, testosterone coursing through the veins of those directing it. There are few examples as compelling as the function of the hormone testosterone in making male behaviour what it is for demonstrating how complex, exquisite and essential the relationship between biological factors, psychological behaviour and social interplay is. What truly ‘makes up’ a man and what represents ‘maleness’, though, is of course another story, and for another debate!


Anterior Cruciate Knee Ligament Injuries – The End Of The Affair For Most Sports Careers Despite The Injury Unlocking Exquisite Redundant Neuromuscular Protective Mechanisms

I was watching a rugby game recently and saw a player land wrongly in a tackle and immediately collapse to the ground clutching his knee joint, and heard later that he had suffered a ruptured anterior cruciate ligament injury that would keep him from returning to his chosen sport for nine months. Many years ago in my student days, after a few too many beers at a party, I jumped off a low wall, landed wrongly, and tore the meniscus in my left knee. The next day it had swollen up, but I did not think much of it and tried to drive to University, and I always remember the horror I felt when, getting to the bottom of the road, I tried to push in the clutch with my left leg to allow use of the brake at the stop street and my leg would not react at all; I only avoided an accident by turning off the car while working the brake pedal with my right foot. It always puzzled me afterwards why my leg would not respond at all despite my ‘command’ for it to do so, as even with the injury I expected, while it might perhaps be painful, that I would still have reasonable control over my leg movements, which had appeared okay when walking slowly to the car taking my weight on my uninjured leg. Perhaps this triggered a ‘deep’ interest in what controls our muscles and other body functions, and when I started a PhD degree with Professors Tim Noakes, Kathy Myburgh and Mike Lambert as my supervisors at the University of Cape Town in the early 1990’s, I chose to look at neural reflexes and brain control mechanisms regulating lower limb function after anterior cruciate ligament knee injury. So what happens when the knee joint suffers a major injury, and can one ever ‘come back’ from it?

The knee joint is one of the most precarious joints in the body. Compared to the hip and shoulder joints, which have quite a degree of stability generated by their ‘ball and socket’ design, it is simply made up of three individual bones (the femur, tibia and patella) moving ‘over’ each other while being attached to each other by a number of ligaments and muscles, which are pretty much all that creates stability in and around the knee joint. The knee mostly moves in a backwards / forwards (in medical terms flexion and extension) plane, and has a small degree of rotation inwards and outwards, but is basically a ‘hinge’ type joint that moves in one plane only. The major ligaments of the knee joint preventing too much flexion and extension are the anterior cruciate ligament (ACL), which prevents hyper-extension (the lower leg moving too far ‘forwards’ relative to the upper thigh), and the posterior cruciate ligament (PCL), which prevents hyper-flexion of the knee joint. There are also relatively strong ligaments on each side of the knee joint (the medial and lateral collateral ligaments), as well as several ligaments and tendons securing the patella in place at the front of the knee. Two large pieces of cartilage, the medial and lateral menisci, ‘sit’ on the tibia and allow smooth movement to occur across the entire range of movement between the two big bones (femur and tibia) of the knee joint, protecting each of these from the damage which would occur if they ‘rammed’ into each other each time the joint moved without the protection of the two menisci.

While these ligaments (and there are several others in the knee joint beyond those I have described above), tendons and menisci provide the majority of the support that maintains the fidelity of the knee joint, the surrounding muscles – particularly the quadriceps and hamstrings – also provide important secondary support during active movement such as walking or running, when a greater degree of dynamic stability is needed beyond the static stability the ligaments and tendons supply. So muscles are not just creators of movement; they are also important stabilisers of the body’s joints, and there needs to be a high degree of dynamic control of them by the central nervous system during movement to ensure things work ‘just right’, with not too much and not too little force being applied to the joint at any one time during any movement. The hamstring muscles have been shown to be agonists (assistants) of the ACL: when they fire they ‘pull back’ the lower part of the knee joint so as to reduce pressure on the ACL when the knee extends to its limits, while the quadriceps muscles similarly protect the PCL from having too much pressure on it associated with too much flexion of the knee joint (though only at certain angles of the knee joint and not through its entire range of movement). Interestingly, the quadriceps muscles are not just agonists of the PCL but also ‘antagonists’ of the ACL, as their activation can increase hyper-extension pressure on the knee joint (and therefore on the ACL), particularly when the quadriceps contract with the knee in an extended position. So the quadriceps muscles can be the ‘friend’ of the ACL and knee joint, but can also be its ‘foe’.

What is fascinating in this process is the structure and function of the nerve pathways both from and to the knee joint, the ACL and the muscles around them, and how these nerve pathways act differently in the intact ACL as compared to the damaged ACL state. In the intact ACL are mechanoreceptors (receptors which pick up mechanical pressure) which fire when the ACL is put under pressure or moves, sending information back via nerves to the spinal cord and causing increased firing of the hamstring muscles, in order to protect both the ACL and the integrity of the entire knee joint. When the ACL is ruptured, receptors called free nerve endings in the surrounding capsule of the knee joint fire in response to movement of the entire knee joint, which happens to a greater degree in the absence of the ruptured ACL. Importantly, these injury-associated capsular free nerve ending reflexes don’t just increase firing to the hamstring muscles; at the same time they reduce firing to the quadriceps muscles, in order to protect the knee from the further damage which could occur if the quadriceps were maximally active in the absence of the ACL. This free nerve ending pathway is known as a redundant pathway, as it only ‘fires’ when the ACL is damaged, and does not do so normally. Interestingly, the redundant free nerve ending related pathway does not seem to stop working even if the ACL is repaired or replaced, which means that even if one fixes the ligament materially, one cannot ever completely repair the sensitive neuronal control pathways as part of the operation.
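
The two reflex states described above can be summarised in a toy sketch. This is my own simplification for illustration, not a model from any specific study, and the drive values are arbitrary units: with an intact ACL, its mechanoreceptors facilitate the hamstrings under load; after rupture, the redundant capsular free nerve ending pathway both facilitates the hamstrings and inhibits the quadriceps.

```python
# Toy sketch of knee reflex pathways before and after ACL rupture.
# All numbers are arbitrary illustrative units, not physiological data.

def reflex_response(acl_intact: bool, joint_loaded: bool):
    """Return (hamstring_drive, quadriceps_drive) in arbitrary units."""
    hamstrings, quadriceps = 5, 5          # baseline drive to each muscle group
    if not joint_loaded:
        return hamstrings, quadriceps      # receptors quiet at rest
    if acl_intact:
        hamstrings += 3                    # ACL mechanoreceptors -> spinal cord -> extra hamstring firing
    else:
        hamstrings += 3                    # capsular free nerve endings still protect the joint...
        quadriceps -= 3                    # ...but also inhibit the ACL's 'antagonist', the quadriceps
    return hamstrings, quadriceps

print(reflex_response(acl_intact=True, joint_loaded=True))    # (8, 5)
print(reflex_response(acl_intact=False, joint_loaded=True))   # (8, 2)
```

The second output line is the point of the sketch: after rupture the hamstring facilitation persists, but the quadriceps are actively inhibited, which is the deficit the next paragraph discusses.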

While these redundant neural firing pathways are protective and are designed to help prevent the knee from incurring further damage, they are unfortunately not helpful in allowing athletes who suffer ACL injuries to get back to their full strength and return to sport with the one hundred percent function they had prior to suffering the injury. The quadriceps inhibitory firing pathway is a particular problem from a return to sport perspective, as it means that the quadriceps muscles will always be weaker than before the ACL injury. This is borne out by most studies of quadriceps strength after injury, which show a continued deficit of at least 5-10 percent in the injured limb compared to the unaffected limb, and that is when rehabilitation is done after the injury or operation; the deficit is even higher when it is not. Furthermore, the altered firing synergies, even those of the increased hamstring firing, appear to be sub-optimal from a functional pattern of movement perspective, even if they are protective, and there even appear to be whole body / both limb firing pattern changes, with athletes favouring the injured leg and taking more weight on the uninjured limb even if they are unaware of doing this (though some folk speculate that using crutches for a prolonged period of time after ACL injury may be in part a cause of these whole limb and gait changes). These changes are surely at least to a degree responsible for the high rate of re-injury of the damaged ACL observed in those athletes who return to competitive sport after ACL injury, and potentially for the high rate of ACL or other knee joint injury in the unaffected limb which some folk suggest occurs with return to sport after ACL injury.

So, sadly for those who suffer ACL (and other) knee injuries and want to return to competitive sport, or to their pre-injury level of sport, these redundant neural mechanisms between the knee joint and the surrounding muscles, while functionally designed to give a measure of protection to the knee joint when the ACL is damaged or absent, paradoxically ensure by their very activity that the function of the surrounding muscles is attenuated, particularly in the quadriceps. The injured athlete will never have ‘full’ functional activity of the knee joint after the injury, despite having a brilliant surgeon who performs a perfect mechanical replacement of the ACL, and despite the best rehabilitative efforts of either the athlete or those assisting them with their rehabilitation. An athlete has two choices after suffering an ACL injury (and other associated ligament injuries worsen the prognosis even more). Firstly, they can attempt to return to their sport as they did it before their injury, but change how they perform it by ‘compensating’ for the injury – in team sports by improving other aspects of their game so that their reduced capacity for agility and speed is not ‘noticed’, and in individual sports by altering pacing strategy or style of performance (though particularly in individual sports this is not really an option and the loss of competitive capacity is ‘painfully obvious’) – with the awareness that they have a good chance of re-injuring themselves. Secondly, they can downgrade their expectations and level of sport, either retiring from their sport if competitive, or reducing the intensity at which they routinely perform their sport, as hard as it is for athletes to come to terms with having to do this.
But there is no ‘going back’ to what life was like before the injury, and this creates a potential ethical dilemma for those involved in rehabilitating athletes after ACL injury. If one works on increasing, for example, their quadriceps strength, one is ‘going against’ a natural protective mechanism ‘unlocked’ by the ACL injury, and one may paradoxically be increasing the chances of future damage to the athlete through the very rehabilitation with which one is trying to help them. Perhaps one should rather be ‘rehabilitating’ them by working on their psychological mindset, so that they are able to come to terms with the concept of permanent loss of some function of their injured knee, and the need to potentially look for alternative sporting outlets or methods of earning their salaries.

The wonderful period of my life as a PhD student back in the early 1990’s, learning about these exquisite neuromuscular protective mechanisms surrounding the knee joint that are ‘activated’ after knee ligament injury (and potentially meniscal injury too), started a lifelong work ‘love affair’ with the brain and the regulatory mechanisms controlling the different and varied functions of the body that has lasted to this day. It ‘unlocked’ a magical world for me of neural pathways and complex control processes that has ensured a lifetime without boredom and never a moment when I don’t have something to ponder on, apart from initiating an amazing ‘journey’ trying to understand how ‘it all works’. But this scientific exploration has not helped me fix my knee joint after the injury all those years ago – my left leg has never been the same again after that injury, which eventually required a full meniscectomy as treatment, and it still swells up if I run at all, or even if my cycle rides are too long, and the muscles around the affected knee have never been as strong as they were, no matter how much gym work I do for them. So by understanding more about the nature of the mechanisms of response to something as major as an anterior cruciate ligament knee injury, I have also come to understand more about the concepts of fate and acceptance, and that a single bad landing (or indeed having one beer too many leading to that bad landing) can create consequences that there is no ‘going back’ from, and that will change one’s life forever. After a bad knee injury, nature has given us the capacity for a ‘second chance’ by having these redundant protective mechanisms, but that second chance is designed to work at a slower and more relaxed pace, with the caution of experience and the conservatism the injury engenders, rather than with the freedom of expression that comes with youth and the feeling of invincibility associated with it.
Rivers do not flow upstream, and we don’t get any younger as each day passes, and our knee joints sadly will never be the same again after major injury, despite the best surgery and rehabilitation that one gets and does for them. Nature ensures this ‘reduction in capacity’ happens paradoxically for our own ‘good’, and the biggest challenge for clinicians is to understand this and convey that message to the athletes they treat, and for athletes it is to accept this potential ‘truism’ too, and let go of their sporting ambitions and find a quieter, more sedate life sitting on the bank of the river they used to ride the flow of prior to suffering their knee injury. But please left knee, let me have a few more good bike rides in the cool morning air far from the madding crowd, before you pack up completely!

Athlete Pre-Screening For Cardiac And Other Clinical Disorders – Is It Beneficial Or A Classic Example Of Screening And Diagnostic Creep

Last week the cycling world was rocked by the death of an elite cyclist, who died of an apparent heart attack while competing in a professional race. A few years ago, when I was living in the UK, the case of a professional football player who collapsed in the middle of a game as a result of a heart attack, and only survived thanks to the prompt intervention of pitch-side Sports Medicine Physicians and other First Aid folk, received a lot of media attention, and there were calls for increased vigilance and screening of athletes for heart disorders. Many years ago, one of my good friends from my kayaking days, Daniel Conradie, who apart from being a fantastic person won a number of paddling races, collapsed and died of an apparent heart attack while paddling in the sea, doing what he loved best. Remembering all of these incidents got me thinking about young folk who die during sporting events, and whether we clinical folk can prevent these deaths, or at least pick up potential risk factors before they do sport. This is known as athlete screening, or pre-screening of athlete populations, and it is still a controversial concept that is not uniformly practiced across countries and sports, for a variety of reasons.

Screening as a general concept is defined as a strategy used in populations to identify the possible presence of an ‘as-yet-undiagnosed’ disorder in individuals who up to the point of screening have not presented with or reported either symptoms (what one ‘feels’ when one is ill) or signs (what one physically ‘presents with’ / what the clinician can physically see, feel or hear when one is ill). Most medicine is about managing patients who present with a certain disorder or symptom complex and want to be cured, or at least treated so as to retain an optimal state of functioning. Screening for potential disorders is, as described, a strategic method of pre-emptively diagnosing a potential illness or disorder in order to treat it before it manifests in an overt manner, in the hope of reducing later morbidity (suffering as a result of an illness) and mortality (dying as a result of the illness) in those folk being screened. It is also enacted to reduce the cost and burden of clinical care that results when illnesses are not picked up until it is too late to treat them conservatively with lifestyle-related or occupational changes, and costly medical interventions are needed which put a drain on the resources of the state or organizing body that considered the need for screening in the first place. Universal screening involves screening all folk in a certain selected category (such as general athlete screening), while case-finding screening involves screening a smaller group of folk based on the presence of identified risk factors in them, such as a sibling being diagnosed with cancer or a hereditary disorder.

For a screening program to be deemed necessary and effective, it has to fulfil what are known as Wilson’s screening criteria – the condition should be an important health problem, the natural history of the condition should be understood, there should be a recognisable latent or early symptomatic stage, there should be a test which is easy to perform and interpret and is reliable and sensitive (without too many false-positive or false-negative results), treatment of a condition diagnosed via the screening test should be more effective when started early as a result of screening-related diagnosis, there should be a policy on who should be treated if they are picked up by the screening program, and diagnosis and treatment should be cost-effective, amongst other criteria. Unfortunately, there are some ‘side-effects’ of screening programs. Overscreening is when screening occurs as a result of ‘defensive’ medicine (when clinicians screen patients simply to prevent themselves being sued in the future if they miss a diagnosis) or of physician financial bias, where physicians who stand to make financial gain as a result of performing screening tests (sadly) advocate large population screening protocols in order to make a personal profit from them. Screening creep is when, over time, recommendations for screening are made for populations with less risk than in the past, until eventually the cost/benefit ratio of doing them becomes less than marginal, but they are continued for the same reasons as for overscreening. Diagnostic creep occurs when, over time, the requirements for making a diagnosis are lowered, with fewer symptoms and signs needed to classify someone as having an overt disease, or when folk are diagnosed as having a ‘pre-clinical’ or ‘subclinical’ disease. Patient demand is when patients themselves push for screening of a disease or disorder after hearing about it and being concerned about their own or their family’s welfare. 
All of these factors make the implementation of a particular screening program almost always a controversial process, one which requires careful consideration and an understanding of one’s own personal (often subconscious) biases when making decisions about screening or not screening populations, whether as a clinician, health manager or member of the public.
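To make the false-positive problem concrete, here is a minimal sketch of why screening for a rare disorder produces far more false alarms than true cases, even with a seemingly good test. The sensitivity, specificity and prevalence values are purely illustrative, invented for this example, and are not taken from any specific cardiac screening study:

```python
def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Return (true_positives, false_positives, positive_predictive_value)
    for a screening test applied to a whole population."""
    affected = population * prevalence
    unaffected = population - affected
    true_pos = affected * sensitivity            # real cases the test catches
    false_pos = unaffected * (1 - specificity)   # healthy folk wrongly flagged
    ppv = true_pos / (true_pos + false_pos)      # chance a positive is real
    return true_pos, false_pos, ppv

# A rare disorder (1 in 10 000) screened with a seemingly good test (95%/95%):
tp, fp, ppv = screening_outcomes(100_000, 0.0001, 0.95, 0.95)
# Roughly 10 true positives are swamped by roughly 5000 false positives,
# so fewer than 1 in 100 positive results reflects real disease.
```

This is simply Bayes’ theorem at work: when the condition is rare, even a highly specific test flags mostly healthy people, which is exactly why the cost/benefit ratio of population-wide screening is so sensitive to disease prevalence.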

Regarding athlete screening specifically, there is still a lot of controversy regarding who should be screened, what they should be screened for, how they should be screened, and who should manage the screening process. Currently, to my knowledge, Italy is the only country in the world where there is a legal requirement for pre-screening of athlete populations and children before they start playing sport at school (including not just physical examination but also ECG-level heart function analysis). In the USA, American Heart Association guidelines (history, examination, blood pressure and auscultation – listening to the heart sounds with a stethoscope) are recommended, but practice differs between states. In the UK, athlete screening is not mandatory, and the choice is left up to the different sporting bodies. In the Nordic countries, screening of elite athletes is mandated at government level, but not of all athlete populations as happens in Italy. There is ongoing debate about who should manage athlete screening in most countries, with some folk feeling it should be controlled at government level and legislated accordingly, others suggesting it should be controlled by professional medical bodies such as the American Heart Association in the USA or the European Society of Cardiology in Europe, and still others believing it should be controlled by the individual sporting bodies which manage each sporting discipline, or even separately by the individual teams or schools that want to protect both the athletes and themselves by doing so. Obviously, who pays for the screening is a large factor in these debates, and perhaps this is why there is no unanimity in policy across countries, clinical associations and sporting bodies as described above.

The fact that there is no clear world-wide policy on athlete screening is on the one hand surprising, given the often emotional calls to enact it each time a young athlete dies, and also because data from Italian studies have shown that the implementation of their all-population screening programs reduced the incidence of sudden death in athletes from around 3.5/100 000 to around 0.4/100 000 (for those interested, these data are described in a great study by Domenico Corrado and colleagues in the journal JAMA). But the same data also show that there is a relatively low mortality rate to start with – from the above figures, of 100 000 folk playing sport, only 3.5 died while playing sport before the implementation of screening, and a far higher number of folk die each day from a variety of other clinical disorders. The number of folk ‘saved’ is also very small in relation to the cost – a study by Amir Halkin and colleagues calculated, based on cost projections from the Italian study, that a similar 20-year program of ECG testing of young competitive athletes would cost between 51 and 69 billion dollars and would save around 4800 lives, so the cost per life saved would likely range between 10 and 14 million dollars. While each life lost is an absolute tragedy both for that person and for their family and friends, most lawmakers and governing bodies would surely think very carefully before enacting such expensive screening programs, with such low cost/benefit ratios, given the high burden of other diseases that require their attention and funds on a continuous basis, in parallel with athlete deaths. So from this ‘pickup’ rate and cost/benefit ratio perspective, one can see there is already reason for concern regarding the implementation of broad screening trials for athlete populations.
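The cost-per-life figure quoted above is simple arithmetic on the numbers reported in the Halkin cost projection, and can be checked in a couple of lines (the figures themselves are those given in the text; nothing new is assumed here):

```python
# Back-of-envelope check of the Halkin et al. cost projection quoted above:
# a 20-year ECG screening program costing 51-69 billion dollars, saving ~4800 lives.
total_cost_low, total_cost_high = 51e9, 69e9   # dollars over the 20-year program
lives_saved = 4800

cost_per_life_low = total_cost_low / lives_saved    # ~10.6 million dollars per life
cost_per_life_high = total_cost_high / lives_saved  # ~14.4 million dollars per life
```

The division reproduces the 10 to 14 million dollars per life saved quoted in the text, which is the figure lawmakers must weigh against the many other demands on the same health budget.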

Of equal concern is the level of both false-negative and false-positive results associated with athlete screening. False negatives occur when tests do not pick up underlying abnormalities or problems, and in the case of heart screening, if one does not include ECG evaluation in the testing ‘battery’, a high rate of false-negative results is often described for athlete testing. Even ECGs are not ‘fail-proof’, and some folk advocate that heart-specific testing should include even more advanced techniques, including ultrasound and MRI based heart examination, but these are very expensive and even less cost-effective than those described above. False positives occur when tests diagnose a disorder or disease in athletes that is not clinically relevant or indeed does not exist. In athletes this is a particular problem when screening for heart disorders, as routine exercise is known to increase heart size to cope with the increased blood flow requirements which are part of any athletic endeavour, a phenomenon called ‘athlete’s heart’. One of the major causes of sudden death is a heart disorder known as hypertrophic cardiomyopathy, where the heart pathologically enlarges or dilates, and it is very difficult on most screening tests to tell the difference between athlete’s heart and hypertrophic cardiomyopathy, with several folk diagnosed as having the latter and prevented from doing sport when their heart was in fact ‘normally’ enlarged as a result of their sport participation, rather than pathologically. 
A relevant study of elite athletes in Australia by Maria Brosnan and colleagues found that of 1197 athletes tested using ECG-level heart tests, 186 were found to have concerning ECG results (using updated ECG pathology criteria this number dropped to 48), but after more technically advanced testing of these concerning cases, only three athletes were found to have heart pathology that required them to stop their sport participation, which are astonishing figures from a potential false-positive perspective. Such false-positive tests can result in potential loss of future sport-related earnings or other benefits of sport participation.
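Using the figures quoted from the Brosnan study, one can compute what fraction of the ‘concerning’ ECG results actually turned out to be relevant pathology, which makes the false-positive burden explicit:

```python
# Figures as quoted in the text from the Brosnan et al. study of Australian
# elite athletes: 1197 screened, 186 flagged by ECG (48 under updated
# criteria), and only 3 confirmed on advanced testing.
screened, flagged_old, flagged_updated, true_pos = 1197, 186, 48, 3

# Of every athlete flagged, what fraction actually had relevant pathology?
ppv_old = true_pos / flagged_old          # ~1.6%: ~98% of flags were false alarms
ppv_updated = true_pos / flagged_updated  # ~6.3% under the updated ECG criteria
flag_rate = flagged_old / screened        # ~15.5% of all athletes flagged initially
```

Even with the updated criteria, more than nine out of ten ‘concerning’ results were ultimately benign, which is the core of the false-positive concern raised above.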

Beyond false-negative and false-positive tests, there are a number of other factors which ensure that mass athlete screening remains controversial. For example, Erik Solberg and colleagues reported that while the majority of athletes were happy to undergo ECG and other screening, 16% of football players were scared that the pre-screening would have consequences for their own health, 13% were afraid of losing their licence to play football, and 3% experienced overt distress during the pre-screening itself because of undergoing the tests per se. The issue of civil liberties versus state control therefore needs to come into consideration in debates about ‘blanket’ athlete screening, if it is enacted. While most athlete screening programs and debate focus on heart problems, there are a number of other, non-cardiac causes of sudden death in athletes, such as exercise-induced anaphylaxis (an acute allergic response exacerbated by exercise participation), exercise-associated hyponatremia, exertional heat illness, intracranial aneurysms and a whole host of other clinical disorders, and the debate is further complicated by the question of whether these ‘other’ disorders should be included in the screening process. Furthermore, most screening programs focus on young athletes, while a large number of older folk begin doing sport at a later age, often after a long period of sedentary behaviour, and these older ‘new’ or returning sport enthusiasts are surely at an even higher risk of heart-related morbidity or mortality during exercise, so one needs to consider whether screening should incorporate such folk too. 
However, whether there should be older-age-specific screening for a variety of clinical disorders is as hotly debated and controversial as young athlete screening is, and adding screening for exercise-specific potential issues surely complicates the matter to an even greater degree, even if an argument can be made that it is needed.

In summary, therefore, screening of athletes for clinical disorders that may harm or even kill them during participation in the sport they perform is still a very controversial area of both legislation and practice. There is an emotional pain deep in the ‘gut’ each time one hears of someone dying in a race, and a feeling as a clinician or person that one should do more, or that more should be done, to ‘protect them from themselves’ using screening as the tool to do so. But given the low cost/benefit ratio from both a financial and a ‘pickup’ perspective, it is not clear whether making a country-wide decision to conduct athlete screening is not an example of both screening and diagnostic creep, or whether athlete screening satisfies Wilson’s criteria to any sufficient degree. If I was a government official, my answer to whether I would advocate country-wide screening would be no, based on the low cost/benefit ratio. If I was a member of a medical health association, to this same question I would answer yes, both from an ethical and a regulatory perspective, as long as my association did not have to foot the bill for it. If I was head of a sport governing body, I would say yes, to protect the governing body’s integrity and to protect the athletes I governed, as long as I did not have to foot the bill for it. If I was a clinical researcher, I would say no, as we do not know enough about the efficacy of athlete screening and because there is too high a level of false-positive and false-negative results. If I was a sports medicine doctor I would say yes, as this would be my daily job, and I would benefit financially from it. If I was an athlete, I would be ambivalent, saying yes from a self-protection perspective, but no from a job and income protection perspective. 
If I was the father of a young athlete, I would say yes, to be sure my child was safe and would not be harmed by playing sport, but I would also worry about the psychological and social consequences if he or she were prohibited from playing sport as a result of a positive heart or other clinical screening test. It is in these conflicting answers I give when casting myself in these different roles (and I am sure each of you reading this article would give a similarly wide array of responses) that the controversy in athlete screening perhaps originates, and what will always make it contentious. I do think that if, as a newly qualified clinician back in our paddling days, I had tested my great friend Daniel Conradie’s heart function, found something worrying, and suggested he stop paddling because of it, he would probably have told me to ‘take a hike’ and continued paddling even with such knowledge. I am sure as a young athlete I would have done the same if someone had told me they were worried about something in my health profile but were not one hundred percent sure it would have a negative future consequence for my sporting activity and life prospects. Athlete screening tests, and decisions related to them, will almost always be about chance and risk, rather than certainty and conclusive determination of outcomes. To race or not to race, based on a chance of perhaps being damaged by racing, or even dying, given the outcome of a test that warns you but may be either false-positive or false-negative: that is the question. What would you do in such a situation, as an athlete, as a governing body official, or as a legislator? It is something to ponder that doesn’t seem to have an easy answer, no matter how tragic it is to see someone young dying while doing what they love best.

The Sensation Of Fatigue – A Complex Emotion Which Is Vital For Human Survival

After a couple of weeks back at work after a great Christmas season break, I have noticed this week a greater level of fatigue than I normally ‘feel’ at the end of a routine working week. After one of the hottest December months on record in my current home town, where temperatures for a while hovered consistently around forty degrees Celsius, we have had a wonderful rainy, cool period, and I have noticed that I feel less fatigued in the cooler environment, and routine daily activities seem ‘easier’ to perform than when it was excessively hot. As part of a New Year’s resolution ‘action plan’ to improve my level of fitness, I have increased my level of endurance exercise, and as always have enjoyed the sensation of fatigue I feel towards the end of each long (though I know that ‘long’ is relative when compared to younger, fitter folk) bike ride I do as part of this ‘fitness’ goal. All of these got me thinking about the sensation of fatigue, an emotional construct which I spent a great many years of my research career trying to understand, and which is still very difficult to define, let alone to work out its origins and the mechanisms by which it is elicited in our physical body structures and mental brain functions.

As described in these three very different examples from my own life, fatigue is experienced by all folk on a regular basis, in a variety of different conditions and activities. Perhaps because of this, there are many different definitions of fatigue. In clinical medical practice, fatigue is defined as a debilitating consequence of a number of different systemic diseases (or, paradoxically, of treatment by a variety of different drugs) or nutritional deficits. In exercise physiology, fatigue is defined as an acute impairment of exercise performance, which leads to an eventual inability to produce maximal force output as a consequence of metabolite accumulation or substrate depletion. In neurophysiology, fatigue is defined as a reduction of motor command from the brain to the active muscles, resulting in a decrease in force or tension as part of a planned homeostatic process to prevent the body from being damaged by too high a level of activity or by too prolonged activity. In psychology, fatigue is defined as an emotional construct – a conscious ‘sensation’ generated by the cognitive appraisal of changing body or brain physiological activity, which is influenced by the social environment in which the activity changes occur, and by the mood status, temperament and background of the person ‘feeling’ these physiological changes. It will be evident from all of these different definitions how complex an ‘entity’ / functional process fatigue is, and how hard it is even for experts in the field to describe to someone asking about it what fatigue is, let alone to understand it from a research perspective.

A number of different physical factors have been related to the development of the sensation of fatigue we all ‘feel’ during our daily life. During physical activity, it has been proposed that changes in the body related to the increased requirements of the physical exertion being performed cause the sensation of fatigue to ‘arise’. These include increased heart rate, increased respiratory rate, increased acid ‘build up’ in the muscles, reduced blood glucose or muscle or liver glycogen, or temperature changes in the body, particularly increased heat build-up – though for each study that shows one of these factors is ‘causal’ of the sensation of fatigue, one can find a study that shows that each of these specific factors is not related to the development of the sensation of fatigue. It has also been proposed that changes in the concentration of substrates in the brain structures associated with physical or mental activity are related to the sensation of fatigue – such as changes in neurotransmitter levels (for example serotonin, acetylcholine, glutamate), or changes in the nutrients supplied to the brain such as glucose, lactate or branched chain amino acids. But, again, for each study whose findings support these hypotheses, there are studies that refute such suggestions. 
It has also been suggested that a composite ‘aggregation’ of changes in all these body and brain factors may result in the development of the sensation of fatigue, via some brain process or function that ‘valences’ each in a fatigue ‘algorithm’, or via intermediate sensations such as the sensation of breathlessness associated with increased ventilation, the sensation of a ‘pounding’ heart from cardiac output increases, the sensation of being hot and sticky and sweating which result from temperature increases in the body, and / or the sensation of pain in muscles working hard, all of which are themselves ‘aggregated’ by brain structures or mental functions to create the complex sensation we know and describe as fatigue.

Which physical brain structures are involved in the creation of the sensation of fatigue is still not known, and given the complexity of the factors involved in its generation, as described above, large areas of the brain and a number of different brain systems are likely to be involved – the motor cortex as muscle activity is often involved, the sensory cortex as signals from changes in activity in numerous body ‘parts’ and functions are ‘picked up’ and assimilated by the brain, the frontal cortex as cognitive decision making on the validity of these changes and the need for potential changes in activity as a result of this ‘awareness’ of a changed state is required, the hippocampus / amygdala region as the current changes in physiological or mental activity must be ‘valenced’ against prior memories of similar changes in the past in order to make valid ‘sense’ of them as they currently occur, and the brainstem as this is the area where ventilation, heart function and a variety of other ‘basic’ life maintaining functions are primarily controlled, for example, amongst many other potential brain areas. We don’t know how the function of different brain areas is ‘integrated’ to give us the conscious ‘whole’ sensation we ‘feel’, and until we do so, it is difficult to understand how the physical brain structures ‘create’ the sensation of fatigue, let alone the ‘feeling’ of it.

How the mental ‘feeling’ of fatigue is related to these physical body and brain change ‘states’ is also challenging for us research folk to understand. Clearly some ‘change’ in structures, baseline physical values or mental states, induced by whatever drives the fatigue process, be it physical or mental exertion or illness, is required for us to ‘sense’ these changes and for our brain and mental functions to ‘ascribe’ the sensation of fatigue to them. It has previously been shown that the sensation of fatigue which arises during exercise is related to the distance to be covered, and increases as one gets closer to the finish line. While this sounds obvious, as one would expect the body to become more ‘changed’ the longer one exercises, it has also been shown that when folk run at the same pace for either five or ten kilometres, the fatigue rating they give at the 4 km mark of the 5 km run is higher than at the 4 km mark of the 10 km run, despite their pace being identical in both, which is ‘impossible’ to explain physiologically. This suggests that folk ‘set’ their perceptual apparatus differently for the 5 and 10 km runs based on how far they have to go (what H-V Ulmer described as teleoanticipation), changing the ‘gain’ of the relationship between the signals they get from their body and the distance they plan to cover. Two great South African scientists, Professor Ross Tucker of the University of the Free State and Dr Jeroen Swart of the University of Cape Town, have expanded on this by suggesting that there is a perceptual ‘template’ for the sensation of fatigue in the brain: the sensation of fatigue is ‘created’ in an organized, pre-emptive ‘way’ by mental / cognitive processes, and is ‘controlled’ by this template depending on the distance and / or duration of a sporting event. 
If something unexpected happens during an event, like a sudden drop in temperature, or a competitor that goes faster than expected, this will create an unexpected ‘change’ in signals from the body and requirements of the race, and the sensation of fatigue will become more pronounced and greater than what is expected at that point in the race, and one will slow down, or change plans accordingly. Ross and Jeroen’s fascinating work show how complex the mental component of the sensation of fatigue and its ‘creation’ by brain structures is.
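The teleoanticipation ‘template’ idea above can be sketched as a toy model. To be clear, the function and its linear scaling are my own invention for illustration, not Tucker and Swart’s actual model; the only assumption carried over from the text is that the conscious fatigue rating is scaled to the fraction of the planned distance completed, rather than to the absolute physiological disturbance:

```python
def anticipated_fatigue_rating(distance_covered_km, planned_distance_km, max_rating=10):
    """Toy teleoanticipation model: the conscious fatigue rating tracks the
    fraction of the *planned* distance completed, so the same absolute
    distance feels harder when the planned total is shorter."""
    fraction_complete = distance_covered_km / planned_distance_km
    return max_rating * fraction_complete

# Same pace, same 4 km covered, different planned distances:
rating_5k = anticipated_fatigue_rating(4, 5)    # near the end of a 5 km run
rating_10k = anticipated_fatigue_rating(4, 10)  # only 40% through a 10 km run
```

Under this sketch the 4 km rating in the 5 km run is double that in the 10 km run despite identical physiology, reproducing the ‘impossible to explain physiologically’ finding described above: the perceptual ‘gain’ is set by how far one still plans to go.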

There are multiple other factors involved in the generation of the sensation of fatigue, or in its modulation. I did my medical PhD (an MD) on chronic fatigue syndrome, which developed in athletes who pushed themselves too hard until they eventually physically ‘broke down’ and developed the classical symptoms associated with chronic fatigue, where they felt fatigued even when not exercising, and the fatigue was not relieved by prolonged periods of rest. These athletes clearly pushed themselves ‘through’ their fatigue symptoms on a regular basis until they damaged themselves. As one of the pioneering and world-leading experts in the fatigue field, Professor Sam Marcora, has pointed out, one’s ambitions, drives and ‘desire for success’ are a strong indicator both of the level of fatigue symptoms folk will ‘feel’, and of how they resist these symptoms. In the chronically fatigued folk we studied, something in their psychological makeup induced them either to continue exercising constantly despite the symptoms of fatigue, or made them ‘feel’ less fatigue for the same work-rate (assuming their fitness levels and physical capacity were similar) than the vast majority of folk, who do not experience this syndrome. To make the matter even more complex, these folk with chronic fatigue described severe sensations of fatigue at rest, but when we put them on a treadmill, some of them paradoxically felt less, rather than more, fatigue when running as compared to resting, and their extreme sensations of fatigue returned (to an even greater degree) in the rest period after they completed the running bout. Furthermore, if one gives stimulants such as caffeine to folk when they exercise, this appears to reduce their ‘awareness’ of the sensations of fatigue. 
Sam is currently doing some interesting work looking at the effect of caffeine on attenuating the sensation of fatigue – as did Dr Angus Hunter several years ago – and thereby using it as a ‘tool’ to get folk to exercise more ‘easily’, as they appear to ‘feel’ fatigue less after ingesting caffeine. All this shows again that the sensation of fatigue is both a very complex emotion and a very ‘labile’ one at that, and can change, and be changed, by both external factors such as these stimulants, and internal factors such as one’s drive or ‘desire’ to resist the sensations of fatigue as they arise, or even to ‘block them out’ before they are consciously generated. More research, and very advanced research techniques, will be required for us to clearly understand how such potential ‘blockages’ of the sensation of fatigue happen, if they indeed occur.

The sensation of fatigue is therefore an immensely complex ‘derivative’ of a number of functions, behaviours and psychological ‘filters’, and what we finally ‘feel’ as fatigue is ‘more’ than a simple one-to-one description of some underlying change in our physical body and brain that requires adjustment or attenuation. The sensation of fatigue is clearly a protective phenomenon designed to slow us down when we are exercising too hard or too long in a manner that may damage our body, or when we are working too hard or too long and need a ‘time out’, or when the environment in which one is performing activities of daily living may be harming one. But there are usually more complex relationships and reasons for the occurrence of the sensation of fatigue than what on the surface may appear to be the case. For example, the increase in work-related fatigue I feel is surely related not just to the fact that it is the end of a busy week – it is perhaps also related to a ‘deep’ yearning to be back on holiday, or to the fact that my mind is not yet ‘hardened’ to my routine daily work requirements, or has been ‘softened’ by the holiday period so that I now feel fatigue ‘more’ than is usual. In a few weeks’ time this will surely be attenuated as the year progresses and my weekly routines, which have been ‘honed’ over many years of work, are re-established, and I will feel the ‘usual’ rather than excessive symptoms of fatigue, as always, on Thursdays and Fridays. The extreme fatigue I felt during the very hot December month may also be related to some subconscious ‘perception’ that my current living environment is perhaps not optimal for me, lifestyle-wise, on a long-term basis, and this ‘valenced’ how I perceived the environment last month: as one of extreme heat and therefore extreme (and greater than expected) fatigue. 
And the fact that I am ‘enjoying’ the sensations of fatigue I feel when exercising may mean that I am perhaps not pushing my exercise bouts as hard as I could, and need to go harder, or that my mind and body are setting a pace that feels enjoyable, either so that I continue doing it, or to protect me from a potential heart attack if I go harder. All of these may be the case or, equally, all of these could be mere speculation – the science folk in the area of fatigue have a big mountain to climb, and many more hours in the lab, before we more fully understand the complex emotion which the sensation of fatigue is, and how and from where it arises and is controlled.

A time may come when Sam Marcora and other excellent research colleagues like him find the ‘magic bullet’ that will ‘banish’ the sensation of fatigue, and we will be able to work harder and exercise longer because of it. But then would the cold drink after exercise taste so good, or the feeling of accomplishment one gets at the end of a long exercise bout as a result of resisting the sensation of fatigue long enough to achieve one’s goals for the particular exercise bout one has just completed still occur? This is something to ponder on, when fatigued, as I am now after two hours of writing, as I sip my cup of coffee, and wait for my ‘energy’ to return so I can begin the next task of a routine Sunday, whether it be cycling with the kids, walking the dog, or any other fatigue-removing activity as I prepare for the next fatiguing cycle which is the work and sport week ahead!

Control of Movement And Action – Technically Challenging Conceptual Requirements And Exquisite Control Mechanisms Underpin Even Lifting Up Your Coffee Cup

During the Christmas break we stayed in Durban with my great old friend James Adrain, and each morning I would, as usual, wake around 5.00 am, make a cup of coffee and sit outside in his beautiful garden reflecting on life and its meaning before the rest of the team awoke and we set off on our daily morning bike ride. One morning I accidentally bumped my empty coffee mug, and as it headed to the floor my hand involuntarily reached out and grabbed it, saving it just before it hit the ground. During the holiday I also enjoyed watching a bit of sport on TV in the afternoons to relax after the day’s festivities, and briefly saw highlights of the World Darts Championship, which was on the go, and was struck by how the folk competing seemed able, with such ease and with apparently similar arm movements for each throw, to hit almost exactly what they were aiming for, usually the triple twenty. When I got back home, I picked up from Twitter a fascinating article on movement control posted by one of Sport Science’s most pre-eminent biomechanics researchers, Dr Paul Glazier, written by a group of movement control scientists including Professor Mark Latash, who I regard as one of the foremost innovative thinkers in the field of the last few decades. All of these got me thinking about movement control, and about the exquisite control mechanisms in the brain and body which allowed me, in an instant, to plan and enact a movement strategy to grab the falling mug before it hit the ground, and which allowed the Darts Championship competitors to guide their darts, using their arm muscles, with such accuracy to such a small target a fair distance away from them.

Due to the work over the last few centuries of a number of great movement control researchers, neurophysiologists, neuroscientists, biomechanists and anatomists, we know a fair bit about the anatomical structures which regulate movement in the different muscles of the body. In the brain, the motor cortex is the area where command outflow to the different muscles is directly activated, and one of the highlights of my research career was when I first used transcranial magnetic stimulation, working with my great friend and colleague Dr Bernhard Voller, where we were able to make muscles in the arms and legs twitch by ‘firing’ magnetic impulses into the motor cortex region of the brain by holding an electromagnetic device over the scalp above this brain region. The ‘commands for action’ from the motor cortex travel to the individual muscles via motor nerves, using electrical impulses in which the command ‘code’ is supplied to the muscle by trains of impulses of varying frequency and duration. At the level of the individual muscles, the electrical impulses induce a series of biochemical events in and around the individual muscle fibres which cause them to contract in an ‘all or none’ way, with the amount of force output from the muscle fibre being that ‘ordered’ by the motor cortex in response to behavioural requirements initiated in brain areas ‘upstream’ of the motor cortex, such as one’s eyes picking up a falling cup and ‘ordering’ reactive motor commands to catch the cup.
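How a train of ‘all or none’ impulses can nevertheless produce graded force is worth a small illustration. The Python sketch below is a cartoon, not a physiological model – the twitch shape, amplitude and time constants are all invented – but it shows the principle: summing individual twitch responses to an impulse train means that a higher firing frequency fuses the twitches into a greater peak force, which is one way the ‘code’ of impulse frequency and duration can be read out as force.

```python
import math

def twitch(t, amp=1.0, t_contract=0.05):
    """Single all-or-none twitch: force rises then decays (toy alpha function)."""
    if t < 0:
        return 0.0
    return amp * (t / t_contract) * math.exp(1 - t / t_contract)

def force_from_impulse_train(freq_hz, duration=0.5, dt=0.001):
    """Peak force from summing twitch responses to impulses at a given frequency."""
    impulse_times = [i / freq_hz for i in range(int(duration * freq_hz))]
    times = [i * dt for i in range(int(duration / dt))]
    forces = [sum(twitch(t - ti) for ti in impulse_times if ti <= t) for t in times]
    return max(forces)

# Higher firing frequency -> twitches summate into greater peak force
for f in (5, 20, 50):
    print(f"{f:3d} Hz -> peak force {force_from_impulse_train(f):.2f}")
```

Running this shows peak force climbing steadily with firing frequency, the toy equivalent of unfused versus fused tetanus.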
So even though the pathway structures from the brain to the muscle fibres are more complex than I have described here – there is a whole host of ‘ancient’ motor pathways from ‘lower’ brainstem areas of the brain which also travel to the muscle or synapse with the outgoing motor pathways, whose functions appear to be redundant to the main motor pathways and may still exist as a relic from the days before our cortical ‘higher’ brain structures developed – we do know a fair bit about the individual motor control pathways, how they structurally operate, and how nerve impulses pass from the brain to the muscles of the body.

However, like everything in life, things are more complex than what is described above, as even a simple action like reaching for a cup, or throwing a dart, requires numerous different muscles to fire either synchronously and/or synergistically, and indeed just about every muscle in the body has to alter its firing pattern to allow the body to move, the arm to stretch out, the legs to stabilize the moving body, and the trunk to sway towards the falling cup in order to catch it. Furthermore, each muscle itself has thousands of different muscle fibres, all of which need to be controlled by an organized ‘pattern’ of firing even within a single whole muscle. This means that there needs to be a coordinated pattern of movement across a number of different muscles and the muscle fibres in each of them, and we still have no idea how the ‘plan’ or ‘map’ for each of these complex patterns of movement is generated, where it is stored in the brain (as what must be a complex algorithm of both spatial and temporal characteristics, recruiting not only the correct muscles but also the correct sequence of their firing from a timing perspective to allow co-ordinated movement), and how a specific plan is ‘chosen’ by the brain as the correct one from what must be thousands of other complex movement plans. To make things even more challenging, it has been shown that each time one performs a repetitive movement, such as throwing a dart, a different synergy of muscles and arm movement actions is used for each throw, even if to the ‘naked’ eye the movement of the arm and fingers of the individual throwing the dart seems identical each time.

Perhaps the scientist who made the most progress in solving these hugely complex and still not well understood control processes was Nikolai Bernstein, a Russian scientist working out of Moscow between the 1920’s and 1960’s, whose work was not well known outside of Russia because of the ‘Iron Curtain’ (and perhaps Western scientific arrogance) until a few decades ago, when research folk like Mark Latash (who I regard as the modern day equivalent of Bernstein both intellectually and academically) translated his work into English and published it as books and monographs. Bernstein was instructed in the 1920’s to study movement during manual labour in order to enhance worker productivity, under the instruction of the communist leaders of Russia during that notorious epoch of state control of all aspects of life. Using cyclographic techniques (a type of cinematography) he filmed workers performing manual tasks such as hitting nails with hammers or using chisels, and from these observations developed two brilliant movement control theories (he actually developed quite a few more than the two described here), which, had he been alive and living in a Western country, would surely have led to him receiving a Nobel prize for his work. The first thing he realized was that all motor activity is based on ‘modelling of the future’. In other words, each significant motor act is a solution (or attempt at one) of a specific problem which needs physical action, whether hitting a nail with a hammer, or throwing a dart at a specific area of a dartboard, or catching a falling coffee cup. The act which is required, which in effect is the mechanism through which an organism is trying to achieve some behavioural requirement, is something which is not yet, but is ‘due to be brought about’.
Bernstein suggested that the problem of motor control and action therefore is that all movement is the reflection or model of future requirements (somehow coded in the brain), and a vitally useful or significant action cannot be either programmed or accomplished if the brain has not created pre-requisite directives in the form of ‘maps’ of the future requirements which are ‘lodged’ somewhere in the brain. So all movement is in response to ‘intent’, and for each ‘intent’ a map of motor movements which would solve this ‘intent’ is required – a concept which is hard enough to get one’s mind around, let alone working out how the brain achieves this or how these ‘maps’ are stored and chosen.

The second of Bernstein’s great observations was what is known as motor redundancy (Mark Latash has recently suggested that redundancy is the wrong word, and that it should instead be known as motor abundancy), or the ‘inverse dynamics problem’ of movement. When looking at the movement of the workers hitting a nail with a hammer, he noticed that although they always hit the nail successfully, the trajectory of the hammer through the air was different each time, despite the final outcome always being similar. He realized that each time the hammer was used, a different combination of arm motion ‘patterns’ was used to get the hammer from its initial start place to when it hit the nail. Further work showed that each different muscle in the arm was activated differently each time the hammer was guided through the air to the nail, and each joint moved differently for each hammer movement too. This was quite a mind-boggling observation, as it meant that each time the brain ‘instructed’ the muscles to fire in order to control the movement of the hammer, it chose a different ‘pattern’ or ‘map’ of coordinative muscle activation of the different muscles and joints in the arm holding the hammer for each hammer strike of the nail, and that for each planned movement, therefore, thousands of different ‘patterns’ or ‘maps’ of coordinated muscle movement must be stored, or at least available to the brain, with a different one apparently chosen each time the same repetitive action is performed. Bernstein therefore realized that there is a redundancy, or abundancy, of ‘choice’ of movement strategies available to the brain for each movement, let alone for complex movements involving multiple body parts or limbs. From an intelligent control systems perspective, this is difficult to get one’s head around, and how the ‘choice’ of ‘maps’ is made each time a person performs a movement is still a complete mystery to movement control researchers.
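The geometric root of this redundancy is easy to demonstrate. In the Python sketch below (the arm, its link lengths and the target are all invented purely for illustration), a planar three-link ‘arm’ is asked to place its endpoint on a single target, and random sampling turns up many entirely different joint configurations that all solve the same task – Bernstein’s abundancy problem in miniature.

```python
import math
import random

random.seed(0)
L1 = L2 = L3 = 1.0            # three links of unit length (hypothetical arm)
target = (1.5, 1.0)           # endpoint the 'hand' must reach

def endpoint(q1, q2, q3):
    """Forward kinematics of a planar 3-link arm (joint angles in radians)."""
    x = L1*math.cos(q1) + L2*math.cos(q1+q2) + L3*math.cos(q1+q2+q3)
    y = L1*math.sin(q1) + L2*math.sin(q1+q2) + L3*math.sin(q1+q2+q3)
    return x, y

# Randomly sample joint configurations and keep those whose endpoint lands
# within a small tolerance of the target: the task under-constrains the joints,
# so many very different configurations all 'hit the nail'.
solutions = []
for _ in range(200_000):
    q = [random.uniform(-math.pi, math.pi) for _ in range(3)]
    x, y = endpoint(*q)
    if math.hypot(x - target[0], y - target[1]) < 0.05:
        solutions.append(q)

print(f"{len(solutions)} distinct joint configurations reach the same target")
```

Even with only three joints the same endpoint admits a wide spread of shoulder angles; a real arm, with many more joints and muscles, multiplies this abundancy enormously.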

Interestingly, one would think that with training, one would reach a situation where there would be less motor variability, and a more uniform pattern of movement when performing a specific task. In contrast, the opposite appears to occur, and the variability of individual muscle and joint actions in each repetitive movement appears to be maintained or even increased with training, perhaps as a fatigue regulating mechanism to prevent the possibility of injury occurring from potentially over-using a preferentially recruited single muscle or muscle group. Furthermore, the opposite appears to happen after injury or illness: after, for example, one suffers a stroke or a limb ligament or muscle tear, the pattern of movements ‘chosen’ by the brain, or available to be chosen, appears to be reduced, and similar movement patterns occur during repetitive muscle movement after such an injury. This is also counter-intuitive in many ways, and is perhaps related to some loss of ‘choice’ function associated with injury or brain damage, rather than damage to the muscles per se, though more work is needed to understand this conceptually, let alone functionally.

So, therefore, the simple actions which make up most of our daily life, appear to be underpinned by movement control mechanisms of the most astonishing complexity, which we do not understand well (and I have not even mentioned the also complex afferent sensory components of the movement control process which adjust / correct non-ballistic movement). My reaction to the cup falling and me catching it was firstly a sense of pleasure that despite my advancing age and associated physical deterioration I still ‘had’ the capacity to respond in an instant and that perhaps the old physical ‘vehicle’ – namely my body – through which all my drives and dreams are operationalized / effected (as Freud nicely put it) still works relatively okay, at least when a ‘crisis’ occurs such as the cup falling. Secondly I felt the awe I have felt at many different times in my career as a systems control researcher at what a brilliant ‘instrument’ our brains and bodies as a combination are, and whatever or whoever ‘created’ us in this way made something special. The level of exquisite control pathways, the capacity for and of redundancy available to us for each movement, the intellectual capacity from just a movement control perspective our brain possesses (before we start talking of even more complex phenomena such as memory storage, emotional qualia, and the mechanisms underpinning conscious perception) are staggering to behold and be aware of. Equally, when one sees each darts player, or any athlete performing their task so well for our enjoyment and their success (whether darts players can be called ‘athletes’ is for another discussion perhaps), it is astonishing that all their practice has made their movement patterns potentially more rather than less variable, and that this variability, rather than creating ‘malfunction’, creates movement success and optimizes task outcome capacity and performance.

It is in moments such as those I had sitting in a beautiful garden in Durban in the early morning of a holiday period, reflecting on one’s success in catching a coffee cup, that a sense of wonder is created at the life we have and live, and at what a thing of wonder our body is, with its many still mystical, complex, mostly concealed control processes and pathways regulating even our simple movements and daily tasks. In each movement we perform are concealed a prior need or desire, potentially countless maps of prospective plans for it, and millions of ways it can be actualized, from which our brain chooses one specific mechanism and process. There is surely magic in life, not just all around us but in us too, which we scientist folk battle so hard to understand, but which is to date still impenetrable in all its brilliance and beauty. So with a sigh, I stood up from the table, said goodbye to the beautiful garden and great friends in Durban, and the relaxing holidays, and returned to the laboratory at the start of the year to try and work it all out again, knowing that I will probably be back in the same place next year, reflecting on the same mysteries, with the same awe of what has been created in us, no further along in understanding it, and still pondering how to work it all out – though next year I will be sure to be a bit more careful where I place my finished coffee cup!

Elite Athlete Performance And Super-Achievers In Sport – Is The Essential Ingredient For Success An Inner Mongrel Or Unrequited Child Rather Than Purely Physical Capacity

Above my desk at home is a picture of my great University friend, Philip Lloyd, and I in our paddling days many years ago completing a race, shortly before he switched to mountain climbing, a sport where he achieved great success and pioneered a number of astonishingly difficult routes in a very short space of time before tragically falling to his death when a safety rope failed on a high mountain in Patagonia. Each year I enjoy watching the Tour de France and I am awed by the cyclists’ capacity to sustain pain for so long in the high mountain stages, and their capacity to train for huge amounts of time on a daily basis to ensure they get to the race in peak condition. This week I read about the extent of doping in sport, and wondered not only how the dopers appeared to have gotten away with it for so long, but how so many athletes could have and do take drugs when there is so much evidence of how potentially harmful performance enhancing drugs can be to the bodies of those that take them. All of this got me thinking about why people push themselves to such limits to win races, and what ‘separates’ these race winners and super-achievers in sport, including those that in order to win become dope takers, not only from their less successful peers, but from the vast majority of the human population, to whom cycling a few kilometres would be regarded as a big effort and achievement, and who would probably prefer to have a cup of coffee while reading a newspaper as their activity of preference.

It is clearly necessary for folk who are successful in sport to have the ‘right’ physical characteristics in order to be able to compete at the highest level, be it the right body shape for their chosen sport, or a big lung capacity, or great muscular strength, or good balance or agility. But you can have all of these, and yet if one doesn’t have the required amount of ‘will’ to push one’s body, no matter how specially ‘designed’ it is, to train for hours on a daily basis or to push oneself to near collapse during a race itself, one will never be a ‘winner’ during athletic events. Where this ‘will’ comes from, and how it is stimulated to be maintained in the face of extreme hardship, is still not clearly understood or determined. One of the biggest ‘wow’ moments of my science career was when, after doing competitive sport for many years, and studying as an academic how sport was regulated for many years after that, I realized after spending thousands of hours reading about drive and motivation theories that performing sport is an essentially abnormal activity / thing to do. That might sound strange, but this ‘wow’ moment was underpinned by the knowledge that our bodies and brain have very well defined protective mechanisms, both physical and psychological, that protect us from damage and resist any effort to get out of our ‘comfort zones’. In the physical sciences these processes are called homeostatic mechanisms, and in psychology they are described as being part of the ‘constancy principle’. The concept underlying homeostasis was described by Claude Bernard in the 1860’s as the tendency to maintain the body in a state of relative equilibrium, well away from the limits of the body’s absolute capacity, using protective physiological mechanisms (the term ‘homeostasis’ itself was coined later by Walter Cannon).
The constancy principle was developed by Sigmund Freud in the early 1900’s as an offshoot of his ‘pleasure principle’ theory and central to the principle is the concept that the human being is a biological organism which strives to maintain its level of ‘excitation’ at an always ‘comfortable’ level. To achieve this goal, Freud suggested that humans avoid or seek to diminish any external stimuli which are likely to prove excessive or which will threaten our internal state of equilibrium, using mechanisms in either the conscious or subconscious mind. So when we get up and exercise, which increases our body’s metabolic rate tremendously, and mentally requires effort, we are doing something that is essentially ‘anti-homeostatic’, and which would naturally initiate a number of mechanisms designed to make us resist the desire to continue exercising, or even stop completely. The symptom of fatigue would be an obvious example of one such protective mechanism which would be an overt part of or result of either physical homeostatic or psychological constancy mechanisms.
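The logic of such protective mechanisms is that of a negative feedback loop, and a toy sketch makes the idea concrete. The Python cartoon below is emphatically not a physiological model – the set point, gain and disturbance values are all invented for illustration – but it shows a regulated variable being pushed away from its set point by an ‘exercise’ disturbance and pulled back by a proportional corrective response, which is homeostasis in its simplest algorithmic form.

```python
def regulate(setpoint=37.0, gain=0.3, steps=60):
    """Toy negative-feedback loop: a disturbance pushes the regulated
    variable away from its set point; a proportional corrective response
    pulls it back (a cartoon of homeostasis, not a physiological model)."""
    value = setpoint
    history = []
    for t in range(steps):
        disturbance = 1.0 if 10 <= t < 30 else 0.0   # the 'exercise' bout
        correction = gain * (setpoint - value)        # protective response
        value += disturbance * 0.2 + correction
        history.append(value)
    return history

h = regulate()
print(f"peak deviation: {max(h) - 37.0:.2f}, final value: {h[-1]:.2f}")
```

While the disturbance lasts, the variable settles at a raised plateau where disturbance and correction balance; once the ‘exercise’ stops, the feedback drags it back to the set point – the same shape of behaviour the constancy principle ascribes to our drive to return to a ‘comfortable’ level of excitation.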

So why do folk do sport, then, if there are all these protective mechanisms? A possible reason may be ‘higher order’ homeostatic / constancy mechanisms, such as material or social rewards which may result from participating in sport, which are beneficial for the long term future of the individual and are ‘valenced’ as being more ‘important’ than the short term risk of being out of one’s safety ‘zone’ when doing sport. For example, being fitter would perhaps be thought to make one more desirable to a potential mate from a biological offspring choice perspective, or winning a race could lead to financial gain which would again make one more socially desirable, and thereby enhance the reproductive capacity of the individual participating in sport because of the perceived increased ‘esteem’ associated with being a winner. This is a very biological theory of sport and race participation, and suggests that doing sport, racing and winning are beneficial from a Darwinian perspective, and that involving oneself in such events would be part of some deep-rooted ‘propagation of the species’ biological drive or motive.

The problem with this biological theory is that it does not explain why people push themselves to the level of collapse in sport, or why folk continue competing even with warning signs of impending physical catastrophe such as angina (chest pain associated with heart disease), or take drugs in order to improve their chances of success, or become dependent on / addicted to sport (there are numerous examples of this and it is an increasing psychopathological problem), all of which would potentially damage rather than enhance one’s future life prospects and therefore also one’s reproductive capacity, so perhaps something ‘deeper’ is involved. Alfred Adler, around the time that Freud proposed the constancy theory, put forward his theory of the inferiority complex, which suggested that performing great feats is related to a sense of inferiority rooted in prior issues, and that one competes or performs events as a mechanism of ‘hiding’ internal perceived inadequacies or short-comings of the self after previous negative experiences (being bullied in one’s youth, or teased about physical weakness in adolescence, for example). Freud suggested that damaging events in one’s youth lead to a state of ‘ego-fragility’, where, in order to ‘block out’ painful experiences from one’s past, one ‘represses’ these damaging memories or experiences and ‘projects’ or externalises these internal conflicts into external drives or desires which are ‘transferred’ onto something or some action that can ‘compensate’ for these early formative issues. Therefore, for example, if one feels one is the cause of one’s parents’ divorce, to compensate one spends the rest of one’s life winning races or doing well at work in order to try and ‘make up’ for the ‘damage’ one perceives one has caused at some deep subconscious level, even if one is not the cause of it, and even if one does not consciously realize that one is doing something for this ‘deep’ reason.
Symptoms and signs of projection and transference include fanatical attachment to projects and goals, envy and dislike for other folk who are successful or receive awards, and falling apart when failing to complete a challenge successfully – all of which are endemic and part of the ‘make-up’ of the sporting world.

If this sounds ‘odd’, or far-fetched, there are some interesting data and information that can be gleaned from athlete autobiographies and academic studies that would suggest that this theory may have a degree of veracity. Dave Collins and Aine Macnamara wrote a very interesting review a few years ago in which they suggested that ‘talent needs trauma’, and described data that would support this concept. For example, academy football players who eventually made it to become elite football players apparently have a greater number of siblings (more competition for parents’ attention) and a three times higher parental divorce rate than peers who did not reach elite level activity. They also suggested that successful footballers come from backgrounds with a higher incidence of single parent families, while rowers commonly reported early childhood departure to boarding school (which Collins and Macnamara, rightly in my mind, suggested would be a ‘natural source of early trauma’). Looking at successful cyclists, information gleaned from their autobiographies or articles written about them describes, for example, that Lance Armstrong’s parents split up not long after he was born and he grew up with a step-father he didn’t like, that Bradley Wiggins’s parents similarly split up when he was young, as did Mark Cavendish’s, and that Chris Froome’s parents also separated when he was young.
If all this described family history for these cycling champions, footballers and rowers is indeed true, it would be supportive of Collins and Macnamara’s suggestion, and perhaps some of these athletes’ drive to succeed (and in Armstrong’s case to the level where he was willing to take drugs to do so) is related to some inner drive created by the challenging conditions of their youth, or is a compensation for it (though of course it can be said that growing up in a divorced family environment may often be easier than doing so in a marriage where there is continuous conflict between parents, or in stifling living conditions).

As I said earlier, us scientist folk are still not completely sure what makes an elite athlete, or why some folk push themselves to extreme levels of physical activity. My friend Phil Lloyd was interesting to me as he had such potential in all aspects of his life (and was such a great person and friend), yet he seemed to have some inner ‘edge’ that made him always restless and always wanting to go ‘higher’ or ‘do more’, and he never seemed completely satisfied with what he had achieved – like a lot of us when we were young, there was always a more dangerous river to paddle down than the one we had just got out of, or a higher mountain that needed to be climbed. While I surely had my own demons in my youth, I remember asking Phil why he climbed the increasingly dangerous routes he was doing before he died. He gave several reasons, but one which always stuck with me was that when he was solo climbing ‘up there’, miles away from help and people, it felt a bit like when he was young and at boarding school as a child, and it helped him work through those memories. I did not understand his answer then, and thought he was perhaps joking, but after a career in science and reading basic psychology texts for many years, his answer eventually made sense to me (and perhaps was the ‘seed’ that led me to write this particular article). There are incredible rewards for those who achieve great success in sport. Those who do well / attain the pinnacle of success in any sport deserve our utmost admiration for what they put themselves through during races and on a daily basis when training. But perhaps there is an element that all this effort, which takes them (and me in my youth) far out of their own ‘constancy’ / homeostatic zones, is in effect in part potentially a compensation for trauma of times past, which creates an ‘inner mongrel’ that refuses to give up until the ‘prize’ is won that will ‘make up’ for that past loss or trauma.
In a way by doing so, perhaps (and hopefully) all these folk by winning enough will gradually attenuate the ‘unrequited child’ which may still reside in them, and reach psychological ‘peace’, and wake up one morning and choose to go and have a cup of coffee in a warm shop rather than ride a bike or kick a football for six hours in the pouring rain, and be happy and be able to feel relaxed when doing so. There is a huge energy cost and price involved over many years if indeed winning anything is related to an inner mongrel that won’t ‘keep quiet’ – and in the case of my great friend Phil, his drive to succeed in his chosen sport perhaps in part led to his death, and we never got the chance to see him reach his full potential, which he possessed in such abundance in all aspects of his work, social and sporting life, and each day I work at home I see the picture of us paddling together and feel a sadness for this lack. Equally though, if there was no inner mongrel and / or unrequited child, would there ever be winners, and would the high mountains of the world ever have been climbed? I’ll ponder that question more later today when I head off with the family to the coffee shop for our weekly Sunday coffee and newspaper routine!

Low Carb High Fat Banting Diets And Appetite Regulation – A Research Area Of Complex Causation Appears To Have Brought Out A Veritable Mad Hatter’s Tea Party Ensemble

Perhaps one of the most astonishing things I have read in my career to date was a recent Tweet apparently written by my own previous lab boss of University of Cape Town days, now many years ago, Professor Tim Noakes. The text of this tweet included ‘Hitler was vegetarian, Wellington (Beef), Napoleon insulin resistant – Did LCHF determine future of Europe’. Tim has, in the last few years, endorsed the Low Carb / High Fat (LCHF) ‘Banting’ Diet as the salvation and ‘holy grail’ of healthy living and longevity, and appears to have recommended that everyone from athletes to children should follow the diet. As part of this diet, if I have heard / read him correctly, sugar (carbohydrate) is the ‘great evil’ and has an addictive capacity, our ancestors lived on a diet high in fat and low in carbohydrates and were as a result, according to Tim, healthier than us contemporary folk, and our current diabetes and obesity epidemics are linked to an increased intake of sugar (but not of fats or proteins, nor simply an absolute increase in caloric intake / portion size) in the last few decades, related to a variety of factors. All this has been astonishing to me, given that for many years when I worked in Tim’s lab, he was a strong proponent of carbohydrates / sugars as the ‘ultimate fuel source’ and wrote extensively on this, and we did a number of trials examining the potential benefits of carbohydrates which were funded by sugar / carbohydrate producing companies. While anyone can have a paradigm shift, this is one of great proportions, and given that I worked closely with Tim for a number of years (we have co-authored more than 50 research papers together, mostly in the field of activity regulation mechanisms), I have found this one, and some statements like the Tweet above, to be, put conservatively, astonishing. So perhaps it would be interesting to look at some of the points raised by the folk that champion the LCHF diet and whether they have any veracity.

Firstly, one of the basic tenets of the diet is that our ancestors in pre-historic times ate a LCHF diet and were healthier because of it. Of course it is almost impossible to say with any clarity what folk ate beyond a few generations back, given that we have to rely, for the period since writing started, on folks’ written observations of what they ate, and before that, on absolutely no empirical evidence at all, apart from sociological speculation. The obvious counter-argument is that life span has increased dramatically in the last few centuries, so while mortality rates are always multifactorial, it is clearly difficult to accept that a diet used in the ancient past was beneficial, or that folk were healthier or leaner in pre-historic days, when they died so much younger than folk do today. As pointed out by Professor Johan Koeslag in my medical training days, based on the Venus of Willendorf, a figurine created around 24000-22000 BC which depicts an obese female, it is as likely that folk back then were obese as that they were thin. But the point is that any argument based on hypotheses of what was done in ancient times is specious, as we just cannot tell with any certainty what folk ate then, and it is likely that folk in ancient times ate whatever they could find, whether animal or plant based, in order to survive.

Based on this ‘caveman’ ideal, as nebulous as it is, the LCHF proponents have suggested that it is more ‘natural’ for the body to ‘run’ on a low carbohydrate diet, and Tim has suggested that athletes will perform better on a LCHF diet. But perhaps one of the best studies that would negate this concept was performed by my old friend and colleague, Dr Julia Goedecke, of which both Tim and I were co-authors. Julia looked at which fuels folks’ metabolisms naturally ‘burnt’ as part of their metabolic profile, and found that there were some folk who were preferential ‘fat burners’ (and would perhaps do well on a high fat diet), some who were preferential ‘carbohydrate burners’ (and would perhaps do best on a high carbohydrate diet), but that the large majority of folk were ‘in between’, and burnt both carbohydrates and fats as their selected fuel. If you are a ‘fat burner’ and ate carbohydrates, you might run into ‘trouble’, as equally if you are a ‘carbohydrate burner’ and ate fats you might run into trouble similarly, but again, most folk ‘burn’ a combination of both, and the obvious inference would be that most folk would do best on a balanced diet (and of course without huge lifelong cohort studies one cannot say what ‘trouble’ either group will run into health-wise).
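For readers curious how a ‘fat burner’ or ‘carbohydrate burner’ is identified in practice, one common approach is indirect calorimetry, estimating fuel oxidation from measured oxygen uptake and carbon dioxide output using Frayn’s classic equations. The sketch below uses purely hypothetical gas-exchange numbers for illustration (and, as the equations conventionally do, neglects protein oxidation); it is not drawn from Julia’s study itself.

```python
def fuel_oxidation(vo2_l_min, vco2_l_min):
    """Estimate whole-body fat and carbohydrate oxidation (g/min) from gas
    exchange, using Frayn's indirect-calorimetry equations (protein neglected).
    VO2 and VCO2 are in litres per minute."""
    fat = 1.67 * vo2_l_min - 1.67 * vco2_l_min
    cho = 4.55 * vco2_l_min - 3.21 * vo2_l_min
    return fat, cho

# Hypothetical resting measurements for two individuals: a low RER
# (VCO2/VO2) suggests a preferential 'fat burner', a high RER a
# preferential 'carbohydrate burner'.
for label, vo2, vco2 in [("fat burner", 0.30, 0.24), ("carb burner", 0.30, 0.29)]:
    fat, cho = fuel_oxidation(vo2, vco2)
    print(f"{label}: RER={vco2/vo2:.2f}, fat={fat:.3f} g/min, cho={cho:.3f} g/min")
```

The same oxygen uptake thus yields very different fuel mixes depending on the respiratory exchange ratio, which is the kind of metabolic profiling that lets one place individuals on the fat-burner-to-carbohydrate-burner spectrum.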

It has also been suggested by Tim and the LCHF proponents that sugars / carbohydrates are highly addictive, and that it is specifically the ingestion of this particular food source that has led to increased levels of obesity and health disorders such as type 2 diabetes in the last few decades. But absolute caloric intake has increased over the last few decades, so a simple increase in portion sizes and overall food ingestion should surely be a prime suspect in the increased levels of obesity described. It is likely also that high fat foods are potentially as ‘addictive’ as sugars / carbohydrates, if they are indeed such, and folk may also be as likely to be addicted to eating per se, rather than specifically addicted to one type of the food they eat. The causation of appetite and the sensation of hunger is an incredibly complex field – a hundred years ago it was apparently suggested that when the walls of an empty stomach rub against each other, the sensation of hunger is stimulated. We have more understanding now of these processes (though still a lot to learn), and the signals controlling hunger are incredibly complex, including hormone signallers arising from the gut and adipose tissue (such as ghrelin and leptin respectively) that travel up to the brain (principally the hypothalamus) and which induce eating focussed behaviour and activity, and these are responsive to a wide variety of food types ingested.
But even suggesting that one type of food, and addiction to it, is the cause of obesity is manifestly absurd, given how many other factors could be suggested to be involved in eating patterns and food choices – for example the social aspect of eating food, the community habits of different populations of folk associated with eating patterns, and the psychological needs and issues associated with eating that go beyond simple fuel requirements and fuel dynamics, let alone the genetics and innate predisposition to obesity and an obese somatotype that some folk inherit from their parents. It is worth noting also that weight gain is not just related to single episodes of food ingestion, and some fantastic work from old colleagues from my time at Northumbria University, Dr Penny Rumbold, Dr Caroline Reynolds and Professor Emma Stevenson, amongst others, has shown that eating habits and weight gain are monitored and adjusted over long time periods in an incredibly complex way, by mechanisms that are not well understood. It is in understanding these long term regulatory mechanisms that the changes in weight we see both in individuals and societies over time will surely be best explained, rather than by ‘blaming’ one specific food group and its marketing to the public. As has been pointed out to me by my old (and much respected) academic ‘sparring partner’, Dr Samuele Marcora, both low carbohydrate and low fat diets can be successful in initiating weight loss – but equally, both types of diet are shown to be very difficult to maintain (as are all diets) – one so often ‘falls off’ diets because these inherent, complex food intake regulatory mechanisms are pretty ‘strong’ and perhaps difficult to change.

One of the most controversial issues is the effect of LCHF / Banting diets on either optimising or damaging health, and the jury is still very much out on this, and will be until we have big cohort long term morbidity and mortality statistics for folk on LCHF diets for prolonged periods of time. There are a lot of studies showing that eating too many carbohydrates increases morbidity and has a negative effect on health. But there are also a lot of studies showing that a high fat intake has a negative effect on one’s health. The same is true of high caloric diets, and similar increases in morbidity are seen in diets deficient in one particular food type, or indeed in very low caloric diets. So it is difficult to get a clear picture from scientific studies of exactly what diet works or is optimal – my ‘gut feel’, to excuse the pun, would be that a prudent, balanced diet will surely offer the best alternative, though with the rider, as evident from Julia’s study, that some folk will do better on a higher carbohydrate percentage diet, and some on a higher fat percentage diet. There are some other interesting confounding issues, such as what is known as the survival (or obesity) paradox, where folk with moderate levels of obesity do ‘better’ than their thinner counterparts in some age-related disease mortality rates – particularly, apparently, in folk once they get over 70 years of age, when obesity may paradoxically become protective rather than pathological.
A point has also been raised that there have been increasing levels of appetite disorders and body image disorders in the last few decades too (such as anorexia nervosa, bulimia and muscle dysmorphia, amongst others), and while the genesis of these appetite-related disorders is also incredibly complex, diets such as LCHF, like many other very rigidly defined diets with specific eating requirements, may be creating conditions in which such disorders can flourish. Indeed, a number of the ‘zealots’ who ‘convert’ to such diets and stick to them ‘through thick and thin’ may have appetite-related disorders, and may be able to ‘use’ the camouflage of sticking to a LCHF diet to ‘mask’ a latent eating disorder. I can’t comment on the veracity of this suggestion without seeing more research on it, but my ‘gut feel’ again is that there may be something to it.

Eating patterns and dietary choices, and their relationship to health, are surely among the most complex and multifactorial areas of research there can ever be in science. Because of this it is so hard to find and do good science that can give a clear indication of the ‘best’ diet or eating pattern for any one person. Most science in the field concentrates on one food type, or one outcome of ingesting a specific food type, and draws conclusions that are well intentioned, but that always succumb to the complexity of the human and social dynamics associated with what and how much folk eat – a complexity that is perhaps impossible ever to reduce to a single laboratory or even field-based experimental protocol. Because of this (and the fact that people need to eat on a daily basis to survive, so in effect everyone is a ‘captive audience’ for dietary information), it is a field which is susceptible to anyone ‘getting up on a soap-box’ and putting their ‘five cents’ into the debate, and with modern communication methods available to us like blogs and the social media channels currently available, these opinions can spread rapidly and be taken as ‘gospel’ in a very short period of time. When someone whom I respect as much as Tim Noakes, and with whom I have published so prodigiously as a co-author in the past (though not in the field of LCHF / Banting diets), starts ‘banging off’ with tweets such as the above about the future of Europe potentially being determined by whether folk eat a LCHF diet or not (part of me is sure that Tim, if he did write this, perhaps did so in jest, or that it was written as a ‘spoof’, as it is such a ‘left field’ post), I do wonder whether the field of nutrition, and those interested in it, has become something of a ‘Mad Hatter’s tea party’ (though of course I have great respect for the large majority of my nutritionist colleagues).
Surely, like all fad diets before it, the LCHF / Banting diet will fade away as people find it hard to stick to, as a new diet fad is announced and takes its place, and as science ‘chips’ away at some of the astonishing claims made for it by its proponents. Surely in the end a balanced diet, like a balanced anything, will ultimately prevail as the diet ‘champion’. Until then, March Hare or Mad Hatter, whichever of you is pouring the tea, can I please have two spoons of sugar in mine. If having such prevents me from ruling Europe, or dominating the world, so be it!
