Auto-Immune Disorders And The Sense of Self – Why Do We Sometimes Attack And Damage Ourselves When We Have So Many External Enemies That Should Rather Be The Focus Of Our Enmity, Violence And Rage

Seventeen years ago, my young family and I emigrated from South Africa to start a new life in the United Kingdom, and in that time we have grown to love much about our adopted land. We now live in farmland in the North-East region, known for its striking natural beauty and history, with many waves of Viking and Scots invasions having passed through the area, and with Hadrian’s Wall, an amazing feature, running a mile above our home in the hills behind where we live. We still battle with the cold and wet winters, but love spring and summer, and the soft warmth, flowers, and green hills and fields which they bring. Sadly, for me the beauty of spring and summer has a real downside, in that I suffer from bad hay fever and pollen allergies and spend a lot of the beautiful days with eyes watering (as they are now while writing this), red and swollen, sneezing, and needing to reach for tissues to blow my nose throughout the day. I remember my father suffered the same thing in South Africa when we were growing up, and I had it to a slight degree in South Africa too, but it has been far worse since we moved to the UK, and sadly does not seem to be improving much as the years pass by, as I hoped it might. I am also allergic to bees and swell up when stung, though thankfully have not had such a misfortune for a few years. Clinicians reading this would quickly diagnose me as having a hypersensitive immune system, and I am sure there are quite a few folk out there who suffer similarly from such allergies on a seasonal basis. A cousin in the family has suffered since adolescence from ulcerative colitis, an inflammatory bowel disorder whose basis is also an autoimmune dysfunction, and it has chronically affected his lifestyle, his choice of foods, and what he can do as part of his daily life since it first flared up. My wonderful wife Kate, who has been my life-partner for more than twenty years now, has also had issues with immune system dysfunction, though of a far greater order of magnitude than my issue with allergies, and perhaps even than ulcerative colitis. When she was fourteen, she developed symptoms of fever and malaise, and initially her parents were not concerned and thought she was simply getting an infection. But she felt worse and was taken to the local hospital, where it was noted she had no pulses in her legs, and after a battery of tests she was diagnosed with Takayasu’s arteritis, an autoimmune disorder in which immune-system-related inflammation destroys the aorta, the main artery emanating from the heart, and potentially the pulmonary artery too. She was treated first with immunosuppressive drugs and eventually needed a stent replacement of her entire abdominal aorta (basically the insertion of a plastic tube in place of the aorta, which was too damaged to continue functioning), and fortunately survived the ordeal and the disorder, and is well and happy today. The immune system is supposed to be protective, yet here are three examples, among many others, where our own immune system appears to have ‘turned against’ ourselves and caused physical harm, instead of protecting us folk from external threats. In this article, therefore, we will examine the immune system and attempt to understand how it can go so badly wrong.

As described above, the immune system is an incredibly complex network of interconnecting biological and molecular systems which protects each of us from external pathogens that can potentially harm us and cause disease. The first line of protection we have is the physical barriers that exist between our bodies and the environment – skin cells on the outside, and protective layers of cells in all the body’s internal structures which come into contact with the external environment, such as the respiratory, gastrointestinal, reproductive and urinary systems. If this protective lining is breached, the immune system then springs into action. Firstly, the innate (generic) immune system is activated by an array of signalling molecules, and white blood cells principally are released which both destroy the invading pathogen, be it a virus, bacterium, foreign body or cancer cell, and stimulate a general body response which helps to remove the destroyed invader and repair the damaged tissue, such as increasing body temperature and increasing blood flow to the damaged area; this increased blood flow causes the redness and swelling around the damaged tissue, and is a sign of the body trying first to destroy and then to heal the damaged or invaded area. There is no cellular ‘memory’ to this response, and it occurs in much the same way each time a pathogen attempts to invade, damage, or devour us. A second, ‘adaptive’ immune response then occurs after a lag period following the initial infection and response. During this lag, via very complex cellular and blood-borne signalling mechanisms, a ‘learned response’ to the specific virus, bacterium, or whatever has damaged and / or entered the body develops, which further targets the specific pathogen, but perhaps more importantly is maintained as an ‘immunological memory’, so that if the same pathogen ‘attacks’ us again at a later stage it is remembered, a far faster specific response to the infecting agent occurs, and the period of ‘illness’ and inflammation associated with the ‘pathogen killing process’ is shorter. The immune system has become a very hot topic for research, and more is being found out about its incredibly complex pathways and methods of action; it is perhaps beyond the scope of this article to describe each of these processes and pathways in detail. Some of the players have magnificent names, such as natural killer cells, macrophages, and neutrophils, which recognise their ‘foes’ using ‘pattern recognition receptors’ and ‘toll-like receptors’ that detect ‘pathogen-associated molecular patterns’ and ‘damage-associated molecular patterns’, and tailor-make specific responses which both destroy the infective agents and leave an ‘immunological memory’, using methods that are still being worked out and are currently not completely understood. These involve the development of antibodies – generated with the help of ‘major histocompatibility complex’ (MHC) molecules which present ‘antigens’ (pieces of the external agent which cause an immune response) expressed by the invading pathogens – and these antibodies travel around the body in the blood plasma or lymph systems, bind to pathogens expressing these ‘antigens’, and mark them for destruction by white blood cells or other immune-system-generated molecules.
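For readers who think in code, the logic (though emphatically not the biology) of the innate versus adaptive response can be caricatured in a few lines. This is purely an illustrative toy sketch: the ‘days of illness’ numbers and antigen names are invented, and the real system is unimaginably more complicated.

```python
# Toy sketch (illustration only): the innate response is the same every time,
# while the adaptive response stores a 'memory' of each antigen it has seen,
# so a second exposure to the same antigen is dealt with much faster.
# The day counts and antigen names below are invented for illustration.

class ImmuneSystem:
    def __init__(self):
        self.memory = set()  # antigens the adaptive system has already 'learned'

    def respond(self, antigen: str) -> int:
        """Return a notional number of days of illness for this exposure."""
        innate_days = 3  # generic response, identical on every exposure
        if antigen in self.memory:
            adaptive_days = 1           # remembered: fast, targeted response
        else:
            adaptive_days = 7           # first exposure: slow 'learning' lag
            self.memory.add(antigen)    # store the immunological memory
        return innate_days + adaptive_days

immune = ImmuneSystem()
print(immune.respond("measles-like antigen"))  # first exposure: 10 notional 'days'
print(immune.respond("measles-like antigen"))  # re-exposure: 4 'days'
print(immune.respond("novel antigen"))         # new pathogen: slow again, 10 'days'
```

The whole of this article is, in a sense, about what happens when the ‘self versus non-self’ check that such a sketch quietly assumes starts to fail.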

In an ideal world, if this superb protective system did its job perfectly, the immune system would destroy all external agents, first using the generic response, and then using the second wave of more specific, targeted response which is ‘remembered’, so that if the same external agent ‘attacks’ us again there would be a quicker and more focussed response, which destroys the external agent – bacterium, virus, or whatever – in a speedy manner and with less of a general immune response. However, unfortunately the system is prone to errors. These range from a hypersensitivity response, where the immune system response is so severe as to damage the body’s systems or organs and is not attenuated with time, creating an allergy or allergic reaction to some external agent, be it pollen, bee stings, or peanuts (which can provoke very severe reactions), to the horrendous situation where the immune system stops recognising the ‘self’ as opposed to the ‘non-self’ / external agents, and ‘attacks’ its own body systems and organs, for reasons that are still almost completely unknown to us. There is also, in some folk with genetic disorders, an absence of immune function maturation, leaving the individual with no possible protective response to any external agent or infection, and there are situations where external agents actively target the immune system, such as HIV infection, and ‘wear out’ the immune system, again leaving it unable to mount any immune response and thus prone to chronic infections (and, interestingly, to unusual types of cancer), but these hypo-immune challenges are outside the scope of this article.

Hypersensitivity reactions are divided into different types according to which components of the immune system generate the response, and the speed of the response. A very fast ‘anaphylactic’ immune response to infective agents or food types – think peanut allergies, or acute asthma in which there is an extreme reaction of the respiratory tract to external triggers – is usually associated with more ‘humoral’ (blood-borne) immune mechanisms, in which IgE antibodies bound to mast cells and basophils trigger degranulation of these cells and the release of mediators such as histamine into the blood, causing a drop in blood pressure, bronchoconstriction and thus difficulty breathing, which can lead to death if not treated quickly. A slower type of hypersensitivity reaction – such as contact dermatitis, where there are red, inflamed reactions or ‘wheals’ in the skin as a result of touching an unusual substance that causes a hypersensitivity response – is mediated by T cells, monocytes and macrophages, and takes a few days to evolve and then subside. These extreme hypersensitivity immune responses need prompt treatment with immunosuppressive and anti-inflammatory drugs, whether topical, inhaled, oral or intravenous, depending on the seriousness of the hyper-immune response. They are not to be taken ‘lightly’ by folks who witness someone having such a hyper-immune / anaphylactic response, with prompt emergency management and hospitalization often being needed, and one should rather err on the side of caution when witnessing or treating such patients and sufferers.

The most challenging ‘error’ response of the immune system is autoimmunity, when the immune system fails to distinguish ‘self’ from ‘non-self’ and attacks its own cells and organs. As described above, there is an incredibly complex system of recognition and destruction of external agents or infections which threaten the body and a human’s existence (all creatures have a similar immune system and similar responses). Normally, in the bone marrow and thymus, young lymphocytes are presented with ‘self-antigens’ so that they learn to recognise ‘self’ from ‘non-self’, and those cells which react against ‘self’ are eliminated; how or why this process fails is unclear. For some reason – perhaps genetic, perhaps environmental factors, perhaps related to toxins or specific infective agents which ‘hijack’ the self-recognizing system – the immune system ‘turns against’ itself and starts destroying its own cellular structures, organs, or body parts. Around the turn of the 20th century Paul Ehrlich recognized this ‘self’-destructive capacity of the immune system and, perhaps appropriately, proposed to call it ‘horror autotoxicus’. While the initial symptoms of most of these autoimmune disorders are the generic ones of the immune system responding to an external target – such as low grade fever, fatigue, malaise (generally feeling unwell), muscle aches, joint pain and skin rashes – the specific autoimmune disorder being ‘created’ by the attack on ‘self’ structures gradually becomes apparent, from the joints of the hands in rheumatoid arthritis, to the digestive system wall being damaged in Crohn’s disease, to the vascular system in Takayasu’s arteritis. An astonishing array of disorders is linked to this loss of recognition of the ‘self’ by the immune system, including type 1 diabetes mellitus (damage to the insulin-secreting beta cells of the pancreas), systemic lupus erythematosus (systemic damage to many body parts, including joints and soft tissue), multiple sclerosis (nerve damage), alopecia areata (patchy hair loss), psoriasis (a dry, itchy skin disorder), Graves’ disease (predominantly a thyroid gland disorder), myocarditis (heart muscle damage), Addison’s disease (predominantly an adrenal gland disorder), aplastic anaemia (failure to make new blood cells), and many others. The severity of symptoms and the longevity of these disorders vary, with their re-appearance being called a ‘flare-up’, and they can result in mild to severe damage to organ tissue and function, with no obvious pattern to these variations and usually with significant morbidity for those suffering from them (it has been suggested that up to 7% of the human population suffers from either a hypersensitivity or an autoimmune disorder). Diagnosis rests on the specific symptoms of each of the pathologies described above, as well as on non-specific inflammatory markers of infection or cellular damage. Treatment is either generic – such as rest, vitamin D ingestion (some autoimmune disorders such as multiple sclerosis appear to have a relationship to vitamin D deficiency), physical therapy of affected joints or organs, and surgical treatment where possible – or pharmacological, with non-steroidal anti-inflammatories to reduce inflammation along with glucocorticoids, anti-rheumatic drugs, and newer immune-system-targeting drugs, though of course all of these, if used for too long or at too high a dose, cause side-effects which can be challenging in themselves.
Specialist care is needed, as well as psychological support, as often these autoimmune disorders can last a long time, or are terminal, and are often extremely debilitating to the sufferers.

If there is an upside to the autoimmune disorders, or to our increasing knowledge of the immune system and its function, it has been in harnessing that function for the treatment of other pathologies and disorders. For example, the antigen-antibody recognition functions and pathways have been the basis of the development of vaccines, where components (antigens) of, for example, a dangerous virus such as the polio virus are given to individuals in order to create a mild response to the vaccine, which thus creates a ‘memory’ of the virus that would be protective for the person if they were actually infected by the ‘live’ virus later in time. Furthermore, it has also been shown that there are strong links between the function of the immune system and cancer development and treatment, and some treatments aimed at the immune system have been shown to have positive effects in some cancers. A lot of research work is examining the relationship between changes in recognition patterns and pathways in the immune system and the development of cancer, which in many ways can be thought of as ‘self’ cells becoming ‘non-self’ cells that then actively work to damage ‘self’ cells. While there are many possible areas for future breakthroughs in the treatment of cancer, this is currently thought to be a fertile one, and laboratories around the world are working on the relationship between the immune system and cancer development and propagation.

While each of the different autoimmune disorders has its own specific symptoms and associated physical damage (and why this is the case is interesting in itself), there is cause for hope in that many of them ‘wax and wane’, and some appear for a short period of time, or once or twice, and then do not appear again (such as multiple sclerosis, though of course this disorder can also progress quickly and become severe). Furthermore, with contemporary treatment many of the autoimmune disorders can be well controlled, and a small but significant percentage of them go into remission. Folks suffering from most autoimmune disorders can live as long a life as healthy folk if they take the required treatment for their specific disorder, although some of the disorders, such as rheumatoid arthritis, cause folk to die ‘on average’ a few years earlier than healthy folk. While these autoimmune disorders can be severely debilitating, clinicians often need to work in the business of hope, and folks suffering from many of these autoimmune disorders can be counselled that they will in all likelihood be able to live a long life, and may go into remission, and should never forget this in the tough days when the symptoms of their specific autoimmune disorder are challenging to them and they worry about their future.

Understanding what ‘is’ and what ‘is not’ part of us, however, becomes even more problematic given that the human body hosts trillions of microbes – bacteria, fungi, viruses – collectively called the human microbiome, which live ‘in harmony’ with the human being, are often suggested to be needed for synergistic function in the gut, for example, and whose presence does not initiate an immune response. It has been suggested that there are perhaps more microbes in human tissues and biofluids than there are human cells – in the gut, respiratory tract, oral cavity, uterus and vagina, skin and other places. Why these do not result in an immune response, while the autoimmune response to one’s own body happens as described above, adds another order of mystery to both the function of the immune system and the pathophysiology of how and why it goes wrong when it attacks its own cells.

Who we are, and who we are not, and how we recognise what is, and what is not, ‘us’, seems to be the basic issue inherent in both the successful functioning of the immune system and the development of autoimmune disorders, where the immune system ‘turns against’ itself and destroys its own systems and structures, in a way that is usually devastating to those individuals who are unfortunate enough to suffer from its depredations. The immune system, with its capacity to recognise ‘self’ from ‘non-self’, and to turn against the living organism / human of which it is part, is one of the most fascinating human systems to research and attempt to understand, but autoimmunity is surely one of the most horrendous types of disease one can suffer from, as Ehrlich was aware when he described the disorder as ‘horror autotoxicus’. In cancer, cells ‘go rogue’ and turn against the body of which they were once part, but in autoimmune diseases, the body’s natural defensive mechanism itself turns against its own body and self. It is astonishing that a system so needed to keep the human safe from foreign pathogens can itself be so harmful and toxic to its own self, and that this happens in a fairly large percentage of people, usually with extremely debilitating symptoms and negative effects on the lives and lifestyles of those affected. I know that I am me. You know you are you. Why, in some folk, who me and you are sometimes gets forgotten is one of life’s biggest mysteries, and one of medical researchers’ biggest challenges to solve. Crippled and bent hands. Loss of insulin function. Arteries damaged. Nerves damaged. Heart damaged. Gut function impaired with chronic bowel abnormalities. Horror autotoxicus. Horror, caused by our body’s basic protective mechanisms going rogue and mutineering against themselves. Self-destructive perfidy, of seemingly the most foolish type possible, and for no obvious rhyme or reason.


Are Carbs King – Nutritional Supplementation For Athletic Performance – If They Worked So Well, Wouldn’t They Be Banned

In my late teens and early twenties, I did a lot of endurance racing, mostly in kayaks, but also running, cycling and triathlon races. For several years I raced most weekends, and would sometimes do a running race in the early morning and then rush off to do a kayak race in the late morning. University work was fitted around my training, rather than vice versa, and I can’t deny I was completely absorbed by sport, by how to improve quickly, and by trying eventually to win races. One of the major things I focussed on was diet and nutrition, and I read many sport and nutrition magazines, which were awash with adverts and articles espousing ‘carbs as king’ (carbohydrates – that which constitutes bread, pasta, rice and the like) for optimal performance, as well as a host of vitamins and nutraceuticals (products derived from food types that are used for medicinal purposes) which, apparently, if I ingested them, would improve my performance – a long list including creatine, ginseng, carnitine, carnosine, berries, beetroot, and of course every vitamin available, amongst many, many others. When I completed my medical training, and after a spell working as a medical doctor, I worked for more than a decade in the University of Cape Town Research Unit of Sport Science and Exercise Medicine, where I spent a wonderful period studying and eventually becoming well versed in the area of general control systems in the body and brain, an area of research I perhaps found so interesting given the profound lack of control I had over much of my own life in my youth, except perhaps in the sporting arena. While nutrition was not a key focal area of interest for me, because back then (in the 1990s and early 2000s) only clinically trained doctors could perform muscle biopsies, I was asked to join research teams investigating the effect of nutritional interventions on performance, led by now world-renowned experts such as John Hawley and Louise Burke, Andy Bosch, Julia Goedecke, Vicky Lambert, Laurie Rauch and many others. We would spend many hours in our research labs pushing cyclists and runners to exhaustion on indoor bike and running ergometers, taking many measurements of their physiological systems – such as oxygen uptake, heart rate, blood glucose and lactate concentrations, and muscle glycogen levels before, during and after trials, the last of which required the muscle biopsies I was there to perform on the athletes as they got progressively more exhausted – as well as feeding them either high carbohydrate drinks or other purported ergogenic aids to assess the effect of these on performance. In this article we will look at some of our findings, as well as those of many other similar trials performed across the world, to assess whether ‘carbs are king’ is a truism, and whether the supplements out there which are suggested by their suppliers to be ‘game changers’ really are such.

For many years, particularly in the 1980s, most studies did indeed seem to show that ‘carbs are king’: just about every study done, particularly those examining long distance endurance events, but also those examining high intensity events beyond short sprint distances, appeared to show a positive correlation between performance and carbohydrate ingestion during the event, and / or pre-loading with carbohydrates to ensure muscle glycogen was as high as possible prior to the event. My lab boss at the University of Cape Town at that time was the internationally acclaimed sport scientist Professor Tim Noakes, who very much believed in the ‘carbs are king’ mantra back then. He had started a company with some successful athletes that marketed carbohydrate replenishment drinks, devoted a chapter in his excellent book ‘Lore of Running’ to paying homage to the ‘carbs as king’ mantra, and, when lecturing on the subject, if I remember it correctly, described how a top level athlete was winning a local 90 km footrace when he became hypoglycaemic / extremely fatigued, and slowed down so much that eventually he was walking, and for a period may even have lain down on the road. He apparently never drank synthetic drinks, but as the story goes, his support team gave him a full two-litre bottle of Coca-Cola, which has a very high level of carbs in it, and after drinking most of this he soon felt much better, got up, sped off again, and managed to come second. After several decades of punting the ‘carbs as king’ mantra, as most folks in the sports science world will remember, Tim had an epiphany / Damascus moment, apparently due to finding he had high blood sugar and potentially pre-diabetes, and decided that carbs were definitely not ‘king’; in his mind they quickly became the spawn of the devil. He advised folks to tear the chapter eulogising carbs out of his book, and began advocating, very loudly, and to all who would listen, that actually ‘fats were king’, that everyone would benefit from a high fat, low carb diet, and that eating such a diet was not just good for health, but would improve one’s sporting performance, once one had learnt to tolerate it and had followed the diet for a few months. The efficacy of his claims is yet to be validated, but his volte-face has brought the nutrition world back into the spotlight, with Tim building up a group of fanatical disciples to his new theory, and other senior researchers in the field dismissing his new ideas and maintaining belief in the ‘carbs are king’ mantra. There are many new studies currently on the go around the world attempting once again to prove or disprove the ‘carbs are king’ mantra, and equally many trying to prove the ‘fats are king’ mantra.

While such an extreme ‘Damascus moment’ change in perception is unusual, there are reasons to question the ‘carbs are king’ mantra. Trent Stellingwerff and colleagues did a systematic review of 61 studies examining the effect of carbs on performance enhancement and found that 82% of these studies had shown performance benefits. However, this means that 18% of the studies had shown no effect, and this is of concern if one believes ‘carbs are king’, as theoretically, for carbohydrate to be such a ‘winner’, it should work every time and in every study. Equally, given that most negative studies don’t even get sent in for publication, one can assume a higher number of negative studies occurred and were simply not reported. Studies have shown that ingesting carbs within an hour before performance can paradoxically attenuate performance, and also that too highly concentrated carb mixes ingested during athletic events cause gut and bowel disturbances which, let’s just say, don’t allow good performance to occur, other than of the toilet-requiring variety. Most importantly, it’s always a challenge to adequately control for the placebo effect in performance trials – the placebo effect being an improvement in performance when one ingests something that one believes will work, even if it is not the actual thing one believes one is ingesting, meaning it is one’s belief in its efficacy, rather than the efficacy itself, which causes the performance benefit. In an interesting study performed by Virginia Clark, Will Hopkins, John Hawley and Louise Burke in the University of Cape Town laboratories, athletes’ performances improved by 4% when participants took a placebo drink but were told it was a carb supplement, as compared to a 1% reduction in performance when they were told that the placebo was a placebo. So while in both trials they were actually getting placebos, when told the placebo was carbohydrate-rich, their performance improved to a degree often found in positive-outcome ‘carbs are king’ trials. In another interesting trial, performed by Amanda Claassen, Vicky Lambert, Andy Bosch, Ian Roger and Tim Noakes, cyclists who were first fed a low carb diet and then given a glucose infusion during a performance cycling trial, which maintained their blood glucose levels within normal concentrations for the duration of the trial, showed a massive 26% variation in performance, which indicated that the response to carb infusion varies between different folks. Unfortunately for those subscribing to the ‘fats are king’ mantra, the results are even worse than for the ‘carbs are king’ trials, with most studies showing that a low carb, high fat diet had no positive effect on performance, and some even showing a negative effect on performance. Furthermore, in those studies where subjects ingested high fat supplements during endurance exercise, most had significant gut complaints which affected their performance negatively.

So, sadly, things are perhaps not as ‘rosy’ for the ‘carbs are king’ mantra believers as they might wish, and ditto for the ‘fats are king’ disciples. The same issue arises for just about all the supplements that have come onto the market since I entered the field as a researcher nearly thirty years ago. Creatine was for a while the next ‘king’ supplement, as were various vitamin supplements, cherry drinks, and a whole lot of other punted ‘king’ nutritional aids – but after a slew of positive studies showing the virtue of each of these, further studies showed more equivocal or negative performance findings, leaving the jury out on most of them. A real challenge for research in the field is to normalise the diets that trial participants eat, to be sure that it is not a change in some other food type they have ingested, rather than what is being tested, that is the cause of the performance change. One’s performance also appears to have a degree of variability from day to day – think of your own life: every day one wakes up ‘feeling different’, some days as if one can jump over high buildings, and others when getting out of bed is a struggle – and it is difficult to be sure that a study participant ‘feels’ the same on each day of testing. Thus, most folk will always take any new ‘king’ supplement or nutritional aid with a pinch of salt until several years of different studies, in different laboratories around the world, have all shown similar findings in placebo-controlled, well-performed trials. As one of the absolute mavens in the nutrition research world, Ron Maughan, so aptly puts it, if any nutritional or nutraceutical product was found to be so good that it always enhanced performance, it surely would be banned as a doping agent by athlete sporting governing bodies. Given the amount of scepticism in the text above, what would this author say he is certain does work as an ergogenic aid? Sadly, from what I have seen, the only things which clearly benefit athletic performance have already been banned, namely things like anabolic steroids (for shorter distance sprint events and recovery), erythropoietin (for enhancing red blood cell capacity and thus for endurance events) and other completely non-nutritional sport ‘supplements’ such as painkillers, asthmatic inhalants and the like, which, as I have already said, are banned, so of little use to anyone except those so desperate for success that they are willing to risk losing everything – sport career, reputation, standing in the community – for the chance of winning which these banned drugs surely will give them.

If banned substances would not be your first choice, what then would be a good ‘ergogenic’ aid or nutritional choice to enhance sporting performance? A really interesting study was performed by Julia Goedecke and colleagues, again in the University of Cape Town laboratories, where, using clever techniques, they examined what fuel types different folks ‘burn’ during both rest and activity. Perhaps not surprisingly, but for the first time, Julia and colleagues found that there was an astonishingly wide variation in what fuel folks used at rest and during exercise, with ‘carb-burners’ at one extreme and ‘fat-burners’ at the other, and the majority of folk using a mix of carbs and fats as fuels both at rest and during exercise. In other words, some folks may perform better on carbs, others on fats, but most folks perform better on a diet which has some of both. And one could perhaps hypothesize that if one were a fat burner, and ate pure carbs before an athletic event, one might ‘struggle’ and perform worse during the event than if one had eaten a diet higher in fats, if that is what one preferentially burns. And vice versa if one is a carb burner and eats a high fat diet – one may perform worse on such a diet than if one had fuelled oneself on carbs. The point being made, therefore, is that perhaps one needs to find out which nutritional types are good for you, and you alone, and eat and drink those before and during a race in order to optimize one’s performance. This study may also explain why there is so much variation in the outcomes of the ‘carbs are king’ trials described earlier – participants in trials are probably a mix of fat, carb and ‘mixed’ carb/fat burners, and thus have different performance-related responses. A clear example of this happened to me during my racing days more than thirty years ago, in a three-day kayaking event. Prior to the first day I ate what I had read was best for performance in the ‘carbs are king’ era back then, namely a plate of pasta with bread and carbo-supplementation bars for dinner, and carbs for breakfast too. I had the most terrible day from a racing performance perspective, felt like I could never get out of ‘first gear’, and paradoxically felt hypoglycaemic at the end – dizzy, pale and almost unable to stand – and it was only due to my kayaking partner having a good day that we remained competitive. I had a post-race craving for hamburgers (high fat), and that afternoon and evening consumed six or seven cheeseburgers (astonishing as it may seem from a quantity perspective), and was probably still digesting them the next day, so had nothing before the second day’s race start. I have never, before or since that day, had such a good day: I felt I was invincible and could go at top speed through the entire day’s race, and we did, and we had one of our best performances ever. Though I did not realise it at the time, and kept trying to ingest carbs before races given that that was the central dogma back then, it became clear to me that I was more of a fat burner than a carb burner, though perhaps more in the middle than a complete fat burner, and I often wonder, had I adjusted my diet for all my subsequent races back then, whether I would have been more competitive for the rest of my racing career.
At the other extreme, my colleague back then, Andy Bosch, lived on carbs day and night, and had a hugely successful footrace career existing almost entirely on carbs both as part of his daily life and as part of his racing routine, so he clearly would be someone who sits more on the carb-burner side of things. It is perhaps up to each person to work this out for themselves, and I would like to see more studies about what athletes actually eat before and during events, rather than what we as sport scientists prescribe to them, and perhaps we may be surprised at what we find – that a normal diet is king for most folk, unless one is an extreme fat or carbo burner.
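For the technically minded, the ‘fat-burner’ versus ‘carb-burner’ classification in studies of this kind is, as far as I understand it, usually derived from indirect calorimetry: measuring oxygen uptake and carbon dioxide production and converting them into fuel oxidation rates. Below is a minimal sketch of that calculation, assuming the widely quoted Frayn (1983) stoichiometric equations and ignoring protein oxidation; the gas-exchange numbers are invented for illustration, and this is not a reconstruction of Julia’s actual analysis.

```python
# Sketch of the standard indirect-calorimetry estimate of fuel use, assuming the
# commonly cited Frayn (1983) stoichiometric equations and ignoring protein
# oxidation. The VO2/VCO2 values below are invented for illustration only.

def fuel_oxidation(vo2_l_min: float, vco2_l_min: float) -> dict:
    """Estimate carbohydrate and fat oxidation (g/min) from gas exchange."""
    cho_g_min = 4.55 * vco2_l_min - 3.21 * vo2_l_min   # carbohydrate oxidation
    fat_g_min = 1.67 * vo2_l_min - 1.67 * vco2_l_min   # fat oxidation
    rer = vco2_l_min / vo2_l_min                        # respiratory exchange ratio
    return {"RER": round(rer, 2),
            "carb_g_per_min": round(max(cho_g_min, 0.0), 2),
            "fat_g_per_min": round(max(fat_g_min, 0.0), 2)}

# A 'carb-burner' (RER near 1.0) versus a 'fat-burner' (RER nearer 0.80),
# both cycling at the same oxygen uptake of 3.0 L/min:
print(fuel_oxidation(3.0, 2.9))   # mostly carbohydrate being oxidised
print(fuel_oxidation(3.0, 2.4))   # a far larger fat contribution
```

The point of the sketch is simply that two athletes working at the same oxygen cost can be burning very different fuel mixes, which is exactly the individual variation the Cape Town study highlighted.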

Finally, a point must be made which resulted from one of the most amusing, yet strangest, email conversations I have had recently with some of the world’s leading sport scientists, as we were working on a manuscript on exercise regulation and ‘freewheeling’ ideas about what factors most enhance performance. In a somewhat embarrassed state, I mentioned the success of my post-cheeseburger race as described above, and then also described my other best performance ever (both perceptually and perhaps results-wise too, though it was not in a top echelon race), which happened after an all-night celebration for the 21st birthday of one of my old kayaking partners, where I had dozens of beers and tequilas (something of which I am not proud) and started the race with no sleep, no breakfast, and still perhaps a bit drunk. I had an absolute blinder, felt untouchable, stayed with a front bunch consisting of double kayaks while I was paddling a single kayak, and very nearly beat them in the final sprint. I felt awful afterwards, but apart from the ‘cheeseburger’ day, I never felt or did better when racing, which was and is astonishing to me given the toxic, supposedly physiologically impaired state I was in. As it goes, one of the other illustrious sport scientists involved in the discussion then described how, in a similar state to mine, he had run a personal best for a 1500 m race (and he was a national level athlete), and remembered that a running peer of his, who shall remain unnamed, set a British record for the half marathon (around 61 minutes) after drinking 13 ciders at his cousin’s wedding the evening before. All of this is not described to advocate alcohol as an ergogenic aid, but to make the point that one does not necessarily have to be in an optimal nutritional state to perform at top level, though it is a moot point whether these successful sporting episodes while in an apparently sub-optimal state, intoxicated and lacking sleep, had a psychological or physiological basis. As stated in the paragraph above, I would welcome studies of exactly what top athletes, and indeed all athletes, do from a nutritional and social perspective prior to setting a personal best in their sporting careers – the findings may well not be what sport scientists have come to expect as ‘optimal’ based on their well-controlled laboratory studies.

The field of sport nutrition is a complex one, and also a highly lucrative one, with industrial companies making billions of dollars selling carb drinks, or the latest dietary supplement craze, or, these days, high fat bars and replenishment fuels. To me the worry is that research appears to be going round in circles, with carbs being punted as king for many years, then fats being king, and now carbs seeming to become king again. Millions and millions of pounds, and researcher hours, have gone into testing and advocating each of these as the ‘king’ food type from an ergogenic perspective, and yet we seem to be no nearer to any fundamental agreement on which is better, whether in basic food types, vitamins or supplements. A reason may be, as described above, that each human is different, and there are ‘different strokes (kings) for different folks’. Daily differences in how folks ‘feel’, and daily performance variation, also need to be taken into account. We still don’t seem to have a good handle on what folk really eat before, during and after events, and I can’t get out of my mind the sight of folks like Tyson Fury eating a cheeseburger immediately after a boxing match as his dietary favourite of choice, when he no longer has to worry about weight as a factor (an issue which we have not addressed in this article, and which is for another one), and I wonder if he would perform even better if he ate the same cheeseburger before his big fights too. What I am sure of is that in the next couple of years a new supplement will come out that everyone will rave about, and it will be used by all athletes as the new ‘king of ergogenic aids’, and early studies will prove it to be so, to much acclaim in the sport science and wider athlete world; then slowly more and more studies will report zero effect of the new ‘king’, and it will eventually become another shelved product that is still advocated for use ‘just in case’ it has an additive effect, even as the balance of opinion becomes non-committal at best. I would love to put my savings into the next big ergogenic ‘king’ as an investment, given how much income those who generate these new products make, but I am always cautioned by the words of the very wise Ron Maughan – that if it really did work, it would be banned. For now, I guess we will have to keep the jury out on all things nutrition and supplements in the competitive athlete world, and watch as the current confusion continues, researchers keep on testing essentially the same things decade after decade, and old gurus still flip-flop 180 degrees on whether carbs, or fats, are king. Pass me a cheeseburger and a dozen beers, please!


Living Completely Paralysed In An Old Fashioned Heavy Diving Helmet With A Mind Fluttering Like A Butterfly That Cannot Get Out – Locked-In Syndrome – The Horror, The Horror

No matter how we try or want to, no-one gets out of this world alive. Sadly, at some point in the future, we are all going to die. Having lost religion as a crutch and buttress against the fear of dying in my early twenties, after reading too much of Sartre’s and Camus’s theories of Existentialism and its ‘twin’ philosophy, Nihilism, I can’t deny that thinking about dying is never comforting for me. Paradoxically perhaps, but for obvious reasons, the older I get the more I try to think about dying as little as possible, compared to how frequently I thought about it as an adolescent or in my early twenties, when I was a medical student being introduced to the numerous ways of dying as part of my studies. Having said that, I have often pondered what the best and worst ways to die would be if I had a choice, and being a medical doctor, have often been asked this by friends and relatives in discussions about life and death, usually after a few gins and tonic when things become earnest and thoughts ‘deep’. None are particularly pleasant, but after a lifetime in medicine and science, I have developed ‘preferences’ and ‘fears’ about what my fate will be when it is time for me to shuffle off this mortal coil. In my twenties I nearly drowned after capsizing out of my kayak in a big, rocky river and being pinned under a rock for a prolonged period of time. Six years ago, I suffered a heart attack which, while not severe, made me drift out of consciousness with severe chest pain, feeling as if ‘this was it’, and I only became completely conscious again in the ambulance which was taking me to the local hospital after the crew had stabilized me. To be honest, while neither of these was pleasant, they did not last long (the time from realising I was facing potential death to becoming unconscious), and while I felt the ‘agony’ of leaving behind my family and the life I knew as my vision went grey and then black, compared to other more prolonged, more chronically painful ways of dying, these would actually be my preferred ways to go. Of all the horrendous ways I have watched folk dying – from one hundred percent third degree burns, to inoperable cancer in multiple organs, to heart failure and ‘drowning’ in one’s own lung fluid, to retching up blood from oesophageal varices which result from liver failure, to literally ‘shitting’ oneself to death from dysentery of unknown cause (and all of these are before we talk about being tortured to death, or eaten alive by a predator) – by far the worst for me are the deaths associated with a group of neurological disorders which cause one’s body to fail while one’s mind stays intact, and in particular ‘locked-in’ syndrome, which for me would be the state of existence most hateful to live in, and the worst way to die.

Locked-in syndrome is a neurological disorder usually caused by a blockage in the small arteries supplying the ‘brainstem’ region of the brain (specifically, blockage of the basilar artery which supplies the pons region of the brainstem, for those interested in the details), the region which links the brain in the skull to the spinal cord as it goes down the spine, and from which nerves go to the individual muscles and organs of the body. Through this region (which, to make things easy, let’s say is in the lower skull / upper neck region of the body) ‘tracts’ of multiple motor nerves run down to the spinal cord, and other ‘tracts’ of multiple sensory nerves take sensory information up to the brain from the body. So when damage to the artery supplying this region occurs, usually from an arterial blockage (stroke) or a burst artery (haemorrhage), these regions are starved of oxygen and nutrients and become dysfunctional – almost always the motor tracts supplying the muscles, and, if the vessel damage is large enough, the sensory tracts running up to the brain as well. This means that no muscles in the body can work, and if the sensory system is involved, no sensations can be felt after this damage to the artery happens. But, and this is the crucial but, your brain’s cognitive function (your capacity to think, observe and understand things) remains completely intact, so you remain aware of what has happened to you, and aware also that you are in the horrifying state of literally not being able to move a muscle, except in some folks who are able to blink, or move their eyes up and down, when these movement parts are spared damage, as often seems to happen with these patients. The horror of this state becomes worse because it is often not picked up for weeks or months, and doctors misdiagnose locked-in syndrome patients as being in a coma, not realising they are ‘awake’ and ‘aware’ until they pick up an eye flicker, or movement of the eye, from the person lying in the hospital bed not moving, or speaking, or responding to commands in any obvious manner. I can imagine no more helpless state to be in, and have occasionally dreamed of being in such a state, and, as must have happened to some (or indeed many) such folk in the past, hearing doctors, nurses and relatives discussing the merits of keeping me alive (such patients usually have to be ventilated as they cannot breathe adequately on their own) or ‘pulling the plug’ on the ventilator and letting me die, without being able to ‘tell them’ that I am alive and can hear everything they say. The final horror of this condition is that sadly it has a very bad prognosis, with only a very small number of patients showing some improvement, and most folks not improving at all from their original state and having to be kept alive in such a state, being artificially fed and kept breathing by hospital care folks, until they die after a few years of pneumonia or other infections. Even when surrounding folk understand that the person is ‘alive’, aware of their surroundings and capable of thought, it is a terrible way to ‘live’, and a number of folks with locked-in syndrome, when realizing such a life will be ‘forever’, ask for the ventilators to be switched off so that they are able to die, to stop the horror of their condition, even though sadly it is illegal for clinicians or relatives to do this in most countries.
Of note, there are other causes of locked-in syndrome, such as poisoning – whether from the bites of venomous snakes, from folks with murderous intent using curare as a poison, or from medication overdose. There is also some symptom overlap with other, almost as nasty, neurological disorders such as Guillain-Barre syndrome, which occurs as an occasional sequela of a viral infection and in which there is progressive loss of muscle function, although in Guillain-Barre syndrome most folk recover over a period of time, so clinicians have to be aware of this differential diagnosis.

Having described what to me is the infernal horror of this neurological disorder, it must be said that sufferers of the syndrome have shown immense courage, and done astonishing things, despite the tragedy of their condition. While fortunately a rare condition, it was brought into the spotlight and media prominence by the bravery and resourcefulness of Jean-Dominique Bauby, who was Editor-in-Chief of the French magazine Elle, and who in 1995 suffered a basilar artery stroke while driving home one evening, and remained in a deep coma for 20 days before ‘awakening’ to find he was only able to blink his eyes, and was otherwise completely paralysed. Fortunately, this capacity to blink was noticed by his attending health team, who worked out a way to communicate with him via a code based on his ability to blink (eventually his right eyelid had to be sewn closed, as his right eye suffered from dryness that caused chronic irritation, so he could blink only with his left eye). Over a period of time he worked out, using patterns of blinking, how to compose text via a human ‘blink translator’, and astonishingly, over the next two years, wrote an entire book, The Diving Bell and the Butterfly, which documented his condition and what it was like to live with it. The book apparently sold 25 000 copies on the first day of its release, and reached sales of 150 000 within a week, and subsequently sold well worldwide, bringing attention to the condition and the plight of those suffering from it. Sadly, Jean-Dominique Bauby died two days after it was published, and did not get to see the major positive difference it made. I hope, if there is an afterlife, that from that place, if it exists, he was and is able to see how much good his huge courage, and his effort to communicate enough text to fill a book, did in educating both medical and lay folk about the syndrome and what it was like to live with it. I remember reading the book with a combination of astonishment and horror – astonishment at the man’s bravery, and horror at the syndrome itself – which, as I have written above, has made it to the top of my list of ways I would not like to die, or to experience my last few days, or months, or years of life, having to live in such a state of being ‘trapped’ in a completely unusable body, and being completely aware of this fate.
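The ‘blink code’ itself is, at heart, a simple partner-assisted scanning protocol: an assistant recites the letters of an agreed alphabet (in Bauby’s case, reportedly ordered by how frequently each letter is used in French) and the patient blinks when the right letter is reached. The sketch below illustrates the idea and why a frequency ordering matters; the English-like letter ordering and the way of counting effort are my illustrative assumptions, not Bauby’s exact protocol.

```python
# Sketch of a partner-assisted scanning 'blink code': the assistant recites
# letters in an agreed order and the patient blinks once to select the letter
# just read out. The frequency-style ordering below is an illustrative,
# English-like assumption, not the actual French ordering Bauby used.

SCAN_ORDER = "ETAOINSRHLDCUMFPGWYBVKXJQZ "  # most-used letters first, space last

def recitations_needed(message: str, order: str = SCAN_ORDER) -> int:
    """Count how many letters the assistant must recite to spell the message."""
    total = 0
    for ch in message.upper():
        if ch in order:
            total += order.index(ch) + 1  # letters spoken before the blink
    return total

msg = "I AM HERE"
print(recitations_needed(msg))                                  # frequency ordering
print(recitations_needed(msg, "ABCDEFGHIJKLMNOPQRSTUVWXYZ "))   # plain A-to-Z ordering
```

Even this toy version shows why the ordering was chosen so carefully: every saved recitation mattered when an entire book had to be spelled out one blink at a time.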

Of course, locked-in syndrome is my personal bête noire, the ghastliest way I believe I personally could possibly die, but I am equally sure that if I asked one hundred different folks what their worst way to die would be, I would get a multitude of different answers. Each person surely has their own different fears of ways of dying, and for some folks it may be the other way round, namely losing one’s cognitive capacity while having a healthy body, or being in a plane crash where there are no survivors, or being the victim of a terror attack such as what happened in the 9/11 tragedies, or being stung to death by bees, or whatever. There are literally thousands of ways to die, none of them particularly pleasant, but sadly, as I said in the first sentence of this text, no-one gets off this earth alive. Death, and a specific way of dying, is something that we all will have to go through at some point in our future, and so perhaps it is better not to worry about how we will die, but rather to focus on the idea that, when it happens, we meet our deaths courageously and with an understanding that our time on earth was always limited rather than forever, as much as we would like the latter to be the case. Different cultures have different approaches to death and dying, sometimes coloured by religious beliefs, the idea of the presence or absence of an afterlife, and the way our parents and society discuss and respond to discussions of death and dying. In many ways, modern western society has created an ‘unknown’ aspect to death and dying (unless one works in the clinical world or is involved in a war or societal catastrophe), given that folks are sent off to hospitals and hospices to die, and in one’s daily routine one never really sees much of death; thus perhaps one’s mind ‘runs wild’ with thoughts and fears of ways of dying and of death itself, as one never, in contemporary times, ‘sees’ folk die as a regular occurrence. Equally though, perhaps if folk were able to see folks dying regularly, it would make their perceptions of dying worse, as watching the way some folks die, such as those with locked-in syndrome, or the others I have described above, would perhaps lead to folks being traumatized and becoming even more afraid of what may await them in their future lives. I will say though, that when someone tells me they are not afraid of dying, I don’t really believe them, and believe it to be bravado – even the mouse watching the hawk descend on it to pick it up and eat it, or the antelope running from a predatory lion, shows fear and surely does not want to die, and it would, to me, go against basic ‘psychological’ behaviour that spans the entire animal kingdom, if not being even more ubiquitous, to face death and dying with no fear whatsoever.

To Jean-Dominique Bauby, he was the ‘butterfly’ fluttering around in a ‘diving bell’ apparatus which weighed him down and entrapped him, could not be removed or escaped from, and in which he would flutter around forever. But for every terrible way of dying, there is solace in knowing that the dying process cannot last forever. As much as we do not live forever, we do not spend forever dying of whatever kills us, and eventually death itself becomes the solace from each and every terrible way of dying. Eventually it all will end, whatever ‘it’ is that we ultimately will suffer from before breathing our last, and perhaps, thank goodness this is the case, even if we have to leave behind all we know and love about and in life, and even if there may be nothing which happens after death, except a vast void of unperceived nothingness. Perhaps it was the fear of dying, and of how one will die, that led our ancestors to invent quaint stories of heaven and a warm and pleasant afterlife, to compensate for and perhaps attenuate the fear folk have of their end and how it will happen, but that’s for another blog, and is another story. I do hope that my family and friends have an easy death, such as perhaps feeling a brief chest pain while dancing the night away on their eightieth birthday and then sliding down to the floor as their last perception and feeling. I do hope that, whatever method has been chosen as my destiny and the manner of my death, I will face it bravely and show courage to the end. I do hope there is a warm, soft place I will go to after I die, where I will meet up with my parents, hounds and good friends who have already passed on. But, sadly, no matter what I do, no matter how much I ‘pray’, no matter how much I don’t want it to happen, sometime in the future it will happen. And if my fate is that of Jean-Dominique Bauby, the horror. The horror. The horror.


A Broken Heart Not Of The Symbolic Variety – Sudden, Sharp, Chest Pains Often, But Not Always, Herald The Speedy End Of All You Have Been, And All You Are Going To Be

I still remember with absolute clarity the sequence of events which occurred when I had a heart attack five years ago. I woke up around midnight with a crushing pain in the centre of my chest which made me ‘feel’ that I could not breathe, along with pain down my left arm and in my jaw. I managed to pat my sleeping wife on her back, and when she woke up, I said quietly (I could not speak louder due to the pain and the increasing dizziness and confusion I was feeling) that I was having a heart attack, and that she should phone my old medical colleague and good friend Chris Douie, who was working in the same area of New Zealand where we were living and working back then. Fortunately, he answered his phone and instantly told her to phone the emergency number and get an ambulance to our house, setting off at the same time to drive over and help. Events started becoming confused, and I vaguely remember my wife giving me some ibuprofen and beta-blocker tablets (which I used for migraines), then Chris and the emergency first responder folk being in my room as I was sweating profusely and telling them I couldn’t breathe and that it might be the old asthma I experienced as a child coming back, then bouncing around in an ambulance on the way to hospital, and finally ‘coming round’ and starting to improve in the Emergency Room of Waikato Hospital, with an oxygen mask on my face, electrocardiogram (ECG) leads on my chest, an IV drip in my arm, and a kind Emergency Room doctor taking bloods and doing an ECG trace recording. Chris told me I had had a heart attack, which was obviously evident from an abnormal trace on the ECG, but that I was lucky, as the bloods had come back showing no evident damage to my heart muscle, and that I would need a stent but should be okay. I was kept in a cardiac intensive care ward overnight for observation (which meant that I was not in acute danger, as if I had been they would have operated immediately), and the next afternoon I had a stent inserted into my blocked heart vessel, and I went home the day after that with a bucket full of pills to take, but fine again, and I have had no heart problems since then. This incident definitely ‘focussed my mind’ on heart attacks and what causes them, and I was astonished to find how commonly they occur, and also how many folks in their late forties and early fifties have them (I was fifty years old when I had mine). In the years that followed, perhaps due to my mind being more alert to them, I read about a number of prominent, internationally successful folks who died of sudden heart attacks, including both of South Africa’s 1995 World Cup winning rugby team’s wings, Chester Williams and James Small, as well as, more recently, the world-famous cricket player Shane Warne, amongst a host of others. When friends and associates heard about my heart attack, several came forward to say that one or several of their family and friends, or indeed they themselves, had similarly experienced a heart attack, and I noted during my short two-day stay in the Waikato Hospital Cardiac Critical Care Unit that the place was a bit like Piccadilly Circus tube station, with folks in the midst of having heart attacks, waiting pre-operatively for heart vessel repair, or having post-surgery monitoring of their hearts, coming in and out seemingly constantly from an always full ward.
All this got me thinking about heart attacks and their underlying pathophysiology, ischaemic heart disease, why so many folks suffer from it, and why I was one of those folks who had for a short while been 'knocking on heaven's door' due to a blockage in one of my heart vessels which almost took me away to the after-world, in such an apparently unexpected way.

A heart attack is caused by ischaemic heart disease, in which one or several of the blood vessels which supply the heart muscle itself become progressively more blocked until eventually no blood can flow past the blockage at all, meaning that no oxygen can be transported to the working muscles of the heart (hence 'ischaemia' – which is defined as inadequate blood flow), which, if it occurs for a long enough time, causes the death of the muscle tissue supplied by the specific blocked vessel. It has long been thought that these blockages are the result of the build-up of plaques caused by high cholesterol damaging the arterial wall, but contemporary thinking is that the artery wall may be damaged by a variety of substances in the blood (including high blood sugar) which stimulate inflammation and then damage at one or several sites in the heart muscle arteries, and this leads to a build-up of plaques as the damaged tissues calcify and harden. Whatever the cause, the lumen (the central 'hole' of the artery through which blood flows) becomes partially closed, and eventually either a piece of the plaque breaks off and lodges further down the artery, closing it completely, or the plaque grows big enough to shut off the artery completely. There are a number of different arteries which supply the different components of the heart, with the arteries bifurcating (dividing into two) into smaller vessels, or sending off smaller vessels, thus theoretically a blockage 'further down' the 'piping' of the heart arteries causes less damage than a blockage in one of the initial 'big' arteries near where they enter the heart muscle to supply it. I was both unlucky and lucky in my particular blockage. I was unlucky for the reason that the blockage was in one of the main big arteries, right at the point where it starts supplying the heart – the Left Anterior Descending Artery – a blockage of which is known as the 'widow-maker', as only twelve percent of folks who suffer a blockage there survive, and most folks die before they can get help in any shape or form. I was lucky in that the artery was only ninety five percent blocked, which meant a little trickle of blood kept going through the blockage site, which probably was just enough to save me and my heart muscle from being damaged. I was also lucky in that it was the only vessel that showed any sign of blockage, with the rest of my heart arteries being absolutely fine, hence a single stent insertion at the blockage site was enough to cure the blockage and leave me with a well-functioning heart, hopefully for the rest of my life, however long it is.

As the heart arteries are increasingly narrowed, 'warning' symptoms occur, in particular chest pain, also called angina, which increases when exercising or stressed, and which disappears when resting. This pattern of angina is called stable angina, and folks feeling such chest pains when exercising (or at rest) should immediately seek help and visit their GP, who will likely refer them on to a cardiologist (a heart specialist), who will do a number of tests to determine whether there is indeed a blockage and how severe it is. Some cardiologists and countries recommend 'watch and wait' for stable angina, but others are less conservative, and would operate immediately depending on the degree of obstruction. When angina switches from being 'patterned' to occurring randomly, or while sleeping as happened to me, it is called unstable angina and is a more extreme warning sign. Some folks have no 'warning' symptoms at all, and the first time they are aware they have a heart problem is when they have a major heart attack. Classic symptoms of a full-on heart attack include crushing chest pains that can spread down the arm, to the chin or down the back, with difficulty breathing, dizziness and sweating, but one can also have 'atypical' symptoms, such as a pain in one's stomach region, tightness in the chest, indigestion, anxiety and extreme fatigue, nausea and vomiting, or light-headedness. Most of these 'diffuse' symptoms are caused by sympathetic nervous system (fight or flight response) activation, but one should be aware that even if one has several of these symptoms with no chest pain, they could be caused by heart artery narrowing or an actual heart attack. One may think that anyone experiencing such worrying symptoms would rush to see their doctor, but this is not always the case. It's difficult to determine how many folks have early symptoms of heart artery blockage – notably angina – and don't respond to them, whether due to denial, fear, or lack of knowledge of what the symptoms may be a warning sign of, partly because so many folks die and therefore cannot give a prior history, and partly because those who live are embarrassed by their lack of earlier action in response to their symptoms. Men are particularly more prone than women to not seek help when having symptoms of serious illness, perhaps due to ego or other issues. My own history is a good case in point. A year or two before my heart attack, I started getting chest pains when cycling up big hills on my bike. At first, I thought they were due to chest muscle issues, and then when it was pretty clear (due to their repetitive nature and their striking presence in the left side of my chest) that it was angina, I got scared and refused to go to the doctor, even though I was paradoxically a doctor myself. I remember riding up a hill from the sea promenade in Durban to my good friend James Adrain's house higher up on the Berea hillside, and my chest pain became particularly sharp and radiated down my left arm, and while I was always much slower than him, I went at an even slower rate than usual to try and control the pain, and was relieved when I got home okay.
Like most men I had not gone for a health check for about twenty years prior to suffering a heart attack, even though I used to get massive migraines at least once a week, but when I took up a job in New Zealand after a year of having angina symptoms, I was forced to have a full medical examination as part of getting a work visa, and it was found that I had very high blood pressure, though 24-hour blood pressure monitoring found this was stress related and high in peaks during the day (which probably caused my migraines) and returned to normal at night. Because of this normal night-time blood pressure, we got into New Zealand okay, and we enjoyed settling down in that beautiful country. However, the angina pains continued each time I did sport, and then a few weeks before my heart attack, they worsened, suddenly occurring 'out of the blue' at rest, and becoming much more severe during any exercise. The day before my heart attack, I was driving home and drinking a cup of coffee in the car, and I had what must have been a first heart attack, with severe chest pain which would not go away, sweating, dizziness and all the other symptoms of early heart attack. I just made it home, and like a real fool did not tell my wife and went to lie on my bed, and was fortunate that the symptoms slowly went away. Curiously, my dog picked up that something was very wrong, as he normally disliked it when folk came too close to him, but when I was lying breathing rapidly on the bed, he came and sat right up near my head, and put his face next to my ear, in an obvious gesture of concern. Why did I, as a doctor in particular, not respond to all these obvious warning signs? Fear of dying, fear of operations, fear of being out of control, fear of any one of hundreds of things I guess, but I am still astonished at my own reaction, or lack of it, even though I know countless folk do similar, particularly men, and I guess I got very lucky that I eventually did respond the next day, and woke up my wife, resulting in the actions described above, which have led me to live a normal and happy life in the five years since the heart attack and stent operation.

The treatment of heart attacks (and angina) is relatively straight-forward. There is drug therapy – medications that can dilate the heart vessels and potentially help clear blocked vessels – but more importantly, there have in the last 50 years been fantastic developments in the surgical management of heart attacks, which have made some of the greatest 'jumps' in treatment of what was previously an incurable event. Up until the 1960s, if you had a heart attack, nothing could be done for you except to give you medications that would either dilate the heart vessels, or make the heart pump more strongly, but if one had a blocked artery, that artery stayed blocked, which almost always led to the heart muscle supplied by the blocked vessel being damaged and eventually dead, replaced by non-functional fibrotic tissue. This would cause the heart to pump abnormally, either due to damage to heart nerves which led to rhythm defects, or due to mechanical damage to the heart muscle which led to abnormal pump activity and eventually an enlarged heart as the uninvolved heart muscle tried to compensate for the dead muscle tissue, and heart failure ensued, where due to a reduced capacity to pump blood 'forwards' around the body, it built up in the lungs and lower limbs – causing breathing difficulties (with coughing up of frothy blood as part of them) and swollen ankles and abdomen. These folk were called cardiac 'cripples', and it was only when the first heart transplant occurred in the 1960s (in Cape Town, in the hospital where I trained, of all places, rather than the USA or Europe) that there was a 'cure' for cardiac failure as a result of heart attack, or other heart pathology. But heart transplants were problematic and needed a huge amount of medical skill, intervention and 'luck' for folks having them to survive for any prolonged period of time, and an even better curative surgical development was that of Coronary Artery Bypass Grafting, where healthy vessels were 'harvested' from other parts of the patient's body and 'grafted' (stitched) into the damaged heart artery on either side of the blockage, which allowed the blood to flow freely to the peripheral heart muscle beyond the blockage. This is still used today when scanning shows that several arteries are blocked at the same time, though for single heart blockages, coronary angioplasty (where the blockage is widened, usually by inflating a small balloon at the blockage site) and stent insertion (where a small mesh tube is placed across the previously blocked area so as to keep the vessel lumen clear and functional) are surgically performed. Astonishingly, this latter surgical intervention is performed with the patient awake, via a cut made in the artery in the lower arm / wrist region, and long 'bendy' wires are pushed through up into the heart vessels, with a tip that either opens up the blockage or carries the stent when it is inserted. I can't deny that this procedure was, to me, as terrifying as the heart attack itself, and when they cleared out the blockage, obviously for a short period the same heart artery was 'blocked' by the surgical wires, and it felt like I was having a heart attack again, with chest pains reminiscent of the heart attack itself. However, I felt fine immediately after the operation, was allowed home the next day, a week later was slowly going for walks, and two weeks later for slow bike rides.
In many ways, then, angioplasty and stent insertion are a miracle cure, and to me it is so sad to think of all those folk over the last few centuries who died as cardiac cripples, as I would have done, due to not having these amazing surgical procedures available at the time of their heart attacks.

Prevention of heart attacks is a challenging issue, but clearly the most important method to enhance survival rates is early diagnosis and treatment. Reducing body weight, eating healthily, performing regular exercise, and treating high blood pressure and other disorders of 'excess' such as diabetes mellitus appear to improve one's chances of not having a heart attack during one's life. But some very healthy-looking folks, who eat all the right things and exercise regularly, still die of heart attacks, and clearly there is a genetic component to it, as well as factors that are still to be discovered. My father was one year older than me when he had a similar heart attack, though while I had only one vessel affected, he had three, and needed a 'triple heart bypass' operation to make him healthy again. While neither of us ate as healthily as we would have liked to, and carried too much weight, clearly it is not extreme to say something genetic got passed down from him to me that predisposed me, like him, to having blocked heart arteries at about the same age (in his case, as it goes, he died in his early seventies not from heart disease, but of cancer, so his bypass procedures lasted for twenty years and longer). In a cohort with clear genetic causation, folks with familial hypercholesterolaemia have very high cholesterol levels, a high rate of blockages of the heart arteries, and related heart attacks.

Diagnosing an acute heart attack is relatively easy with things like the ECG, echocardiography, coronary angiograms (I still have my pre-op and post-op angiograms, which my treating cardiologist kindly gave me copies of) and other advanced diagnostic devices. But when folk come to a GP with chest pains, it is quite common for these to be caused by a variety of pathologies, and even anxiety can lead to severe chest pain. I remember a friend from kayaking days who was under a lot of stress and had crushing chest pains when getting money out of an ATM, collapsed to the ground sweating and gasping for breath, and went to hospital in an ambulance, only to find there was absolutely nothing wrong with his heart, and to leave with a discharge diagnosis of an anxiety-related panic attack. Having said that, prolonged anxiety is a cause of, or at least a predisposing factor for, heart attacks, due to the effect it has on blood pressure and the sympathetic nervous system, and in a number of folks these days, the management of a heart attack includes long-term anxiety treatment to go with the routine aspirin, statins and other drugs all patients are put on when leaving hospital. In women in particular, there is occasionally no chest pain at all, with the patient rather complaining of shortness of breath as their main symptom of heart attack. Why this is, is currently unclear, but it does make treating early warning signs of heart vessel damage challenging, particularly given the high cost of diagnostic interventions like angiograms, and the time requirements of stress-ECG and even resting ECG testing. 'It is perhaps better to be over-cautious and check things early' is now my mantra, but heck, who am I to say such, when I myself ignored the symptoms I had for so long, and so recklessly, given I had a young family that depended on me for their safekeeping.

One thing that is not often described as part of a heart attack is the psychological effect of having one and surviving. Chris Barnard, who performed the first heart transplant in Groote Schuur Hospital as described above, famously declared that the heart is just a pump. The reason for him saying this is perhaps because in society folk think of the heart as being more than a pump, and he wanted to demystify the heart for ethical reasons so as to allow him to do the first transplants. For many folks, for a long time in the past, the heart was thought to be the centre of a person's soul. The heart was also thought to be the centre of one's emotions (think 'heart vs head' in so many debates on whether to go with one's emotions or whether to be 'rational' when making a decision), and is currently still the symbol for love, peace and all things good. Bravery is also related to the heart – think Richard the Lionheart, or the film Braveheart, where the character William Wallace, played by Mel Gibson, does things which require astonishing feats of courage – as also is love lost, or when someone has lost something or someone dear to them, where folks describe themselves as being 'broken-hearted'. So, the heart is a 'big thing' in even modern society (even if the brain would argue that it should occupy the most exalted human organ space!), and if it physically 'breaks', that is a big thing for those it happens to. There is a high incidence of depression and anxiety in those that survive heart attacks, and even cases of post-traumatic stress disorder, as well as survivor guilt. I can't deny that for several months I was 'knocked sideways' by having had a heart attack; particularly as an old 'sport-person' who saw myself as being physically strong and usually successful in all I tried to do, having a heart attack exposed something that was a 'weakness' in me of which I was not previously aware, or at least that is how it felt for me at the time. For a short period after my heart attack, I experienced a lack of confidence, and a concern for what the future held, and what my longevity would be before the next heart attack. It did not help that for the first time in my life things had not gone well in my job in New Zealand (this may have indeed contributed to having a heart attack), and in the months after my heart attack I had to cut my losses with the work role I had there, and our family made the decision to return to the UK. I was fast moving up the academic leadership ladder and was hoping for my next role to be one leading or in a leadership team at a university (a role for which I felt very equipped, and still do), but my cardiologist advised me to 'step down the work ladder' for a few years until we got control of my blood pressure, and my post-heart attack anxiety had passed. It took me a few years to come 'right' completely, as I now feel I am, but I will for life be on aspirin and statins, and medications for my high blood pressure, which is perhaps a good thing, and if I had gone for more check-ups with my GP before my heart attack, I would probably have been put on some of these medications far earlier than the time when I had my heart attack.
I would like to say that I ate healthily and lived well after my heart attack, but that was another curious thing – for a few years after the heart attack I had something of a 'who cares what I eat, drink or do' attitude and did whatever I felt like doing at the time, and it has only been in the last year or two that I have started to 'feel' that I have a 'long term future' again, and am again counting calories, exercising as much as possible, and trying to cut back on my beloved Jameson's Irish Whiskey. Apparently, this is not an unusual response after having a heart attack or other near-death experience, but it did take me by surprise when I realised this is what I was doing. There is thus a whole lot more we need to know and do for 'survivors' of a heart attack, and while the heart may just be a pump, as Chris Barnard suggested, it is still thought of as a 'venerated' organ in even modern societies, and real damage to it leads to interesting psychological effects, which perhaps need more research to understand how and why they happen, and how best to treat them.

As I sit in my person cave / garden shed / home office and write this piece, on a very sunny and warm mid-summer day, I hear birds chirping around me, feel a faint breeze cooling me somewhat as I write, and see the beautiful trees and flowers in the garden through the window and open door, and it feels good to be alive. I think of my wonderful family – great wife, two super teenage kids, three faithful hounds and three squawking chickens – all the large number of friends I have made over the years all around the world, and all the great colleagues I have worked with or attended school and university with, all of which make up the rich mosaic of life with which I live daily. I thank the spirits, fate, destiny, or whatever it was which kept me alive through those harrowing few hours when I thought it all over, and allowed me more years on this great place called earth, living this great and mysterious thing called life. When everything started going dark as I was lying on my bed waiting for the ambulance, bathed in sweat, with a crushing chest pain and struggling to breathe, I did not think of those I hated, or those whom I perceived had done me wrong to date, or of the ongoing feuds with folk that are part and parcel of most folks' lives as in mine. Rather, I thought as hard as I could of my family and friends, and all the good things that had happened in my life – kayaking down rivers in South Africa, my medical training years, visiting many places round the world as an academic, the joy of writing, reading and learning things in my garden shed, and above all, the great simple times I have had with my family and friends. If I had not returned to the light and life after the short period of darkness which occurred in the acute phase of my heart attack, these would have been the last thoughts I would have had before moving on to the next life, or to the nothingness of no existence which may perhaps be our dying fate, as similar thoughts must have been for all the folks who died suddenly of a heart attack, or indeed for anyone in the last moments of their life, whatever the pathology, accident or disease which ended it. Recently, now that some time has passed since my heart attack, I have thought that it would not be the worst way to die, compared to some of the long, painful deaths such as cancer, or heart failure, or many other pathologies which kill one slowly but surely. A sharp chest pain, a short struggle to breathe, then everything quickly going dark, then the end of everything one has been, or will ever be. Perhaps so. But please, whatever spirit, goblin, deity, holy spirit, or brooding devil which decides these things, can I have a few more years smelling the roses, wondering at the beauty of life, enjoying my friends and family, and hearing my wonderful hounds barking joyously at the gate as I arrive home, before my heart breaks again, not just metaphorically, but physically enough to send me shuffling off this mortal coil into the next world or into the deep, dark abyss of nothingness. Pump, please keep that blood pumping long into the future, and if you do, I promise I will be better to you than I was in the past!


The Fast-Moving Crab Both Is You And Destroys You – The Always Dreaded Diagnosis Of A Malignant Cancer Destroys All Your Dreams And Plans, And Changes Your Life Forever

A lot of folk wonder whether social media, with its rapid growth and development to become in contemporary times almost part and parcel of our daily life, is a good or a bad thing. While this argument will doubtless continue for many years further, one positive change associated with social media such as Twitter is that it allows the previously 'unseen masses' of society, whatever their age, background, intellect, or social standing, to have their say about anything they feel, or write a narrative tweet or series of tweets about their own life, and even if they only have ten followers, they all have an audience. One thing I have noticed more often recently is folks describing in detail on Twitter when they are not well, and what the cause of their illness is, from testing positive for Covid to coming 'clean' after an alcohol addiction. It's also noticeable that these health-related posts get a lot of likes and encouraging tweets when they are posted. Particularly poignant are the announcements on Twitter of folks who have had a cancer diagnosis and who announce this publicly to their followers, and then continue to tweet further over the next few months or years chronicling their small triumphs and tragedies, or describing how they are 'fighting' their 'body invader' and will not be 'defeated by it'. Sadly, a lot of these Twitter 'life stories' end up with a note posted to their Twitter account from a partner or family member, saying that the person who usually wrote the tweets had died, as the final post of these tragic tales. Cancer has always been a subject of interest and concern to me, both as a person from a family with a history of cancer who has young children and a desire for more life to be lived, and as a medically trained doctor and scientist. As a teenage boy, one of my most vivid memories was of my father telling me the story of how his own father had died. He described how my grandfather had been driving into town – Durban in South Africa – with my father, who was in his early twenties, when he suddenly started vomiting up blood in the car. As my father told it, they only had handkerchiefs to clean up the vomited blood, and after doing so as best possible, he swopped positions with his father in the car and drove him to the nearest hospital. Sadly, medical tests showed my grandfather had terminal stomach cancer, and he died not long after this vomiting event in the car. What I remembered was the emotion and feeling my father had when telling the story, and it was clear that both the shock of the experience of having the driver of the car he was in suddenly vomiting blood, and the knowledge that this person was his father and was seriously sick, had affected him for many years afterwards. Around twelve years ago my father, who lived six months of the year in the UK and six months in South Africa, boarded a plane to start a routine block of time living in South Africa. He had been feeling unwell for a few weeks before leaving, complaining of a nagging chest cold and a feeling of being exhausted all day, and a visit to his local doctor had resulted in a diagnosis of a chest infection for which he was sent home with antibiotics. On the plane he started coughing up bloody sputum, and this frightening development continued for the rest of the flight and after he had landed in Durban.
He was immediately taken by my mother and siblings to hospital, and his chest scans showed the dreaded signs of late-stage metastatic cancer – big 'cannonball' lesions in both his lungs, and I think in his liver too – of a tumour that was finally diagnosed as being a melanoma (skin cancer), of which the primary site was never found. He went on a course of chemotherapy and what was then a new form of cancer treatment called immunotherapy, and these treatments made him feel frightful – nauseous, night sweats, itchy skin and other terrible side effects – but the treatments had no discernible effect on his cancer trajectory, and he died a few months later after a terrible time leading up to his death. I therefore clearly have cancer 'in the family', and there is a high chance I will one day get a similar dreaded cancer diagnosis that will cause me to 'shuffle off this mortal coil' in a painful and terrible way, unless a heart attack gets me first. In this article therefore we will examine cancer as an entity, whether it can ever be 'defeated', and how it affects those it strikes, like a crab 'locking its claws into the tissue of its victim' with a deadly and ever-tightening grip.

In the actions of the physical components of the body, which are necessary for maintaining life processes and functions, there are two things that are miraculous to me. The first is that, from a genetic code perspective, one's body grows by multiplying cells controlled by a genetic 'map', which over the years of childhood and adolescence increase in number and type until eventually a fully functional adult human is created, with a body that is astonishingly complex and with many different organs and tissues, and at an even deeper level trillions of cells operating synchronously, both spatially and temporally, in order to maintain physiological function and keep us 'alive and ticking' in a manner which occurs in a similar way in all of us humans. The second miraculous event is that the genetic code and cellular structures know when to stop growing, and cells know when to stop dividing, when the genetic 'map' is complete in the physical realm and no further growth is needed. This is controlled by a still-not-well-understood complex system of negative feedback mechanisms, either from suppressor genes which 'switch off' gene function, or by protein structures made by the genes themselves which similarly 'switch off' genes or reduce the expression of genes (i.e., stop them producing more proteins) when activated as part of an 'epigenetic' feedback loop (basically, non-genetic structures acting on genes to attenuate the production of further non-genetic structures by the genes). When a gene gets damaged, there are also protein structures – DNA repair enzymes – which either 'fix' the damaged gene, or trigger 'senescence' (a permanent shutdown of division in the cell carrying the faulty gene) or the programmed death of that cell. All these control mechanisms keep us well and in a steady 'homeostatic' state of optimal existence, which enables us to live our lives both healthy and cancer-free.

Unfortunately, when a cancer 'begins', something in one of these 'suppressive' mechanisms goes wrong, and the gene that is 'reactivated' by the lack of suppressive activity starts creating protein material at increasing speed, and eventually a visible growth of the particular tissue activated by the faulty gene becomes evident, and causes pressure effects on surrounding tissues, which often results in the first symptoms one 'feels' as a result of a cancerous growth; or, by a still not well-understood process, the growth starts damaging and destroying cells around it by releasing enzymes or other activating mechanisms that cause cellular damage around and in the tumour tissue (a growing tumour that is well circumscribed is usually 'benign' and can only kill one by pressure effects, while if a tumour starts scavenging and damaging surrounding tissue it is generally called 'malignant', and is clearly a worse situation and worse diagnosis to have). Cancers are defined by the type of tissue which is multiplying abnormally – for example when it is epithelial cells (those cells that line the organs of the body) it is called a carcinoma, when lymph gland cells it is called a lymphoma, and when bone or the connective tissue cells surrounding bone, it is called an osteosarcoma. If it is found in a specific organ, that organ's name is added to the name – for example a lung carcinoma is a cancer of the epithelial cells of the lung, and a hepatocarcinoma is a cancer of the epithelial cells of the liver. For reasons that are still obscure to us, different cancers develop at different rates and kill folk more quickly or more slowly – for example pancreatic and liver cancer are particularly speedy and kill quickly, whereas prostate cancer takes years, even decades to develop, and several folks with prostate cancer live to an age where other age-related illnesses kill them. Similarly, some cancers respond better to treatment than others – for example a testicular cancer has a relatively high success rate from treatment, whereas lung cancer and melanomas, once they have metastasized, seem to have very low response rates to treatment. Having said that, it appears also to depend on the individual tumour itself, with some folk lasting several weeks, while others last several months or years, with the same diagnosis and cancer type and stage.

The first million-dollar question associated with cancer (the second being how to treat it) is what causes the gene to mutate or the suppressor functions to 'go wrong' and begin the terrible process of abnormal cell division, malignant growth into surrounding tissue and eventually the death of both the surrounding organs and, ultimately, the human being themselves. Many different causes have been suggested. Radiation of cells has been suggested to be a major factor in causality, given the number of folks who developed cancer after the atom bomb explosions in Japan during World War 2, or folks at Chernobyl and Fukushima and other places which have had nuclear disasters and subsequent radiation-induced damage to folks living in surrounding areas. Toxic agents like tobacco and smoke inhalation are also thought to be carcinogens (cancer-causing substances or agents). Asbestos in building structures has been linked to cancer, as has exposure to Teflon-coated products. Infective agents have also been linked to cancer, with several oncoviruses (viruses causing cancer), including human papilloma virus (predominantly cervical cancer), Epstein-Barr virus, Hepatitis B or C, and many others, linked to cancer development, as has infection with parasites such as Schistosoma – the cause of bilharzia – which has been suggested to cause carcinoma of the bladder. Many types of food have been thought to be linked to cancer – for example eating betel nuts being linked to mouth carcinoma – and whole-population studies of countries whose people ingest country-specific diets, such as the Japanese who have a high salt and high fish and rice diet, show them to have a higher incidence of stomach cancer, while folks in the USA, who have more red meat in their diet, have more bowel cancer than the Japanese, though in contemporary times, as Japan changes to a more Western diet, these differences are becoming less. Interestingly, folks who immigrate to a new country tend to pick up the same levels of cancer types prevalent in their new country rather than in their old country within a generation of living there. I have to say I am a great believer in Occam's razor in science, which suggests that the simplest answer to any question is usually the best, and when there are so many suggested causes of cancer initiation, I do believe this means we have no real idea of exactly what triggers cancer development definitively (at best these data show that many factors could be involved), and research done by future generations will give us better answers than we have now, and perhaps also provide us more peace of mind by reducing the things we currently worry about as being potentially causative of cancer, such as those described above.

Whatever its cause, the most important factor in achieving a beneficial outcome after a cancer diagnosis is identifying the tumour as early as possible. A major problem though for early identification is that most symptoms of cancer are generic (they could be caused by several different illnesses and pathologies), and by the time one has specific symptoms, the cancer has become late stage and thus more difficult to treat successfully. Paradoxically also, because most folk are so scared of having a cancer diagnosis, they put off getting a check-up on worrying symptoms or signs, and go for a medical check-up when it is too late, or when a major medical emergency results from the primary tumour or from one of its metastatic sites, such as a seizure from a brain tumour, or like my father, coughing up blood on an aeroplane. The early signs are often nebulous – again, in my father a cough that wouldn't go away, in bowel cancer weight loss and bowel habit changes which are ignored as being caused by old age or stress, or in prostate cancer signs of changes in urinary habits, again, that are often put into the 'I'm getting old' category of causality and ignored until too late. Often cancers are picked up at routine checks or when one needs a medical certificate of wellness for travelling or moving to another country, or other such unexpected occasions. In young folk and children, it is even harder to diagnose early, as they have so many aches and pains which are generally regarded as 'growing pains', and children are often sick from respiratory and gastro-intestinal infections, which also confuse diagnosis. When a diagnosis of cancer is suspected, a medical practitioner will look not just at the local symptoms, but will do whole-body scans or a wide variety of blood tests, to see whether it has spread around the body, and the cancer is 'staged', with stage one being very localized and easier to treat, and stage four being very advanced and spread to several body areas, and obviously stage four has a far worse prognosis for successful treatment than stage one – but again, as so few folk go to the doctor with mild symptoms, it is sadly often the case that cancer is picked up 'too late' / at a stage three or four level. We will discuss cancer treatment and its problems shortly, but sadly, while there has been some improvement in cancer survival rates over the last fifty years, most of this improvement has been due to earlier diagnosis and more energetic screening programs, rather than due to the successes of the treatments currently available themselves.

The mainstay of treatment of nearly all tumours (except of course blood-related tumours) is surgery. Particularly if one has a stage one tumour which is well circumscribed and where there is no evidence of tumour metastasis, one has a good chance of surgical removal of the entire tumour (usually incorporating some healthy tissue around the tumour as a precaution), resulting in a successful 'cure', at least in the short term. Sadly though, cancer is the Latin word for 'crab', and cancer is named such because like a crab it is often not round and well circumscribed, but rather has 'claws' of malignant tissue, invisible to the naked eye, reaching into surrounding tissue, or spreading to lymph nodes, or to areas further away, and surgery may miss some of the cancer, with the predictable disaster of the tumour growing back at some time point after the initial operation. Cancers also, for reasons still not identified, cause the growth and enlargement of blood vessels into their centre (and have increased levels of glucose metabolism which occurs via unusual metabolic breakdown pathways – known as the Warburg effect), which again makes them look like a 'crab', and sadly increases the risk of the cancer cells breaking through into the blood stream and moving off into other areas of the body as metastatic 'islands' of tumours, which can grow big themselves in the secondary tissues or organs they lodge in (the second main way metastasis happens is via the lymph nodes). I will never forget, when I was a final year medical student at the University of Cape Town, watching a breast tumour removal at Groote Schuur hospital performed by one of the world's leading breast cancer surgeons, Professor David Dent, on a patient from a rural area of South Africa who had ignored a breast lump, and when David operated to remove it, the cancer was the size of a cricket ball, if not larger. What struck me most though, and which has been the source of several nightmares I have had, was that there were huge blood vessels coming out of the cancer centre (or going in), and David had to spend more hours resecting these vessels (cutting and tying them off so they did not keep bleeding) than he did removing the tumour. Even as one of the best surgeons in the world, he could remove the tumour perhaps better than most other surgeons could, but even he could not save the patient, as sadly she also had metastatic lesions all over her body emanating from the breast cancer, and was sent home for palliative rather than curative care a few weeks later.

The other main types of treatment are radiotherapy and chemotherapy, and in more recent times, immunotherapy (stimulating the patient's own immune system to 'fight' the cancer) has come to be used as a treatment in some tumours. But, sadly, radiotherapy and chemotherapy, which use either radiation to 'burn out' the tumour, or drugs which are toxic to the cancer tissue, have the problem of being somewhat 'blunt' treatment regimens, and often damage normal tissue, and have many negative side effects themselves, as described above in the first paragraph regarding my father's cancer treatment and death. They do seem to work better in some cancers compared to others, but as described above, it is not clear how much their use has improved cancer treatment success, beyond that generated by early diagnosis via improved screening methods. I have to be honest and say at this point in time, when I am well, hale and hearty, and having learnt a lot about these treatments, if I had a late-stage cancer diagnosis, I am not sure I would sign up to have either radiotherapy or chemotherapy like my father did, due to their major side-effects and because they may give one at best a few months more time alive by reducing the size of the tumour, but those few months extra will be lived with all those negative side effects associated with their use. But, if one of my children or my wife had a cancer diagnosis, and a good doctor I trusted said that chemotherapy or radiotherapy would help them / give them a chance, I probably would not be so bold (or foolish) and would allow the doctors to give these treatments a try, even if the data on their success rates (or rather lack of success rates) is to me appalling. I am sure most folk don't share my negative perception of chemotherapy and radiotherapy, and one must remember that folk like the world champion cyclist Lance Armstrong, who was diagnosed with a stage four cancer with metastases in his brain, liver and other organs, survived after being treated with a combination of surgery, chemotherapy and radiotherapy, and more than fifteen years after his initial diagnosis is still cancer free – though of course he had testicular cancer, which as I have said above, seems to 'respond' better to the available treatment regimens than other types.

One of the saddest issues related to folks who develop cancer is that they are often 'preyed upon' by 'quacks' selling strange supposed 'cures' for their cancer, and given that they have such a strong desire to live, they often accept these offers of assistance, and pay large sums of money for treatments with no benefit whatsoever, and even use these in place of more conventional radiotherapy or chemotherapy, and even occasionally surgical management. A case in point was the actor Steve McQueen, who was diagnosed with a mesothelioma of the lining of the lung with secondary tumour spread to his abdomen and other parts of his body, and who refused conventional treatment and was treated by a 'quack' using bizarre interventions like coffee enemas, frequent shampooing of his body, injection of the 'live' cells of sheep and cows into his body, and other bizarre treatments, for which he paid thousands of dollars each month. The 'quack' boasted he would be cured shortly, but of course these 'treatments' had no effect whatsoever on the cancer, and McQueen's primary and secondary tumours grew rapidly and he died not long after receiving these treatments. There are also a number of 'quacks' littered across social media who suggest that they have come up with either a diet, or way of life, or some strange therapy, that if used by healthy folk on a continuous basis, will prevent the subscriber who uses their products from developing cancer. There is sadly no real evidence for any diet or way of life being more protective than others (apart from stopping smoking), and just about every food type has at one time been linked to cancer, or paradoxically, has been punted as being curative of it. Fear of death, and paranoia regarding the potential causative effects of daily life on cancer genesis, probably drive folks to take up these advertised 'anti-cancer' lifestyle choices, which are costly and generally demand a high level of asceticism, but sadly have no real proven benefits in nearly all cases. Sadly, it is very difficult to stop these charlatans and quacks from 'plying their trade' on a group of folk who are prone to believe them due to the challenging and terrifying situation they find themselves in when receiving a cancer diagnosis, and one can only be hopeful that in the future, legislation can be passed to prevent the activities of these pathological purveyors of nonsense; but given, as described above, that the traditional methods of attempting to cure cancer are so tenuous and the statistics of cure so bad, there will always be believers in their nonsense among cancer sufferers who are looking for anything that will give them hope.

Another unfortunate way folks with cancer are 'preyed upon', very sadly, is by some cancer specialists, who inflate the price of drugs needed to 'treat' cancer, and then encourage folk, even if they have cancers with little responsiveness to any treatment, such as melanomas and pancreatic cancers, to use these inflated-cost drugs, knowing that the folks with cancer will do anything, and pay anything, to be given a chance to live longer or 'beat' the cancer and re-attain their healthy state. Again, very sadly, parents with children diagnosed with cancer are another group who are 'easy prey' for avaricious cancer specialists who offer a cocktail of drugs and therapies at inflated prices. There is not much that can be done to attenuate what is to me terrible practice from the medical practitioners involved, as while it is surely immoral, it is not illegal, other than to be sure to ask the treating physician whether the treatment they suggest is the lowest-priced one, or whether another drug of similar efficacy but lower cost is available, even though one may worry that one is asking for inferior treatment. With all the above examples, it is obvious that there are challenges facing folks recently diagnosed with cancer beyond just managing the cancer diagnosis itself, such as making sure one accepts help only from folks who have one's best interests at heart, and avoiding the charlatans and sociopaths who cross your path and try to make a profit out of your misery and need to feel hope.

Whatever the cause, or type, or stage, of cancer, being diagnosed with it is one of the most challenging things a person can have to deal with in their life. Even waiting for the results of cancer scans to come back can cause dramatic psychological upset, fear and anxiety, and a diagnosis of cancer for most folk is truly a life-changing event, with everything one planned for in the future being 'thrown out the window', and suddenly one has to deal with one's own mortality as a real rather than 'hypothetical but unimaginable' future event. There are very high rates of depression and an increased incidence of suicide in those diagnosed with cancer. Folk can go through the Kübler-Ross stages after being given a cancer diagnosis – denial, anger, bargaining, depression, acceptance – but the progression between these is not always linear, and for most folk, life post-diagnosis is, from a mood perspective, very much a fluctuating 'event', given that there are periods of hope, periods of great illness when having chemotherapy and radiotherapy, and periods of despair when told treatments are not working as they should, and even when someone is one of the fortunate folk who get told they are 'all clear' of the cancer, sadly it returns in so many folk that one can never feel that one is completely 'cured', and folk in remission always know that it could return, and have high levels of anxiety about every symptom they feel or sign they see in their body which may indicate their cancer is returning. An interesting psychological occurrence is that most folk tend to externalize the cancer as something that has 'attacked them from outside' and that is not of their body, perhaps because it is difficult to 'declare war' on one's own body when it has let one down so badly as it has when cancer develops from and in one's own organs. Sadly, most folk, whether a few months later like my father, or several years later for folk with prostate cancer, will be told by their doctor that nothing more can be done to treat their cancer, and that their care going forward will be palliative – relieving pain and complications of the cancer, rather than trying to cure the cancer itself – and that such folks need to prepare themselves for their imminent death as best they can. As it goes, with good palliative care, the end stage and dying process can become relatively positive, given one can give morphine / painkillers as needed, and ensure the last days of the cancer-stricken person are as easy as possible as they say goodbye to their close ones and to their own life itself, though of course, even if folk die bravely and pain-free, nobody wants to die, and it is a shattering process both for the person dying, and perhaps even more so for those who love them and have to watch them die.

Due to a family feud, I did not see my mother for seven years, until two weeks before she too died of cancer in her mid-seventies. I flew across from New Zealand, where I was working then, to Durban, had a final cup of tea with her, and we cried together, and I said my goodbye to her and left, never to see her again. She was so thin and delicate, with a belly swollen tight with fluid due to secondaries in her liver and abdomen, and each word she spoke was difficult for her to get out – the cancer destroyed her physically and she was a shell of a human being when she died. I remember also the last time I saw my father twelve years ago – he gave us a huge wave and his customary 'yay for the good guys' shout as we drove away from his house in Durban, trying to stay positive even though he was gaunt, pale and sweating from the treatment cycle he was on at the time. I remember being called when he died and thinking both how sad it was, yet feeling relieved that he was peaceful again, after the terrible last few months when he was 'fighting' the cancer that killed him. Given my family history, I am sure I too have this terrible diagnosis and end-of-life 'program' ahead of me (though of course, while no way of dying is pleasant, there are to me far 'worse' ways to die than by cancer; for example Locked-In syndrome, where one's whole body is paralysed yet one's mind is fully aware to the end, is to me the absolute 'worst' way to die). When I was young, this knowledge used to make me wake up at night sweating with fear, but paradoxically the older I get the more I seem to have come to terms with it, and I await my outcome and final death diagnosis, whatever and whenever it will be, with a greater degree of stoicism. However, having said this, even with my experience of being a clinician and researcher myself, I still occasionally don't go to the local GP for a check-up even if I have symptoms that bother me, so at a deeper level I am surely still scared of a cancer diagnosis, even though perhaps not as much as when younger. As described above, even though medically trained, I must be honest and say I would like to think I would refuse chemotherapy and radiotherapy if diagnosed at a late cancer stage, and use the last bit of time to enjoy life as much as possible with my family and friends, until the final day of my life arrives and I take the journey into eternal darkness, but let's see what happens and what decisions I make when a bad cancer prognosis becomes my own fate. As a medical student we had a lecture on cancer given by an old German doctor, who said that if you have a patient with stage four terminal cancer, why stab them with needles each day to check on the status of their blood parameters, causing them a lot of pain in the process, only to potentially save them for at best an extra month or two – rather send them home to plant the next crop of corn, and get things ready for life's next cycle (meaning get one's affairs in order for one's children and their future) – and I have never forgotten this, and hope when my time comes I will do such. For many of us the crab is floating in the shallow waters just beyond where our toes are dipped in the water, and if or when it bites, it is surely the start of a turbulent journey across the bay to the land on the other side…


Anorexia Nervosa And The Eating Disorders – A Tragedy Of Faulty Mirrors, Control Or The Lack Of It, And A Walk Back To The Abyss

I was in a gym last week and noticed an emaciated woman running on a treadmill, who was so thin that individual muscles and bones were visible in the exposed flesh around her gym clothes. I wondered if, wearing my clinical hat, I should speak with the gym staff about her, given it was clear she either had a chronic disease that caused profound secondary weight loss, or she had an eating disorder, most likely anorexia nervosa. I have noted with concern when watching cycling races how thin elite professional cyclists are during long stage races, and was interested when reading the autobiography of one of the world's best cyclists that they believed that both they themselves and several of their cycling colleagues would probably satisfy the criteria for a full-blown eating disorder diagnosis, and that they had battled with food ingestion both during their career, and even after they had stopped being competitive. As a teenager I was sent to boarding school, and didn't settle well into the strict routine- and rules-based environment that boarding schools require in order to function, and I stopped eating as a 'silent protest' to draw attention to my dislike of my environment. I eventually cut my weight to the point of having almost zero body fat, and folk wondered if I had a bona fide eating disorder, though fortunately when my parents accepted that I could not continue at the boarding school in my state of refusing to eat and resultant massive weight loss, they took me out and put me into a day school, and almost immediately I started eating again, my weight normalized, and the problem was pretty much resolved for me. Seeing the lady on the treadmill, reading the book on the eating travails of the elite cyclist, and reflecting on my own weight reduction story of my youth, got me thinking about anorexia nervosa, what causes it, and why some folk both start to refuse to eat food, and continue to do so, even if it causes them to literally starve themselves to death in an environment of plenty, and with so many of their loved ones around them willing them to eat normally, put on weight, and live a 'normal' life as they apparently used to do.

The symptoms and signs of anorexia nervosa were first described in medical texts as early as the 1600s, and the condition was termed anorexia nervosa in the late 1800s. The term is Greek in origin, with 'an' describing negation and 'orexis' describing appetite – so literally a psychological negation of appetite. Its classical symptom is obviously food restriction resulting in rapid weight loss, and it can be accompanied by compulsive behaviour such as excessive exercise (in order to use up calories and thereby lose weight), a paradoxical preoccupation with food, recipes, or cooking food for others which is not consumed by themselves, food rituals such as cutting food into small pieces and not eating it, refusing to eat around others or hiding and discarding food, and purging with laxatives, diet pills, or self-induced vomiting in order to attenuate the effect of eating any food whatsoever, no matter how small the portion (to note, these purging actions also occur in its 'cousin' disorder, bulimia nervosa, but there is usually not the marked food restriction in bulimia nervosa, and weight loss may not be evident in folk suffering from bulimia nervosa). There are a number of other signs which are diagnostic of anorexia nervosa, including a low body weight for one's height (a low body mass index), amenorrhea in females, the development of lanugo (fine, soft hair growing over the face and body), intolerance to cold, halitosis (bad breath), orthostatic hypotension (a drop in blood pressure when standing up), chronic fatigue, and changes in heart rate (either slowing down or speeding up). But most of these may be related to the chronic and extreme weight loss, rather than to anorexia nervosa per se. Clinicians have to be very cautious before diagnosing anorexia nervosa, to be sure to exclude a wide variety of clinical disorders that can lead to profound weight loss, including cancer, type 1 diabetes, thyroid hormone disorders, and a host of other clinical conditions. Anorexia nervosa is thought to occur in approximately 1-4 percent of females, and 0.5 percent of males, and often begins during the teenage or young adulthood years.

There is much debate still about what causes anorexia nervosa. There has been an increased incidence of the diagnosis of anorexia nervosa in the last 50 or so years, and this increase has been correlated with an increase in social pressures, particularly on females, but more recently on males too, for the 'ultimate body', with most cultures increasingly favouring a slender shape and the 'size zero' model, where clothes and fashion are displayed on waif-like models. This is theorized to put pressure on folk to be thinner than is possible for the vast majority of people. But, correlation is not causation, and the counter-argument to this social theory would be that 96% of women and 99.5% of men see similar fashion images and models and do not develop anorexia nervosa. There is a strong familial link to it, with twins and first-degree relatives of someone diagnosed with anorexia nervosa having a significantly higher chance of developing the disorder. It has also been suggested that the prevalence of anorexia nervosa is higher in athletes doing sports that require weight control, such as gymnasts, runners and cyclists, and in those folk whose careers similarly require weight regulation, such as ballet dancers and jockeys. It has also been suggested that folk with gastrointestinal disorders such as inflammatory bowel disorder and coeliac disease may have a higher prevalence of anorexia nervosa, due to the increased requirement to be 'aware' of what food types are ingested if suffering from either of these challenging gastro-intestinal disorders, where indeed eating any food whatsoever may initiate their symptoms. There has been an increase in interest in 'extreme' diets such as the keto, carnivore, and vegan diets, amongst others, and it has been suggested that engaging with such diets may precipitate the development of anorexia nervosa, or indeed be a 'mask' for those with eating disorders to 'hide' behind as a label that would allow them to explain their weight loss and extreme thinness in a way that is more socially acceptable than telling those around them that they have anorexia nervosa, or allow folk suffering from the disorder to feel part of a group of similarly food-conscious folk.

Anorexia nervosa would be easy to diagnose, treat and manage if the disorder was as simple as that described in the paragraphs above. But a major confounding issue is that a high percentage of folk who suffer from it deny having anything wrong with themselves, deny having an eating disorder, and some resist being treated to the point of needing to be restrained and force-fed to keep them alive. It sounds terrible that folk have to be force-fed against their wishes (and some doctors have an ethical problem doing so), but unfortunately anorexia nervosa has the highest mortality rate of any psychiatric or psychological disorder, around 10-12 times that of the general population, with the risk of suicide being 50 times higher. Anorexia nervosa sufferers literally starve themselves to death, or commit suicide while doing so, and there is a high relapse rate, with only half of the folk who have it ever ‘recovering’, and the rest relapsing, or the disorder becoming chronic and a permanent ‘way of life’ until death intervenes. So something is clearly desperately ‘wrong’ in these folk, who either know about it and acknowledge it, know about it and don’t acknowledge it, or do not know that they have the disorder and perceive themselves to be well – clearly the latter group being the most challenging to treat, though all three groups require major psychological assistance and intervention. Anorexia nervosa is classified under Feeding and Eating Disorders in the ‘bible’ / official manual of psychological disorders (known as the Diagnostic and Statistical Manual of Mental Disorders – DSM-5), but there is a high prevalence of other associated psychological disorders, including obsessive-compulsive disorder and obsessive-compulsive personality disorder, anxiety disorder, and depression. An array of other psychological disorders have also been linked to anorexia nervosa, including borderline personality disorder, attention deficit hyperactivity disorder, autism spectrum disorders, and body dysmorphic disorder, and while some of these linked disorders require further research to understand their prevalence and linkage to anorexia nervosa, it is thought that having these comorbidities worsens the prognosis for folk suffering with florid anorexia nervosa.

With all of these challenging psychopathological and comorbidity factors, three key issues appear to be fundamental to anorexia nervosa. The first is precipitatory factors, the second a loss of interoceptive (body state) awareness, and the third a perception of loss of control in folk with the disorder. It is thought that a stressful incident in one’s past life, a change in circumstances for an individual already predisposed to develop the disorder, or being involved in a sport which requires weight regulation, can precipitate the development of anorexia nervosa in a susceptible individual. Sadly, a number of folk who develop anorexia nervosa have a history of childhood trauma, including abuse, parental divorce, or a conflict-filled environment, and a study published last year found a twenty-five percent incidence of childhood sexual abuse reported as occurring before the onset of anorexia nervosa. Equally, a change of environment that is challenging, such as moving geographically, or going to boarding school, or the death of a parent or sibling or loved one, or being teased about one’s body shape in childhood or adolescence, may also be a precipitatory factor. While it is difficult to prove direct linkage as a response to these psychologically ‘shattering’ events, what appears to happen as a result of these traumatic challenges is that a process of ‘disembodiment’ occurs (also described as ‘interoceptive loss’), where one’s body image is altered, or one no longer ‘recognizes’ one’s current body image, perhaps as a way of ‘denying’ the trauma that was done to it, such as would have occurred as a result of being sexually abused, for instance. It has also been suggested that folk with anorexia nervosa undergo a ‘loss of emotional self’, where one no longer recognizes one’s emotions and feelings, in a similar way, and for similar reasons, to the way one no longer recognizes one’s physical body. Lonnie Athens, one of my most admired Psychology researchers, has suggested that one cannot have too weak a sense of self, as one would not then be able to maintain a stable sense of self-identity. However, he suggested that a profound change, such as being the victim of a violent episode, divorce, loss of a job, or some other profound experience, for which the individual has no prior frame of reference or experience, and which their current self-identity system (whatever this is) cannot interpret or make sense of, can result in changes to the sense of self similar to those found in folk with anorexia nervosa. In his model, the factors which set the sense of self in the brain become confused in a crisis for which there are no previous precedents to benchmark the current experience against. Because of this, the sense of self becomes fragmented, and is replaced by a different sense of self if the individual is able to make sense of what happened to them and ‘move on’ from it; if not, it remains permanently fragmented, and folk develop into a permanent state of sense-of-self ‘flux’, in which the individual is to a degree unrecognizable to themselves, in a manner which becomes habituated.

The underpinning issue of all of these factors, and the ‘unfinished’ response to them, is a sense of loss of control. The individual who has suffered life trauma, and the resultant ‘shattering’ of their sense of self, no longer feels in control of their life and situation. One thing that can be controlled is their food intake, and in the case of folk who develop anorexia nervosa, they stop eating either out of the desire to control one factor in their existence over which they can have complete control, or because they actually do want to starve themselves to death, as a compensation or ‘way out’ of their current psychological ‘fragmentation’, enacting in effect a prolonged suicide. For this reason (amongst several others), folk with anorexia nervosa do not believe that they have an eating disorder, as by limiting their food intake, they feel that, for the first time since whatever precipitated the development of the condition, they are ‘controlling’ something in their own body, and it would be too psychologically draining for them to ‘lose’ what becomes in effect a control ‘mechanism’ that they feel completely ‘in control of’. At the same time, the individual may deny or ‘forget’ that they ever had any abuse or traumatic episode happen to them, as it is too threatening to their already damaged psyche to admit such, and as a mechanism of psychological denial, they ‘remove’ it from the conscious self to the nether world of their subconscious, where it continues to ‘trip them up’ and damage their lives until they confront it. So, paradoxically, folk with anorexia nervosa can be all of completely out of control (in the sense that they have been ‘ripped up’ psychologically by prior damaging events), in control (in the sense of what they choose not to eat), and, in some cases, not aware (due to protective psychological denial processes) of their current physical state, sense of self, or underlying psychopathology. The distortions of body image, and the dread of adding on even a pound of weight, are a result of these three ‘issues’ (amongst others) at play in their deep psyche, and if this ‘lack of awareness’ is not addressed, the person with anorexia nervosa will literally starve themselves to death, as a method of maintaining control, or of negating the damage caused to them by whatever led to the development of their condition. Therefore, in many cases, folk forget that anorexia nervosa can be a symptom of some deeper psychopathology, which may or may not be ‘hidden’ from view, as well as being a psychopathology in itself.

All of these ‘entangled issues’ make anorexia nervosa extremely difficult to treat. If folk deny that they are sick, it becomes very difficult to treat them, as whatever one does, they will not ‘stick with’ the treatment offered. Admitting that they are sick, or that they have a mental illness, requires them to acknowledge the psychological damage underpinning their anorexia nervosa (with anorexia nervosa being a symptom itself), and that they have no control over their life, and indeed that the ‘treatment’ they themselves have chosen to ‘cure’ their underlying psychopathology, such as sexual abuse or another issue, namely not eating, or controlling their eating patterns to an extreme degree, has been wrong. A large number of folk with anorexia nervosa want to be left alone, and find it difficult to cope with being diagnosed as having anorexia nervosa due to the requirement this would place on them to confront their underlying psychopathology. Indeed, sadly, being diagnosed as suffering with anorexia nervosa creates a stigma of its own, and may ‘label’ and define them for life as such, which is yet another psychological challenge for folk in their already challenged state to accept (that they have both an eating disorder and also underlying psychopathology). Treatment of the disorder involves trying to restore the person to a functional weight, treating the underlying psychological disorders that led to the development of anorexia nervosa, and reducing behaviours and activities that result from the disorder becoming habituated (such as not eating in front of other folk, hiding food, as well as of course not eating at all most of the time). Psychotherapy, cognitive behavioural therapy and family-based therapy have all been used with varying degrees of success to treat folk with the disorder. However, given that eating is a basic requirement of life, each day is an ordeal in the sense that eating is required for life to continue, and at each meal there is thus a conflict and a habitual cycle of negation that is very difficult to alter or attenuate. Force-feeding has been used in extreme conditions, but there are of course ethical issues associated with this, such as the individual’s rights, though the debate is whether folk with anorexia nervosa are in a ‘right’ state of mind and / or can make decisions that are good for themselves, rather than damaging to themselves (similar issues of treatment occur in folk with alcoholism or who self-harm, amongst others). In the end, as described above, it is a disorder that is extremely challenging to manage, with a high level of chronic morbidity and mortality, very much as if the folk with the disorder ‘want to die’, as challenging as it is to describe it as such, or as a clinician to understand it as such.

Anorexia nervosa, for the reasons described above, is one of the most challenging disorders that clinicians, friends and families of people who have the disorder have to manage and live with. To all of us watching the lady in the gym last week it was clearly obvious that she had a severe case of anorexia nervosa, yet she appeared oblivious to this, and was indeed working hard in the gym to ‘maintain fitness’ (though of course in this case the exercise may be both a symptom and a vehicle of the psychopathology itself). In many ways society advocates restrictive eating and thinness, and from this perspective, the lady in this example would be congratulated for doing this to such extremes. Yet, paradoxically, someone like this who appears to be so in control is actually completely out of control, and even more strangely, in many cases is not aware of it. The ‘mirror’ / self-image assessment function in their brains, however it works in both health and disease, appears to warp and become convex, and to the anorexic, their body image is usually so ‘fragmented’ that they see big where they are in fact thin. Even more sadly, as a clinician, seeing folk like this makes one wonder what traumatic event the individual has gone through to trigger their anorexia nervosa, and whether they hopefully are in counselling to try and help them come to terms with whatever issue is causing their ongoing self-harm. Sadly though, this is so difficult to do, as the control mechanisms involved have become extreme, and make them unable to ‘move on’ from a life spent in the shadows to one in the light. Sigmund Freud suggested that folk have both a life instinct (Eros) and a death instinct (Thanatos), and that the Thanatos instinct compels us humans to engage in risky and destructive behaviours that can lead to our own personal death. Clearly, in folk suffering from anorexia nervosa, a trigger has changed their ‘behavioural setting’ from Eros to Thanatos, and once changed, the God of Death appears to be fairly resistant to change. Much research is needed in the field to help us understand the psychopathology underlying anorexia nervosa better, and how to treat, or at least manage, the condition. Mirror, mirror, on the wall, am I the thinnest person of them all, may be the mantra of this disorder, but sadly the mirror is telling a lie. Each time I put in the two spoons of sugar which I enjoy in my tea, I thank my parents for removing me from boarding school, before a fast of defiance became overwhelmed by Thanatos, the God of Death, and before I too went down to the place where everything which seems real is not, and where, once one is over the edge of the abyss, there is very little chance of ever coming back. Dying by starving oneself to death may paradoxically represent a victory to the individual that pushes themselves to their own death, but the fight between Eros and Thanatos in a loved one, when Thanatos wins, is surely one of the most tragic things a clinician, family member or loved one can ever watch from a distance and understand, or even begin to comprehend, without wanting to smash the distorted mirror, and by doing so rebuild the fragmented spirit underneath it, no matter how impossible in real life this is.


The Self As Soliloquy – The Mind’s Inner Voices Make A Winner Or Loser Out Of You During Exercise And Competitive Sport

I was watching the highlights of an Australian Open warm-up tournament a few days ago, and noted how players often spoke aloud to themselves during the game, either congratulating themselves, or telling themselves to keep on going, or being critical of themselves when making an error. I have been trying to keep cycling through the Christmas break, even though it has been pretty cold and occasionally icy in the North-East UK, and I have had to have conversations with myself (internally, rather than out loud like the tennis players) both to get on the bike when sitting in front of a warm fire with a good book seemed a better option, and when I was out on the cycle path and my toes and fingers felt frozen, to not stop, and keep on going. I have always been aware of the inner dialogue that continues incessantly in my mind throughout the day, either thinking of a science puzzle that I can’t work out, or how best to sort out a challenge at work, or being reminded by an inner voice to get presents for the family for Christmas, amongst a million other discussions I have with myself, and I am sure each of you reading this is aware of these inner voices similarly. Curiously, there has not been a lot of research attention paid to inner dialogue or the inner voices, which is surprising given how central one’s inner dialogue is to one’s life, and that it is a constant component of one’s life of which one is usually very aware. Even less work has been done on the effect of inner voices, either positive or negative, on athletic performance (or indeed any type of performance, be it sport, work, or any activity which puts stress on one), or indeed on whether one’s inner voices alter during either competitive sport or exercise participation. A few years ago, I worked with one of the absolute legends and mavens in the Sport Science academic community, Professor Carl Foster, from Wisconsin in the USA, in order to try and understand a bit more about this curious yet fascinating subject, and we eventually published a theoretical review article on it a decade ago. All of these recent observations reminded me of this article we published, and the role of inner voices and the inner dialogue they create, and how this inner dialogue affects, and is altered by, competitive sporting activities and challenges.

Inner speech has also been described as self-talk, private speech, inner dialogue, soliloquy, egocentric speech, sub-vocal speech, and self-communicative speech, amongst other terms. Inner speech is predominantly overt during early childhood, and children up to four years of age believe that the mind of a person sitting quietly is ‘not doing anything’ and is ‘completely empty of all thoughts and ideas’. With increasing age, and associated increasing self-awareness, children reduce the quantity of overt inner speech, particularly when in large groups or around teachers, until overt inner speech only occurs when the child is alone, due to them becoming aware of the social consequences of unchecked overt inner speech. This change of inner speech from overt to covert appears to be related to appropriate physical and cognitive developmental changes, as children with Down’s syndrome continue to use overt inner speech, and folk with schizophrenia also use overt inner speech, and indeed feel that their inner speech is generated ‘outside’ of their heads by an external agent, and often feel tormented by the ongoing dialogue which to them appears to be ‘outside’ of their minds. In adolescents, increasing negative or self-critical inner speech has been related to psychological disorders such as depression, anxiety and anger.

As described above, the makeup and function of the inner voices during sport have not been extensively researched previously. However, Van Raalte and colleagues examined overt inner speech in tennis players, and found that a large percentage was negative, and that there was a correlation between the quantity of negative inner speech and losing, which was not present between positive inner speech and winning, a somewhat puzzling finding. The laboratory group of father and son academics Lew and James Hardy have done some excellent work in this field. A study by their group, first authored by Kimberley Gammage, which looked at the nature of inner speech in a variety of sports, found that 95% of athletes reported they used / were aware of inner speech during exercise (why 5% of folk do not is perhaps more curious than why the 95% do), and noticed their inner speech to a greater degree when they were fatigued, when they wanted to terminate the exercise bout, and near the end of the exercise bout. Their inner speech was described most often to be phrases (such as ‘keep it up’ or ‘don’t stop’) rather than single words or sentences, and interestingly, they used the second person more frequently than the first person during exercise. The athletes perceived that they used inner speech for motivational purposes, maintaining drive and effort, maintaining focus and arousal, and to a lesser degree for cognitive functions such as ensuring correct race strategy, or using methods that would enhance their performance, such as breathing regularly. Helgo Schomer and colleagues did a great study where they got folk doing long Sunday runs to carry walkie-talkies (the study was done in the 1980’s), and contacted them randomly during the run to ask what they were thinking. While there will always be a degree of self-censorship of personal thoughts and inner discussion, they found that at lower running speeds most inner speech was described as conversational chatter or problem-solving of social or work issues, and at higher speeds it was focused on monitoring body function and the environment.

While all this work is excellent in describing what type of inner speech is ‘spoken’ at rest and during exercise, some of the best ‘deep’ theoretical work I have ever read in this field was generated by George Mead more than a hundred years ago, who suggested that inner speech is a ‘soliloquy’ which occurs between at least two inner voices, rather than a single voice in one’s mind / brain. Mead defined these as an ‘I’ voice, representing the voice describing a current activity, or urging one to act, and a ‘Me’ voice, which takes the ‘perspective of the other’ and with which the ‘I’ voice is assessed. Mead also suggested that previous social interactions with other individuals allow one to gain a viewpoint of oneself or one’s actions or thoughts, and therefore that ‘taking the perspective of the other’ is the ability to understand that another person’s viewpoint may be different to one’s own, and to use that opinion to change one’s own behaviour or viewpoint. Inner speech thus allows or creates the internalisation of this mechanism for taking another person’s perspective, as one can describe in one’s mind, to a ‘real’ person (someone whom one has interacted with in the past who was significant to one), or to an imagined person one has never previously interacted with, the reasons for behaving in a certain manner in a previous or ‘current’ situation, or how one is ‘feeling’ the effects of current activity, and the ‘Me’ voice takes the opinion of the other (which can be a conglomeration of many others, a ‘generalised other’) to assess the validity of how one says one is feeling. These concepts fit in well with the findings of Gammage and colleagues, who, as described above, reported that inner speech was mostly described as occurring in the second person (‘Me’), with first person speech also occurring (which would be the ‘I’ voice), though why the ‘Me’ voice would be ‘heard’ more than the ‘I’ voice during exercise, if the findings of Gammage and colleagues hold for all athletes during all sporting events, is not clear.

A further fascinating hypothesis about inner speech was made by Morin and others, who suggested that inner speech is crucial for self-awareness (and one’s sense of self), by creating a time distance or ‘wedge’ between the ‘self’ and the mental or physical activities which the ‘self’ is currently experiencing. This time-wedge would enable retrospective analysis of the activity in which the individual is currently immersed, facilitating the capacity for self-observation, and thus both awareness of the ‘meaning’ of the activity and its effect on the individual, and self-awareness per se. In other words, if an individual was completely immersed in their current experience, they could not understand the meaning of the experience, because a time or perceptual gap is needed to get enough ‘distance’ from the activity to assess and understand the meaning of an experience, and whether it is a threat to the individual if it continues. Inner speech has therefore been suggested to be the action that generates the time-wedge by creating a redundancy of self-information. This redundancy is the result of the difference between the actual physiological changes associated with the experience creating one unit of information about the event, and the descriptive ‘I’ inner speech creating a second (retrospective) unit of information about the same activity or event, separated from the first unit of information by a time-wedge. This time-wedge and redundancy of the same information allows retrospective comparison and analysis of the two different units of information – the one in real time, and the other a retrospective copy – and a judgement is made by the ‘Me’ voice of what is happening and how best to respond to it. This theory would suggest that all inner speech is retrospective, even the ‘I’ voice, and allows the retrospective analysis of an event in an ordered and structured way. Lonnie Athens, one of my all-time favourite creative thinkers, suggested 10 ‘rules’ that describe well all these complex inner speech processes described above: 1) People talk to themselves as if they are talking to someone else, except they talk in shorthand; 2) When people talk to each other, they tell themselves at the same time what they are saying; 3) While people are talking to us, we have to tell ourselves what they are saying; 4) We always talk with an interlocutor when we soliloquise – the ‘phantom others’ (which is the ‘Me’ voice as described above); 5) The phantom community is the one and the many. However, we can normally only talk to one phantom at a time during our soliloquies; 6) Soliloquising transforms our raw, bodily sensations into perceived emotions. If it were not for our ability to soliloquise, we would not experience perceived emotions (like fatigue during exercise) in our existence. Instead, we would only experience a steady stream of vague body sensations; 7) Our phantom others (the ‘Me’ voice) are the hidden sources of our perceived emotions. If we generate emotions by soliloquising about our body sensations, and if our phantom others play a critical role in our soliloquies, then our phantom others must largely shape the perceived emotions we generate; 8) Our phantom community (the ‘Me’ voice) occupies the centre stage of our life whether we are alone or with others. Talking to the phantom others about an experience we are undergoing is absolutely essential to understand its emergent meaning. Only in conversation with our phantom community do we determine its ultimate meaning; 9) Significant social experiences shape our phantom community (which is incorporated into our ‘Me’ voice); and 10) Given that some soliloquies are necessarily ‘multi-party’ dialogues, conflicts of opinion are always possible during inner speech soliloquies.

Relating all this fascinating theoretical work to an exercise bout, therefore: as exercise continues, and physiological sensations change, these changes would be picked up by physiological sensors in the body and transferred to the brain, where they would be raised into our conscious mind by the ‘I’ voice, which already has a time-wedge with which to make sense of the raw feelings. The athlete’s ‘I’ voice would say ‘I am tired’, and the ‘Me’ voice would respond to this assessment of the ‘I’ voice, based on its ‘perspective of the other’ viewpoint. The ‘Me’ voice may be either positive in response (motivational – ‘keep going, the rewards will be worth it’) or negative (cognitive – ‘if you keep on going, you will damage yourself’). As the race or physical activity continues, as described above in the work of Kimberley Gammage and colleagues, athletes become more aware of their inner speech, probably because the symptoms of fatigue and distress described by the ‘I’ voice become more profound and more persistent, and the ‘Me’ voice has to keep on responding to the more urgent and louder ‘I’ voice, given that the ‘I’ voice is describing changes that have greater potential to be damaging to the athlete. It is likely that the relative input of each of the ‘I’ and ‘Me’ voices (and of course the subconscious processes that generate them) is either related to, or creates, the temperament and personality of the individual, and their perception of success or failure in sport. For example, the ‘Me’ voice may suggest that it is not a problem to slow down when the ‘I’ voice indicates that the current speed the athlete is producing is too fast and may damage the athlete, if the familial, genetic or psychological history that created the ‘phantom others’ / ‘Me’ voice of the athlete perceived winning sporting events not to be of particular importance. In contrast, the ‘Me’ voice may disagree with, and disapprove of, the desire of the ‘I’ voice to slow down, if the familial, genetic and psychological history of phantom others that make up the ‘Me’ voice believed that winning was very important, and slowing down a sign of personal failure and weakness. These relative viewpoints of the ‘Me’ voice will therefore likely shape the personality and self-esteem of the athlete (and indeed of all individuals), and whether they regard themselves as a success or a failure, whether they try to keep on going and win, or slow down due to having reached their physical body limits, which may not be congruent with the athlete’s psychological desires and demands. Furthermore, the ‘will’ of the athlete is probably to a large degree related to the forcefulness of the ‘Me’ voice in resisting the desire of the ‘I’ voice, or to whether the ‘I’ voice remains relatively silent even in times of duress or hardship, and is also likely shaped by the family history or genetic makeup of the athlete that created the generalised phantom other / ‘Me’ voice. The relative input of both the ‘I’ and ‘Me’ components of an individual’s inner speech, and the ‘viewpoint’ of the ‘Me’ voice, may therefore be the link between the temperament and performance of an athlete, or may actually be part of, or influence, both.

In summary therefore, those tennis players with their overt inner speech (usually accompanied by fist pumping or smashed rackets depending on its positive or negative nature) open a window for us to understand one of the most crucial and amazingly complex constituents of the perceptual loop through which sensations generated by the body under stress are changed into emotions that we ‘feel’ and respond to, which both explains to us how our body is feeling, behaving and ‘doing’ via the vocalisation of an ‘I’ voice, and at the same time creates our sense of self as a result of how the dialogue responds to this explanation, vocalised as inner speech, through our ‘Me’ voice, which is both reflective of and created by the phantom others which shape us and regulate us. However, the inner voices can be our worst enemies, if they are too strong, or too harsh, or too demanding on us, and if so, they are probably the product of a damaged childhood, with over-demanding parents, coaches, or teasing peers, which makes us feel like what we are doing is never ‘enough’, even though, of course, what ‘enough’ is will always be a relative thing, and different for every person on earth. Some Sport Psychologists have tried to improve the sporting performance of athletes they work with by altering the content and nature of their inner speech, though Lonnie Athens made the relevant point that if one’s inner speech was too changeable, one’s sense of self would be fluid rather than permanent, whereas in most folk it seems to be fairly stable, and that only extreme psychological trauma, such as assault, divorce, near death or the death of a loved one, where a state of existence is created for which the ‘Me’ voice has no frame of reference, will allow the ‘Me’ voice to be changed, and of course, it may then change from a positive or neutral to a more negative ‘commentary state’. Having said that, my own inner voices have changed subtly as I have aged, and are (fortunately) more tolerant and forgiving compared to what they were like in my youth. Often when doing sport, in contrast to when I was young, when my ‘Me’ voice was insistent I keep going or be a failure, my inner voices, now that I am in my fifties, often encourage me to slow down and look after myself, now that my body is old, less efficient, and damaged by the excesses of sport and the wilful behaviour of my youth. So clearly there is some capacity to change and yet maintain one’s sense of self. That said, my sense of self is also subtly different from what it was in my youth, so this may be related to the changes in the make-up of my inner voices (and their underlying subconscious control mechanisms, perhaps due to the desires of my youth mostly having been fulfilled in my life to date), or may not be related to them at all. More research work is needed for us to better understand all these concepts and mental activities that are continuously active in our mind and brain.

At this point in time our inner speech is the only real-time window we have into our subconscious, and is both ‘ourselves’ (as hard a concept as this is to understand and accept) and our continuous companion through each minute of each day of our life. Often one wishes to turn off one’s inner voices, and interestingly some drugs do seem to reduce the amount of ‘heard’ inner voices, but this does open up the philosophical challenge of whether, if one had no inner speech, one would still be aware that one is conscious, or aware of one’s current state of being. My inner voices have been ‘shouting at me’ during the last two paragraphs of writing this, telling me I am tired and hungry, and that it’s time to stop writing for the day and go in from my garden shed home office to spend time with the family, and get some food and drink to replenish my energy levels. While I resisted their siren tune until completing this piece of writing, now it’s done, I will bow to my inner voices’ incessant request and sign off and head in for some welcome rest and relaxation. Of course I know that after a short period of relaxing, my inner speech will be chattering at me again, telling me to go back to my garden shed office and check the grammar and spelling of this article, and start preparing for the next. There is no peace for the wicked, particularly from our ever present, and ever demanding, inner voices!


Plato’s Horse And The Concept of Universals – Can You Have Life As We Know It Without Rules That Govern It

My young daughter’s precious Labrador puppy, Violet, has grown up, and recently turned two. As a family we traditionally have Schnauzers as pets, and it’s been strange but nice to have a different breed around the home, and Labradors have great personalities. What struck me forcibly when Violet came into our life was that while she has a very different shape and form to our two Schnauzers, she is as instantly recognisable as a dog as all our dogs are. It struck me again when the family watched a program on the most loved one hundred dog breeds in the UK (for those interested, Labradors came in first) that while each of the different breeds had very different characteristics – think of a Chihuahua as compared to a Great Dane, or a Pug compared to a German Shepherd – they were all instantly recognisable, by our family members watching it, and I am sure by just about all the folk who watched the program, as being dogs rather than cats, or llamas, or sheep. In the last few years of my academic career (and perhaps at a subconscious level for my entire research career), after having been an Integrative Control Systems scientist for most of that time, trying to understand how our different body systems and functions are controlled and maintained in an optimal and safe manner, I have come to understand, and have been exploring the concept, along with great collaborators Dr Jeroen Swart and Professor Ross Tucker, that perhaps general rules are operating across all body system control mechanisms, whatever their shape or form, and we recently published a theoretical research paper which described our thoughts. In my new role as Deputy Dean of Research at the University of Essex, I am fortunate to be working with the Department of Mathematics, helping them enhance their research from an organisational perspective, and it has been fascinating working with these supremely bright folk, seeing the work they do, and having it reiterated to me that even simple mathematical principles are abstract, and not grounded in anything in the physical world (for example, knowing that 1 plus 2 equals 3 does not need any physical entity or activity for it to always be true). All of these recent activities have got me thinking about the long-pondered issue of universals, their relationship to the rules and regulation governing and maintaining life, and which came first: the rules, or the physical activity that requires rules and regulation to be maintained in order for that physical activity to continue and be both organised and productive.

Universals are defined as a class of mind-independent (indeed human-independent) entities which are usually contrasted with individuals (also known as ‘particulars’ relative to ‘universals’), and which are believed to ‘ground’ and explain the relation of qualitative identity and resemblance among all individuals. More simply, they are defined as the nature or essence of a group of like objects described by a general term. For example, in the case of dogs described above, when we see a dog, whether it is a Labrador, Schnauzer, German Shepherd, or one of a myriad of other breeds, we ‘know’ it to be a dog, and the word dog is used to cover the ‘essence’ of all these and other breeds. Similarly, we know what a cat is, or a house, or shoes, despite each of these ‘types of things’ often looking very different to each other – there are clearly enough characteristics in each to define them by a universal defining name. Understanding universals gets even more complex, though, than merely thinking of them as a name or group of properties for a species or ‘type of thing’. Long ago, back in the time of antiquity, one of the first recorded philosophical debates was about universals, and whether they existed independently as abstract entities, or only as terms to define an object, species or ‘type of thing’. Plato suggested that universals exist independently of that which they define, and are the true ‘things which we know’, even if they are intangible and immeasurable, with the living examples of them being copies and / or imitations of the universal, each varying slightly from the original universal, but bound in their form by the properties defined by the universal. In other words, he suggested that universals are the ‘maps’ of structures or forms which exist as we see and know them, for example a dog, or a horse, or a tree, and that they exist in an intangible state somehow in the ‘ether’ around us, ‘directing’ the creation of the physical entities in some way which we have not determined and are not currently capable of understanding.

In philosophical terms, this theory of universals as independent entities is known as Platonic Realism. After Plato came Aristotle, who felt that universals are ‘real entities’, like Plato perceived them to be, but in his theory (known as Aristotelian Realism), he suggested that universals did not exist independent of the particulars, or species, or ‘things’ they defined, and were linked to their physical existence, and would not exist without the physical entities they ‘represent’. In contrast to realism, Nominalism is a theory that developed after the work of these two geniuses (Nominalists are also sometimes described as Empiricists) which denied the existence of universals completely, and suggested that physical ‘things’, or particulars, shared only a name, and not qualities that defined them, and that universals were not necessary for the existence of species or ‘things’. Idealism (proposed by folk like Immanuel Kant) got around the problem of universals by suggesting that universals were not real, but rather were products of the minds of ‘rational individuals’, when thinking of that which they were looking at.

This dilemma regarding both the existence and nature of universals has to date not been solved, or adequately explained, given that it is impossible with current scientific techniques, or perhaps with the psychological ‘power’ of our minds, to prove or disprove the presence of universals, and folk ‘believe’ in one of these different accounts of universals depending on their world and life points of view. Religious folk would suggest that the world is created in God’s ‘image’, and to them God’s ‘images’ would be the universals from which all ‘God’s creatures’ are created. In contrast, with respect to evolution, which is diametrically opposed to the concept of religion, it is difficult to believe in both evolution and the presence of universals, as evolution is based on the concept of need- and error-driven individual genetic changes over millennia in response to that need, which led to different species developing, and to the variety in nature and life we see all around us. In the evolutionary model, therefore, the concept of universals (and the creation of the world by a God as posited by many religions) would appear to be counter-intuitive.

While a lot of debate has focused on ‘things one can see’ as the physical ‘particulars’ which are either a product of universals or not, there are more abstract activities which support the existence of universals independent of the mind or the ‘things that they are involved with’. For example, the work done by Ross, Jeroen and myself developed from the realisation that a core principle of all physiological activity is homeostasis, which is defined as the maintenance, by regulatory control processes or structures, of physiological or physical activity within certain tolerable limits, in order to protect the individual or thing being regulated from being damaged, or from damaging itself. Underpinning all homeostatic control mechanisms is the negative feedback loop, whereby when a substance or activity increases or decreases too much, other activity is initiated as part of a circular control structure which has the capacity to act on the substance requiring control, and this normalises or attenuates the changes, and keeps the activity or behaviour within the required ‘safe levels’ which are set by homeostatic control mechanisms. The fascinating thing is that the same principle of negative feedback control occurs in all and any physical living system, and without it life could not occur. Whether gene activity, liver function, or whole body activity, all of which have very different physical or metabolic regulatory structures and processes, each is controlled by negative feedback loop principles. Therefore, it is difficult not to perceive that the negative feedback loop is a type of universal, but one that works by similar ‘action’ across systems rather than ‘creating’ a physical thing in its likeness. Mathematics is another area in which folk believe universals are ‘at work’, given that even the simplest sums, such as one plus two equals three, need no physical structure or ‘particular’ to always be true. While we all use mathematical principles on a continuous basis, it is difficult to believe that such mathematical principles do not ‘exist’ in the absence of humans, or of any physical shape or form.
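For readers who like to see such ideas made concrete, the ‘same action across systems’ character of the negative feedback loop can be sketched in a few lines of Python. This is a minimal illustration only: the function names, setpoints, gain and disturbance values are invented for the example and do not represent real physiological data, but the point is that the identical loop ‘shape’ can be applied to quite different regulated quantities.

```python
# Minimal sketch of a negative feedback loop applied to two different variables.
# All numbers are illustrative, not physiological measurements.

def feedback_step(value, setpoint, gain):
    """Return a corrective adjustment proportional to the error from the setpoint."""
    error = value - setpoint
    return -gain * error  # push the variable back towards its setpoint

def simulate(value, setpoint, gain, disturbance, steps=10):
    """Repeatedly perturb the variable and let the feedback loop correct it."""
    for _ in range(steps):
        value += disturbance                          # external perturbation each step
        value += feedback_step(value, setpoint, gain) # corrective response
    return value

# The same loop structure regulating two unrelated quantities:
print(simulate(value=7.0, setpoint=5.0, gain=0.5, disturbance=0.3))    # e.g. a blood metabolite level
print(simulate(value=39.0, setpoint=37.0, gain=0.5, disturbance=0.2))  # e.g. a core temperature in Celsius
```

In both cases the perturbed value is held close to its setpoint, even though the ‘thing’ being regulated is entirely different, which is the sense in which the loop behaves like a universal of ‘action’ rather than of form.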

So where does all this leave us in understanding universals and their relevance to life as we know it? Perhaps one’s viewpoint regarding the existence of universals depends on one’s own particular epistemological perspective (one’s understanding of the nature of knowledge and how it is related to one’s justified beliefs) and world view. Though I can in no way prove it, I believe in universals and would define myself as a Platonic Realist. This viewpoint comes from a career in science, and from working with exceptional scientists like Jeroen Swart and Ross Tucker to understand the exquisite and universal nature of the control mechanisms which keep our bodies working the way they do. However, I do not believe in any God or religion in any shape or form, and have greater faith in the evolutionary model, which is counter-intuitive relative to my belief in the presence of independent universals. Therefore, the potential alignments between religion and universals, and between evolution and universals, described above clearly do not hold neatly for my specific beliefs, and there is probably similar confusion in core beliefs for many (particularly research-involved) folk. However, it is exciting to think (at least for me) that there may be universals out there that have no link to current activities or functions or species, and which may become evident to humans at some point in the future, by way of the development of new species or new ‘things’. Having said that, I guess it could be argued that if universals do not exist, progress and the evolution of ideas will lead us to new developments, species or ways of life in an evolution-driven, error-associated way. One cannot ‘see’ or ‘feel’ a negative feedback loop, or a maths algorithm, or a universal for even something as simple as a dog, which is perhaps why, to a lot of folk with a different epistemological viewpoint to mine, it is challenging to accept the presence, or indeed the necessity, of and for universals. But when I look at our Labrador, and ‘know’ that it is a dog as much as a Schnauzer, Chihuahua, or German Shepherd is, I feel sure there is the Universal dog out there somewhere in the ether that will perhaps keep my toes warm when I leave this world for the great wide void which may exist beyond it. And surely, given what a stunning breed they are, the Universal dog, if it exists out there, can only be a Labrador. Or perhaps a Schnauzer!


Homeostasis And The Constancy Principle – We Are All Creatures Of Comfort Even When We Go Out Of Our Comfort Zone

It is autumn in our part of the world, and the first chills are in the air in the late evening and early morning, and the family discussed last night the need to get our warm clothes out of storage in readiness for the approaching winter, in order to be well prepared for its arrival. After sharing in the fun of Easter Sunday yesterday and eating some chocolate eggs with the children, a persistent voice in my head this morning instructed me to eat less than normal today to ‘make up’ for this out-of-the-ordinary chocolate eating yesterday. It is a beautiful sunny day outside as I write this, and I feel a strong ‘urge’ to stop writing and go out on a long cycle ride because of it, and have to ‘will’ these thoughts away and continue writing, which is my routine activity at this time of the morning. After a recent health scare I have been checking my own physical parameters with more care than normal, and found it interesting, when checking what ‘normal’ values for healthy folk are, that most healthy folk have fairly similar values for things like blood glucose, blood pressure, cholesterol concentrations and other such parameters, that there are fairly tight ranges of values for each of these which are considered normal and a sign of ‘health’, and that if one’s values are outside of these, it is a sign of something wrong in the working of one’s body that needs to be treated and brought back into the normal range, either by lifestyle changes, medication, or surgical procedures. All of this got me thinking about the regulatory processes that ensure the body maintains its working ‘parts’ in a similar range in all folk, and about the concept of homeostasis, which as a regulatory principle explains and underpins the maintenance of this ‘safe zone’ for our body’s multiple activities, including the sensing of any external or internal changes which could be associated with the potential for one of the variables to go out of the ‘safe zone’, and the initiation of behavioural or physiological changes which attempt to bring the variable at risk back into the ‘safe zone’, either pre-emptively or reactively.

Homeostasis is defined scientifically as the tendency towards a relatively stable equilibrium between inter-dependent elements. The word was generated from the Greek concepts of ‘homoios’ (similar) and ‘stasis’ (standing still), creating the concept of ‘staying the same’. Put simply, homeostasis is the property of a system whereby it attempts to maintain itself in a stable, constant condition, and resists any changes or actions on the system which may change or destabilize the stable state. Its origins as a concept lie with the ancient Greeks, with Empedocles in around 400 BC suggesting that all matter consisted of elements which were in ‘dynamic opposition’ or ‘alliance’ with each other, and that balance or ‘harmony’ of all these elements was necessary for the survival of the individual or organism. Around the same time, Hippocrates suggested that health was a result of the ‘harmonious’ balance of the body’s elements, and illness due to ‘disharmony’ of the elements of which it was made up. Modern development of this concept was initiated by Claude Bernard in the 1870’s, who suggested that the stability of the body’s internal environment was ‘necessary for a free and independent life’ and that ‘external variations are at every instant compensated for and brought into balance’, and Walter Cannon in the 1920’s first formally called this concept of ‘staying the same’ homeostasis. Claude Bernard actually initially used the word ‘constancy’ rather than homeostasis to describe the concept, and interestingly, a lot of Sigmund Freud’s basic work on human psychology was based on the need for ‘constancy’ (though he did not cross-reference this more physiological / physical work and its concepts), suggesting that everyone’s basic need was for psychological constancy or ‘peace’, that when one had an ‘itch to scratch’ one would do anything possible to remove the ‘itch’ (whether it be a new partner, a better house, an improved social status, or greater social dominance, amongst other potentially unrequited desires), and further that one’s ‘muscles are the conduit through which the ego imposes its will upon the world’. He and other psychologists of his era suggested that if an ‘itch’, urge or desire was not assuaged (and what causes these urges, whether a feeling of inadequacy, or previous trauma, or a desire for ‘wholeness’, is still controversial and not clearly elucidated even today), the individual would remain out of their required ‘zone of constancy’, and would feel negative emotions such as anxiety, irritation or anger until the urge or desire was relieved. If it was not relieved for a prolonged period, this unrequited ‘itch’ could lead to the development of a complex, projection or psychological breakdown (such as depression, mania, anxiety, personality disorder or frank psychosis). Therefore, as much as there are physical homeostasis-related requirements, there are potentially also similar psychological homeostasis-related requirements which are being reacted to by the brain and body on a continuous basis.

Any system operating using homeostatic principles (and all our body systems do so) has setpoint levels for whatever substance or process is being regulated in the system, and boundary conditions for that substance or process which are rigidly maintained and cannot be exceeded without a response occurring which attempts to bring the activity or changes to the substance or process back to the predetermined setpoint levels, or within the boundary conditions for them. The reasons for having these set boundary conditions are protective: if they were exceeded, the expectation would be that the system would be damaged, either because the substance or process being regulated (for example, oxygen, glucose, sodium, temperature, cholesterol, or blood pressure, amongst a whole host of others) was used up too quickly or worked too hard, or because it was allowed to build up to toxic / extremely high levels, or was not used enough to produce life-supporting substrates or useable fuels, which would endanger the life and potential for continued activity of the system being monitored. For example, oxygen shortage results in death fairly quickly, as would glucose shortage, while glucose excess (as occurs in diabetes) can also result in cellular and organ damage, and ultimately death if it is not controlled properly. In order for any system to maintain the substance or process within homeostasis-related acceptable limits, three regulatory factors (which are all components of what is known as a negative feedback loop) are required to be components of the system. The first is the presence of a sensory apparatus that can detect either changes in whatever substance or process is being monitored, or changes in the internal or external environment or other systems which interact with or impact on the substance or process being monitored. The second is a control structure or process which is sent the information from the sensory apparatus, and is able to make a decision regarding whether to respond to the information or to ignore it as not relevant. The third is an ‘effector’ mechanism or process which receives commands from the control structure, after it has made a decision to initiate a response to the sensed perturbation potentially affecting the system it controls, and makes the changes to the system decided upon by the control structure, in order to maintain or return the perturbed system to its setpoint value range.
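The three components described above can also be laid out in a short Python sketch, which may help make the sensor / controller / effector division concrete. This is a minimal, purely illustrative example: the class names, setpoint, tolerance and correction ‘strength’ are invented for the sketch and do not correspond to any specific physiological system.

```python
# A toy negative feedback loop split into its three components:
# a sensor, a control structure, and an effector. Values are illustrative only.

class Sensor:
    def read(self, system):
        return system["value"]                   # detect the current level of the regulated variable

class Controller:
    def __init__(self, setpoint, tolerance):
        self.setpoint = setpoint
        self.tolerance = tolerance               # boundary conditions around the setpoint
    def decide(self, reading):
        error = reading - self.setpoint
        if abs(error) <= self.tolerance:         # within the 'safe zone': ignore the information
            return 0.0
        return -error                            # otherwise command a correction back towards the setpoint

class Effector:
    def act(self, system, command, strength=0.5):
        system["value"] += strength * command    # apply a partial correction each cycle

# One pass around the loop:
system = {"value": 8.2}
sensor, controller, effector = Sensor(), Controller(setpoint=5.0, tolerance=1.0), Effector()
command = controller.decide(sensor.read(system))
effector.act(system, command)
print(system["value"])  # the regulated value is nudged back towards its setpoint
```

Run repeatedly, the loop keeps the value inside its tolerated range; remove any one of the three components and the regulation fails, which is the point the paragraph above makes in physiological terms.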

The example of temperature regulation demonstrates both the complexity and beauty of homeostasis in regulating activity and protecting us on a continuous basis from harm. Physiological systems in most species of animals are particularly sensitive to changes in temperature and operate best in a relatively narrow range of temperatures, although in some species a wider range of temperatures is tolerated. There are two broad mechanisms used by different organisms to control their internal temperature, namely ectothermic and endothermic regulation. Ectothermic temperature regulators (also known as ‘cold-blooded’ species), such as the frog, snake, and lizard, do not use many internal body processes to maintain temperature in the range which is acceptable for their survival, but rather use external, environmental heat sources to regulate their body temperature. If the temperature is colder, they will use the sun to heat themselves up, and if warm, they will look for shadier conditions. Ectotherms therefore have energy-efficient mechanisms of maintaining temperature homeostasis, but are more susceptible to vagaries in environmental conditions compared to endotherms. In contrast, endotherms (also known as ‘warm-blooded’ species), into which classification humans fall, use internal body activity and functions to either generate heat in cold environments or reduce heat in warm conditions. In endotherms, if the external environment is too cold, and if the cold environment impacts on body temperature, temperature receptors measuring either surface skin temperature or core body temperature will send signals to the brain, which subsequently initiates a shiver response in the muscles, which increases metabolic rate and provides greater body warmth as a by-product of fuel / energy breakdown and use. If the environmental temperature is too warm, or if skin or core temperature is too high, receptors will send signals to brain areas which initiate a chain of events, involving different nerve and blood-related control processes, which result in increased blood flow to the skin by vasodilatation, thereby increasing blood cooling capacity and the sweat rate from the skin, thus producing cooling by water evaporation. All these endotherm-associated heating and cooling processes utilize a large amount of energy, so from an energy perspective are not as efficient as those of ectotherms, but they do allow a greater independence from environmental fluctuations in temperature. It must be noted that endotherms also use similar behavioural techniques to ectotherms, such as moving into shady or cool environments if excessively hot, but as described above, can tolerate a greater range of environmental temperatures and conditions. Furthermore, humans are capable of ‘high level’ behavioural changes such as putting on or taking off clothes, in either a reactive or anticipatory way. It is evident therefore that for each variable being homeostatically monitored and managed (on a continuous basis) there is a complex array of responsive (and ‘higher-level’ pre-emptive) options available with which to counteract the potential or actual ‘movement’ of the variable beyond its ‘allowed’ metabolic setpoints and ranges.
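To tie this back to the negative feedback sketches above, the endothermic responses just described can be caricatured as a toy model in Python. This is an illustration only: the setpoint, tolerance, heat-exchange coefficient and response strengths are invented numbers chosen to make the behaviour visible, not physiological measurements.

```python
# A toy model of endothermic temperature control: 'receptors' compare core
# temperature against a setpoint, and the 'brain' commands shivering when too
# cold, or vasodilation and sweating when too hot. All numbers are illustrative.

SETPOINT = 37.0    # target core temperature, degrees Celsius
TOLERANCE = 0.5    # 'safe zone' either side of the setpoint

def regulate(core_temp, environment_temp, steps=50):
    for _ in range(steps):
        # passive heat exchange with the environment (the disturbance)
        core_temp += 0.01 * (environment_temp - core_temp)
        # homeostatic response, only triggered outside the safe zone
        error = core_temp - SETPOINT
        if error < -TOLERANCE:
            core_temp += 0.8 * abs(error)   # shivering raises metabolic heat production
        elif error > TOLERANCE:
            core_temp -= 0.8 * error        # vasodilation and sweating shed heat
    return core_temp

print(round(regulate(37.0, 5.0), 2))    # a cold day: core temperature held near 37
print(round(regulate(37.0, 45.0), 2))   # a hot day: core temperature held near 37
```

Whether the ‘day’ is simulated as cold or hot, the core value settles close to the setpoint, whereas an ectotherm-like version of the model, with the two response lines removed, would simply drift towards the environmental temperature.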

There are a number of questions still to be answered regarding how homeostasis ‘works’ and how ‘decisions’ related to homeostasis occur. It is not clear how the regulatory mechanisms know which variable to ‘choose’ to defend as a priority. Brain oxygen would surely be the most important variable to ‘defend’, as would perhaps blood glucose levels, but how decisions are made and responses initiated for these variables preferentially, which may impact negatively on other systems with their own homeostatic requirements, is not clear. Furthermore, there is the capacity for ‘conflict’ between physical and psychological homeostatic mechanisms when homeostasis-related decisions are required to be made. For example, one’s ego may require one to run a marathon to fulfil a need to ‘show’ one’s peers that one is ‘tough’ by completing such a challenging goal, but doing so (running the marathon) creates major physical stress for and on the physical body. Indeed, some folk push themselves so hard during marathons that they collapse, even if they ‘feel’ warning signs of impending collapse, or of an impending heart attack, and choose to keep running despite these symptoms. To these folk, the psychological need to complete the event must be greater than the physical need to protect themselves from harm, and their regulatory decision-making processes clearly weight psychological homeostasis as being of greater importance than physiological homeostasis when deciding to continue exercising in the presence of such warning symptoms. However, running a marathon, while increasing the risk of catastrophic physical events during the run itself, has positive physical benefits if done on a repetitive basis, such as weight loss and increased metabolic efficiency of the heart, lungs, muscles and other organ structures, along with the enhanced psychological well-being derived from achieving the set athletic performance-related goals. Therefore, ‘decision-making’ on an issue such as running a marathon is complex from a homeostasis perspective, with both short and long term potential benefits and harmful consequences. How these contradictory requirements and factors are ‘decided upon’ by the brain when attempting to maintain both psychological and physical homeostasis is still not clear.

A further challenge to homeostatic regulation is evident in examples such as fever, where a high temperature may paradoxically be beneficial, and the period after a heart attack, where an altered heart rate and blood pressure setpoint may be part of compensatory mechanisms to ensure the optimal function of a failing heart. While these altered values are potentially ‘outside’ of the ‘healthy’ setpoint range, they may have utilitarian value and be metabolically appropriate in relation to either a fever or a failing heart. How the regulatory homeostatic control mechanisms ‘know’ that these altered metabolic setpoints are beneficial rather than harmful, and ‘accept’ them as temporary or permanent new setpoints, or whether these altered values are associated with routine homeostatic corrective responses which are part of the body’s ongoing attempt to induce healing in the presence of fever or heart failure (amongst other homeostatically paradoxical examples), is still not clear. Whether homeostasis as a principle extends beyond merely controlling our body’s activity and behaviour, to more general societal or environmental control, is also still controversial. For example, James Lovelock, with his Gaia hypothesis, has suggested that the world in its entirety is regulated by homeostatic principles, with global temperature increases resulting in compensatory changes on the earth and in the atmosphere that lead to eventual cooling of the earth, and this warming and cooling continuing in a cyclical manner – and most folk who believe in global warming as a contemporary unique catastrophic event don’t like this theory, even if it is difficult to support or refute without measuring temperature changes accurately over millennia.

Homeostatic control mechanisms can fail, and indeed our deaths are sometimes suggested to be the result of a failure of homeostasis. For example, cancer cells overwhelm cellular homeostatic protective mechanisms, or develop rapidly due to uncontrolled proliferation of abnormal cells which are not inhibited by the regular cellular homeostatic negative feedback control mechanisms, and this leads to physical damage to the body and ultimately our death, for these or other reasons that we are still not aware of. In contrast, Sigmund Freud, in his always contrary view of life, suggested as part of his Thanatos theory that death is the ultimate form of ‘rest’ and is our ‘baseline’ constancy-related resting state which we ‘go back to’ when dying (with suicide being a direct ‘mechanism’ of reaching this state in those whose psyches are operating too far away from their psychological setpoints, whatever these are), although again this is a difficult theory to either prove or disprove. Finally, what is challenging to a lot of folk about homeostasis from a control / regulatory perspective is that it is a conceptual ‘entity’ rather than a physical process that one can ‘show’ to be ‘real’, much like Plato’s Universals (to Plato the physical cow itself was less relevant than the ‘concept’ of a cow, and he suggested that one can only have ‘mere opinions’ of the former, while one has absolute knowledge of the latter, given that the physical cow changes as it grows, ages, and dies, while the ‘concept’ of a cow is immutable and eternal). It is always difficult scientifically to provide categorical evidence which either refutes or supports concepts such as universals and non-physical general control theories, even if they are concepts which appear to underpin all life as we know it, and without which we could not exist in our current physical form and living environment.

As I look out the window at the falling autumn leaves and wonder whether we will have a very cold winter this year and whether we have prepared adequately for it clothes-wise (pre-emptive long-term homeostatic planning at its best, even if perhaps a bit ‘over-the-top’), while taking off my jersey as I write this given that the temperature has increased as the day has changed from morning to afternoon (surely a reactive homeostatic response), and as I ponder my health-related parameters and work out how I am going to get those that need improvement as close to ‘normal’ as possible (surely part of behavioural homeostatic / health-optimization planning), I look forward to that bike ride, now that I have managed to delay the gratification of doing so until I have completed writing this (and feel a sense of well-being both from doing so and from realizing I am now ‘free’ to go on the ride, which will remove the psychological ‘itch’ that makes me want to do it and therefore return me to a state of psychological ‘constancy’ / homeostasis). Contemplating all of this, it is astonishing to think that all of what I, and pretty much all folk, do is underpinned by a desire to be, and to maintain life, in a ‘comfort zone’ which feels right for me, and which is best for my bodily functions and psychological state. Given that all folk in the world have similar physical parameters when we measure them clinically, it is likely that our ‘comfort zones’, both physical and psychological, are not that different in the end. Perhaps the relative weighting which each of us assigns to our psychological or physical ‘needs’ creates minor differences between us (and occasionally major differences, such as in folk with psychopathology or with significant lifestyle-related physical disorders), though at the ‘heart of it all’, both psychologically and physically, is surely the over-arching principle of homeostasis. While on the bike this afternoon, I’ll ponder the big questions related to homeostasis which still need to be answered, such as how homeostasis-related decisions are made, how the same principle can regulate not just our body, but also our behaviour, and perhaps even societal and planetary function, and how ‘universals’ originated and which came first, the physical entity or the universal. Sadly, I think it will need a very long ride to solve these unanswered questions and remove the ‘itch that needs scratching’ which arises from thinking of these concepts as a scientist who wants to solve them – and I don’t like to spend too long out of my comfort zone, which is multi-factorial and not purely bike-focused, but rather part bike, part desk, part comfy chair, the latter of which will surely become more attractive after a few hours of cycling, and will ‘call me home’ to my next ‘comfort zone’, probably long before I can solve any of these complex issues while out on the ride watching the autumn leaves fall under a beautiful warm blue sky, with my winter cycling jacket unused but packed in my bike’s carrier bag in case of a change in the weather.


Contemporary Medical Training And Societal Medical Requirements – How Does One Balance The Manifest Need for General Practitioners With Modern Super-Specialist Trends

For all of my career, since starting as a medical student at the University of Cape Town as an 18 year old fresh out of school many years ago, I have been involved in the medical and health provision and training world, and have had a wonderful career first as a clinician, then as a research scientist, and then in the last number of years managing and leading health science and medical school research and training. Because of this background and career, I have always pondered long and hard about what makes a good clinician, what is the best training to make a good clinician, how we define what a ‘good’ clinician is, and how we best align the skills of the clinicians we train with the needs and requirements of the social and health environments of the country in which they train. A few weeks ago I had a health scare which was treated rapidly and successfully by a super-specialist cardiologist, and I was home the next day after the intervention, and ‘hale and hearty’ a few days after the procedure. If I had lived 50 years ago, and it had happened then, in the absence of modern high-tech equipment and super-specialist skills, I would probably have died a slow and uncomfortable death, treated with drugs of doubtful efficacy that would not have benefited me much, let alone treated the condition I was suffering from. Conversely, despite my great respect for the super-specialist skills which helped me so successfully a few weeks ago, it has become increasingly obvious that this great success in clinical specialist training has come at the cost of reduced emphasis on general practitioner-focused training, and a reduction in the number of medical students choosing general practitioner work as a career after they qualify, which has caused problems for clinical service delivery in a number of countries, particularly in rural areas, and paradoxically put greater strain on specialist services despite their pre-eminence in contemporary clinical practice in most countries around the world. My own experience grappling with the problem of how to increase the number of general practitioners produced by our training programs, as a previous Head of a School of Medicine, together with this recent health scare which was treated so successfully by super-specialist intervention, got me thinking about how best we can manage the contradictory requirements of the need for both general practitioners and specialists in contemporary society, and whether this conundrum is best managed by medical schools, health and hospital management boards, or government-led strategic development planning initiatives.

It is perhaps not surprising, given the exponential development of technological innovations that originated in the industrial revolution and which changed how we live, that medical work also changed and became more technologically focused, which in turn required both increased time and increased specialization of clinical training to utilize these developing technologies, such as surgical, radiological investigative and laboratory-based diagnostic techniques. The hospital (Groote Schuur) and medical school (University of Cape Town) where I was trained were famous for the achievements of Professor Chris Barnard and his team in performing the first heart transplant there, using a host of advanced surgical techniques, heart-lung machines to keep patients alive without a heart for a brief period of time, and state-of-the-art immunotherapy techniques to resist heart rejection, all specialist techniques he and his team took many years to master in some great medical schools and hospitals in the USA. Perhaps in part because of this, our training was very ‘high-tech’, consisting of early years spent learning basic anatomy, physiology and pathology-based science, and later years spent in surgical, medical, and other clinical specialty wards, mostly watching and learning from observation of clinical specialists going about their business treating patients. If I remember correctly, there were only a few weeks of community-based clinical educational learning, very little integrative ‘holistic’ patient-based learning, and almost no ‘soft-skill’ training, such as optimal communication with patients, working as part of a team with other health care workers such as nurses and physiotherapists, or learning to help patients in their daily home environment and social infrastructure. There was also almost no training whatsoever in the benefits of ‘exercise as medicine’, or in the concept of wellness (where one focuses on keeping folk healthy before they get ill, rather than dealing with the consequences of illness). This type of ‘specialist-focused’ training was common, particularly in Western countries, for most of the last fifty or so years, and as a typical product of this specialist training system, I chose first clinical research and then basic research, rather than more patient-focused work, as my career path, and a number of my colleagues from my University of Cape Town medical training class of 1990 have had superb careers as super-specialists in top clinical institutions and hospitals all around the world.

This increasing specialization of clinical training and practice, of which my own medical training described above is an example, has unfortunately had a negative impact both on general practitioner numbers and on primary care capacity. A general practitioner (GP) is defined as a medical doctor who treats acute and chronic illnesses and provides preventative care and health education to patients, and who has a holistic approach to clinical practice that takes biological, social and psychological factors into consideration when treating patients. Primary care is defined as the day-to-day healthcare of patients and communities, with the primary care providers (GPs, nurses, health associates or social workers, amongst others) usually being the first contact point for patients, referring patients on to specialist care (in secondary or tertiary care hospitals), and coordinating and managing the long-term treatment of patient health after discharge from secondary or tertiary care if it is needed. In the ‘old days’, GPs often worked in the community where they were born and raised, worked 24 hours a day as needed, and maintained their relationship with their patients through most or all of their lives. Unfortunately, for a variety of reasons, GP work has changed: GPs now often work set hours, patients are rotated through different GPs in a practice, the number of graduating doctors choosing to be GPs is diminishing, and as a result there is an increasing shortage of GPs in communities, and particularly in rural areas, of most countries. Sadly, GP work is often regarded as being of lower prestige than specialist work, the pay for GPs has often been lower than that of specialists, and with the decreased absolute number of GPs, the work burden on many GPs has increased (and paradoxically, with computers and electronic facilities, the note-taking and record-keeping requirements of GPs appear to have increased rather than decreased), leading to increased levels of burnout and to GPs choosing to turn to other clinical roles or to leave the medical profession completely, which exacerbates the GP shortage problem in a circular manner. Training of GPs has also evolved into specialty-type training, with doctors having to spend 3-5 years ‘specializing’ as a GP (today often called Family Practitioners or Community Health Doctors), and this too has paradoxically put folk off a GP career, and lengthens the time required before folk intent on becoming GPs can do so, become board certified, and enter or start a clinical GP practice. As the number of GPs decreases, more folk go directly to hospital casualty departments as their first ‘port of call’ when ill, and this puts a greater burden on hospitals, which somewhat ironically also creates an increased burden on specialists, who mostly work in such hospitals, and who end up seeing more of these folk who could often be treated very capably by GPs. This paradoxically leaves specialists less time to do the specialist and super-specialist work they spent so many years training for, with the result that waiting lists and times for ‘cold’ (non-emergency) cases increase, and hospital patient care suffers due to patient volume overload.

At a number of levels of strategic management of medical training and physician supply planning, there have been moves to counter this super-specialist focus of training and to encourage folk to consider GP training as an appealing career option. The Royal College of Physicians and Surgeons of Canada produced a strategic clinical training document (known as the ‘CanMEDS’ training charter) which emphasizes that, rather than just training pure clinical skills, contemporary training of clinical doctors should aim to create graduates who are all of medical experts, communicators, collaborators, managers, health advocates, scholars and professionals – in other words a far more ‘gestalt’ and ‘holistically’ trained medical graduate. This CanMEDS document has created ‘waves’ in the medical training community, and is now used by many medical schools around the world as their training ‘template’. Timothy Smith, senior staff writer for the American Medical Association, published an interesting article recently in which he suggested that similar changes were occurring in the top medical schools in the USA, with clinical training including earlier exposure to patient care, more focus on health systems and sciences (including wellness and ‘exercise is medicine’ programs), shorter time to training completion, and increased emphasis on using new communication technologies more effectively as part of training. In my last role as Head of the School of Medicine at the University of the Free State, working with Faculty Dean Professor Gert Van Zyl, Medical Program Director Dr Lynette Van Der Merwe, Head of Family Medicine Professor Nathanial Mofolo, Professor Hanneke Brits, Dr Dirk Hagemeister, and a host of other great clinicians and administrators working at the University or the Free State Department of Health, the focus of the training program was shifted to include a greater degree of community-based education as a ‘spine’ of training rather than as a two-week block in isolation, along with a greater degree of inter-professional education (working in teams with nurses, physiotherapists, and other allied health workers as part of training, in order to learn to treat a patient in their ‘entirety’ rather than as just a single clinical ‘problem’), and increased training of ‘soft skills’ that would assist medical graduates not only with optimal long-term patient care, but also with skills such as financial and business management capacity, so that they would be able to run practices optimally, or at least know when to call in experts to assist them with non-clinical work requirements, amongst a host of other innovative changes. We, like many other Universities, also realized that it was important to try to recruit medical students from the local communities around the medical school in which they grew up, and to encourage as many of these locally based students as possible to apply for medical training, though of course selection of medical students is always a ‘hornets’ nest’, and it is very challenging to get the balance right between the marks, essential skills and community needs of the many thousands of aspirant clinicians who wish to do medicine when so few places are available to offer them.

All of these medical training initiatives to try to change what has become a potentially ‘skewed’ training system, as described above, are of course ‘straw in the wind’ without government backing and good strategic planning and communication by country-wide health boards, medical professional councils, and the hospital administrators who manage staffing appointments and recruitment. As much as one needs to change the ‘focus’ and skills of medical graduates, the health structures of a country need to be similarly changed to be ‘focused’ on community needs and requirements, and aligned with the medical training program initiatives, for the changes to be beneficial and to succeed. Such training program changes and community-based intervention initiatives have substantial associated costs which need to be funded, and therefore there is a large political component to both clinical training and health provision. In order to strategically improve the status quo, governments can choose either to encourage existing medical schools to increase student numbers and statutory clinical training bodies to enact changes to the required medical curriculum to make it more GP-focused, or to build more medical schools to generate a greater number of potential GPs. They can also pay GPs higher salaries, particularly if they work in rural communities, or ensure better conditions of service and increased numbers of allied health practitioners and health assistants to lighten the stress placed on GPs, in order to ensure that optimal community clinical facilities and health care provision are provided. But how this is enacted is always challenging, given that different political parties usually have different visions and strategies for health, and changes occur each time a new political party is elected, which often ‘hinders’ rather than ‘enacts’ required health-related legislation, or, as in the case of contemporary USA politics, attempts to rescind previous change-related healthcare acts if they were enacted by an opposition political party. There is also competition between Universities which have medical schools for increases in the medical places in their programs (which result in more funding flowing into the Universities if they take more students), and of course any University that wishes to open a new medical school (as my current employer, the University of Waikato, wishes to do, having developed an exciting new community-focused medical school strategic plan that fulfills all the criteria of what a contemporary GP-focused training program should be, and that will surely become an exemplary new medical school if the plan is approved by the government) is regarded as competition for resources by those Universities which already run medical training programs and medical schools. Because of these competition-related and political issues, many major health-related change initiatives, for both medical training programs and the related community and state structural requirements, are extremely challenging to enact, which is why so many planned changes become ‘bogged down’ by factional lobbying either before they start or while they are being enacted.
This is often disastrous for health provision and training, as chaos ensues when a ‘half-changed’ system becomes ‘stuck’, or a new political regime or health authority attempts to impose further, often ‘half-baked’, changes on the already ‘half-changed’ system, which results in an almost unmanageable ‘mess’ that is sadly often the state of many countries’ medical training, physician supply, and health facilities, to the detriment of both the patients and the communities which they are meant to serve and support.

The way forward for clinical medical training and physician supply is therefore complex and fraught with challenges. But, having said this, it is clear that changes are needed, and brave folk with visionary thinking and strategic planning capacity are required both to create sound plans that integrate, across multiple sectors, all the changes needed for medical training reform to occur, and to enact them in the presence of opposition and resistance, which is always the case in the highly politicized world of health and medical training. Two good examples of success stories in this field were the changes to the USA health and medical training system which occurred as a result of the Flexner report of 1910, which set out guidelines for medical training throughout the USA and which were actually enacted and came to fruition, and the development of the NHS in the UK in the late 1940s, which occurred as a result of the Beveridge report of 1942, which laid out how and why comprehensive, universal and free medical services were required in the UK, and how these were to be created and managed; these recommendations were enacted by Clement Attlee, Aneurin Bevan and other members of the Labour government of that time. Both systems worked for a time, but sadly, due to multiple reasons and perhaps natural system entropy, both of these countries’ health services are currently in a state of relative ‘disrepair’, and it is obvious that major changes to them are again needed, and perhaps an entirely fresh approach to healthcare provision and training, similar to that initiated by the Flexner and Beveridge reports, is required. However, it is challenging to see this happening in contemporary times, given the polarized political climate in both countries, and strong and brave health leadership is surely required at this point in time in these countries, as always, in order to initiate the substantial strategic changes needed either to ‘fix’ each system or to create an entirely new model of health provision and training. Each country in the world has different health provision models and medical training systems, which work with varying degrees of success. Cuba is an example of one country that has enacted wholesale GP training and community medicine as the centerpiece of both its training and its health provision, though some folk would argue that it has gone too far in this regard, as specialist provision and access is almost non-existent there. Therein lies an important ‘rub’ – clearly there is a need for more GP- and community-focused medical training, but equally, it is surely important that there is still a strong ‘flow’ of specialists and super-specialists, both to train GPs in the specific skills of each different discipline of medicine, and to treat those diseases and disorders which require specialist-level technical skills. My own recent health scare exemplifies the ‘yin and yang’ of these conflicting but mutually beneficial / synergistic requirements. If it were not for the presence of a super-specialist with exceptional technical skills, I might not be alive today. Equally, the first person I phoned when I noted concerning symptoms was not a super-specialist, but rather my old friend and highly skilled GP colleague from my medical training days, Dr Chris Douie, who lives close by to us and who responded to my request for assistance immediately.
Chris got the diagnosis spot on, recommended the exact appropriate intervention, and sent me on to the required super-specialist, and was there for me not just to give me a clinical diagnosis but also to provide pastoral care – in other words ‘hold my hand’ and show me the empathy that is so needed by any person when they have an unexpected medical crisis. In short, Chris was brilliant in everything he did as first ‘port of call’, and while I eventually required super-specialist treatment of the actual condition, in his role as GP (and friend) he provided that vital first phase support and diagnosis, and non-clinical empathic support, which is so needed by folk when they are ill (indeed historically the local GP was not just everyone’s doctor but also often their friend). My own example therefore emphasizes this dual requirement for both GP and specialist health provision and capacity.

Like most things, medical training and health care provision have ‘swung’, pendulum-like, between specialist and generalist requirements and pressures over the last century. The contemporary perception, in an almost ‘back to the future’ way, is that we have perhaps become too focused on high-technology clinical skills and training (though, as above, there will always be a place and need for these), and that we need more of our doctors to be trained to be like their predecessors of many years ago, working out in the community, caring for their patients and creating an enduring life-long relationship with them, and dealing with their problems early and effectively, before they become life-threatening, costly to treat, and in need of expensive specialist intervention. It’s an exciting period of potential world-wide change in medical training and in the clinical health provision to communities, and a great time to be involved in either developing the strategy for medical training and health provision and / or enacting it – if the folk involved in doing so are left in peace by the lobby groups, politicians and folk who want to maintain the current unbalanced status quo due to their own self-serving interests. Who knows, maybe even clinicians, like in the old days, will be paid again by their patients with a chicken, or a loaf of freshly baked bread, and goodwill will again be the bond between the community, the folk who live in it, and the doctors and healthcare workers who treat them. And for my old GP friend Chris Douie, who is surely the absolute positive example and role model of the type of doctor we need to be training, a chicken will be heading his way soon from me, in lieu of payment, for potentially saving my life, and for doing so in such a kind and empathetic way, as surely any GP worth his or her ‘salt’ would and should do!