Category Archives: Medicine

Contemporary Medical Training And Societal Medical Requirements – How Does One Balance The Manifest Need for General Practitioners With Modern Super-Specialist Trends

For all of my career, since starting as a medical student at the University of Cape Town as an 18 year old fresh out of school many years ago, I have been involved in the medical and health provision and training world, and have had a wonderful career first as a clinician, then as a research scientist, and in the last number of years managing and leading health science and medical school research and training. Because of this background and career, I have always pondered long and hard about what makes a good clinician, what training best produces a good clinician, how we define what a ‘good’ clinician is, and how we best align the skills of the clinicians we train with the needs and requirements of the social and health environments of the countries in which they train. A few weeks ago I had a health scare which was treated rapidly and successfully by a super-specialist cardiologist; I was home the next day after the intervention, and ‘hale and hearty’ a few days after the procedure. If it had happened 50 years ago, in the absence of modern high-tech equipment and super-specialist skills, I would probably have died a slow and uncomfortable death, treated with drugs of doubtful efficacy that would not have benefited me much, let alone treated the condition I was suffering from. Conversely, despite my great respect for the super-specialist skills which helped me so successfully a few weeks ago, it has become increasingly obvious that this great success in clinical specialist training has come at the cost of reduced emphasis on general practitioner-focused training, and a reduction in the number of medical students choosing general practitioner work as a career after they qualify. This has caused problems for clinical service delivery in a number of countries, particularly in rural areas, and has paradoxically put greater strain on specialist services despite their pre-eminence in contemporary clinical practice in most countries around the world. My own experience grappling with the problem of how to increase the number of general practitioners produced by our training programs, as a Head of a School of Medicine previously, and this recent health scare which was treated so successfully by super-specialist intervention, got me thinking about how best we can manage the contradictory requirements of the need for both general practitioners and specialists in contemporary society, and whether this conundrum is best managed by medical schools, health and hospital management boards, or government-led strategic development planning initiatives.

It is perhaps not surprising, given the exponential development of technological innovations that originated in the industrial revolution and changed how we live, that medical work also changed and became more technologically focused, which in turn required both increased time and increased specialization of clinical training to utilize these developing technologies, such as surgical, radiological investigative and laboratory-based diagnostic techniques. The hospital (Groote Schuur) and medical school (University of Cape Town) where I was trained were famous for the achievements of Professor Chris Barnard and his team, who performed the first heart transplant there, using a host of advanced surgical techniques, heart-lung machines to keep the patient alive without a heart for a brief period of time, and state-of-the-art immunosuppression techniques to prevent rejection of the transplanted heart – all specialist techniques he and his team took many years to master in some great medical schools and hospitals in the USA. Perhaps in part because of this, our training was very ‘high-tech’, consisting of early years spent learning basic anatomy, physiology and pathology-based science, and later years spent in surgical, medical, and other clinical specialty wards, mostly watching and learning from observation of clinical specialists going about their business treating patients. If I remember it correctly, there were only a few weeks of community-based clinical educational learning, very little integrative ‘holistic’ patient-based learning, and almost no ‘soft-skill’ training, such as optimal communication with patients, working as part of a team with other health care workers such as nurses and physiotherapists, or learning to help patients in their daily home environment and social infrastructure. There was also almost no training whatsoever in the benefits of ‘exercise as medicine’, or in the concept of wellness (where one focuses on keeping folk healthy before they get ill, rather than dealing with the consequences of illness). This type of ‘specialist-focused’ training was common, particularly in Western countries, for most of the last fifty or so years, and as a typical product of this specialist training system, I chose first clinical research and then basic research rather than more patient-focused work as my career, and a number of my colleagues from my University of Cape Town medical training class of 1990 have had superb careers as super-specialists in top clinical institutions and hospitals all around the world.

This increasing specialization of clinical training and practice, as in the example of my own medical training described above, has unfortunately had a negative impact on both general practitioner numbers and primary care capacity. A general practitioner (GP) is defined as a medical doctor who treats acute and chronic illnesses and provides preventative care and health education to patients, and who has a holistic approach to clinical practice that takes biological, social and psychological factors into consideration when treating patients. Primary care is defined as the day-to-day healthcare of patients and communities, with the primary care providers (GPs, nurses, health associates or social workers, amongst others) usually being the first contact point for patients, referring patients on to specialist care (in secondary or tertiary care hospitals), and coordinating and managing the long term treatment of patient health after discharge from either secondary or tertiary care if it is needed. In the ‘old days’, GPs often worked in the community where they were born and raised, were available 24 hours a day as needed, and maintained their relationship with their patients through most or all of their lives. Unfortunately, for a variety of reasons, GP work has changed: GPs now often work set hours, patients are rotated through different GPs in a practice, the number of graduating doctors choosing to be GPs is diminishing, and as a result there is an increasing shortage of GPs in communities, and particularly in rural areas, of most countries. Sadly, GP work is often regarded as being of lower prestige than specialist work, and the pay for GPs has often been lower than that of specialists. With the decreased absolute number of GPs, the work burden on many GPs has increased (and paradoxically, with computers and electronic facilities, the note-taking and record-keeping requirements of GPs appear to have increased rather than decreased), leading to increased levels of burnout, with GPs choosing to turn to other clinical roles or to leave the medical profession completely, which exacerbates the GP shortage problem in a circular manner. Training of GPs has also evolved into specialty-type training, with doctors having to spend 3-5 years ‘specializing’ as a GP (often today called Family Practitioners or Community Health Doctors), and this too has paradoxically potentially put folk off a GP career, as it lengthens the time required before folk intent on becoming GPs can become board certified and capable of entering or starting a clinical GP practice. As the number of GPs decreases, more folk go directly to hospital casualty departments as their first ‘port of call’ when ill, and this puts a greater burden on hospitals. Somewhat ironically, this also creates an increased burden on specialists, who mostly work in such hospitals, and who end up seeing more of these folk who could often be treated very capably by GPs. This paradoxically leaves specialists less time to perform the specialist and super-specialist roles they spent so many years training for, with the result that waiting lists and times for ‘cold’ (non-emergency) cases increase, and hospital patient care suffers due to patient volume overload.

At a number of levels of strategic management of medical training and physician supply planning, there have been moves to counter this super-specialist focus of training and to encourage folk to consider GP training as an appealing career option. The Royal College of Physicians and Surgeons of Canada produced a strategic clinical training document (known as the ‘CanMEDS’ training charter) which emphasizes that, rather than training pure clinical skills alone, contemporary training of clinical doctors should aim to create graduates who are at once medical experts, communicators, collaborators, managers, health advocates, scholars and professionals – in other words, a far more ‘gestalt’ and ‘holistically’ trained medical graduate. This CanMEDS document has created ‘waves’ in the medical training community, and is now used by many medical schools around the world as their training ‘template’. Timothy Smith, senior staff writer for the American Medical Association, published an interesting article recently in which he suggested that similar changes were occurring in the top medical schools in the USA, with clinical training including earlier exposure to patient care, more focus on health systems and sciences (including wellness and ‘exercise is medicine’ programs), shorter time to training completion, and increased emphasis on using new communication technologies more effectively as part of training. In my last role as Head of the School of Medicine at the University of the Free State, working with Faculty Dean Professor Gert Van Zyl, Medical Program Director Dr Lynette Van Der Merwe, Head of Family Medicine Professor Nathanial Mofolo, Professor Hanneke Brits, Dr Dirk Hagemeister, and a host of other great clinicians and administrators working at the University or the Free State Department of Health, the focus of the training program was shifted to include a greater degree of community based education as a ‘spine’ of training rather than as a two week block in isolation, along with a greater degree of inter-professional education (working with nurses, physiotherapists, and other allied health workers in teams as part of training, to learn to treat a patient in their ‘entirety’ rather than as just a single clinical ‘problem’), and increased training of ‘soft skills’ that would assist medical graduates not only with optimal long term patient care, but also with skills such as financial and business management capacity, so that they would be able to run practices optimally, or at least know when to call in experts to assist them with non-clinical work requirements, amongst a host of other innovative changes. We, like many other Universities, also realized that it was important to try and recruit medical students who grew up in the local communities around the medical school, and to encourage as many of these locally based students as possible to apply for medical training, though of course selection of medical students is always a ‘hornet’s nest’, and it is very challenging to get right the balance of marks, essential skills and community needs among the many thousands of aspirant clinicians who wish to do medicine when so few places are available to offer them.

All of these medical training initiatives to try and change what has become a potentially ‘skewed’ training system, as described above, are of course ‘straws in the wind’ without government backing and good strategic planning and communication by country-wide health boards, medical professional councils, and the hospital administrators who manage staffing appointments and recruitment. As much as one needs to change the ‘focus’ and skills of medical graduates, the health structures of a country need to be similarly changed to be ‘focused’ on community needs and requirements, and aligned with the medical training program initiatives, for the changes to be beneficial and to succeed. Such training program changes and community based intervention initiatives have substantial associated costs which need to be funded, and therefore there is a large political component to both clinical training and health provision. In order to strategically improve the status quo, governments can choose either to encourage existing medical schools to increase student numbers and encourage statutory clinical training bodies to make the required medical curriculum more GP focused, or to build more medical schools to generate a greater number of potential GPs. They can also pay GPs higher salaries, particularly if they work in rural communities, or ensure better conditions of service and increased numbers of allied health practitioners and health assistants to lighten the stress placed on GPs, in order to ensure that optimal community clinical facilities and health care provision are provided for. But how this is enacted is always challenging, given that different political parties usually have different visions and strategies for health, and changes occur each time a new political party is elected, which often ‘hinders’ rather than ‘enacts’ required health-related legislation or, as in the case of contemporary USA politics, attempts to rescind previous healthcare acts if they were enacted by an opposition political party. There is also competition between Universities which have medical schools for increases in medical places in their programs (which result in more funding flowing into the Universities if they take more students), and of course any University that wishes to open a new medical school (as my current employers, the University of Waikato, wish to do; they have developed an exciting new community focused medical school strategic plan that fulfills all the criteria of what a contemporary GP-focused training program should be, and will surely become an exemplary new medical school if the plan is approved by the government) is regarded as a competitor for resources by those Universities which already run medical training programs and medical schools. Because of these competition-related and political issues, many major health-related change initiatives, for both medical training programs and the related community and state structural training requirements, are extremely challenging to enact, which is why so many planned changes become ‘bogged down’ by factional lobbying either before they start or while they are being enacted.
This is often disastrous for health provision and training, as chaos ensues when a ‘half-changed’ system becomes ‘stuck’, or a new political regime or health authority attempts to impose further, often ‘half-baked’, changes on the already ‘half-changed’ system. The result is an almost unmanageable ‘mess’, which is sadly the state of many countries’ medical training, physician supply, and health facilities, to the detriment of both the patients and the communities which they are meant to serve and support.

The way forward for clinical medical training and physician supply is therefore complex and fraught with challenges. But, having said this, it is clear that changes are needed, and brave folk with visionary thinking and strategic planning capacity are required both to create sound plans that integrate all the required changes across the multiple sectors involved, and to enact them in the presence of opposition and resistance, which is always the case in the highly politicized world of health and medical training. Two good examples of success stories in this field were the changes to the USA health and medical training system which occurred as a result of the Flexner report of 1910, which set out guidelines for medical training throughout the USA and which were actually enacted and came to fruition, and the development of the NHS in the UK in the late 1940’s as a result of the Beveridge report of 1942, which laid out how and why comprehensive, universal and free medical services were required in the UK, and how these were to be created and managed – recommendations which were enacted by Clement Attlee, Aneurin Bevan and other members of the Labour government of that time. Both systems worked for a time, but sadly, due to multiple reasons and perhaps natural system entropy, the health services of both the USA and the UK are currently in a state of relative ‘disrepair’, and it is obvious that major changes to them are again needed, and perhaps an entirely fresh approach to healthcare provision and training, similar to that initiated by the Flexner and Beveridge reports, is required. However, it is challenging to see this happening in contemporary times, with the polarized political climate currently prevailing in both countries, and strong and brave health leadership is surely required at this point in time in these countries, as always, in order to initiate the substantial strategic changes which are required either to ‘fix’ each system or to create an entirely new model of health provision and training. Each country in the world has its own health provision model and medical training system, and these work with varying degrees of success. Cuba is an example of one country that has enacted wholesale GP training and community medicine as the centerpiece of both its training and its health provision, though some folk would argue that it has gone too far in this regard, as specialist provision and access is almost non-existent there. Therein lies an important ‘rub’ – clearly there is a need for more GP and community focused medical training, but equally, it is surely important that there is still a strong ‘flow’ of specialists and super-specialists, both to train GPs in the specific skills of each different discipline of medicine, and to treat those diseases and disorders which require specialist-level technical skills. My own recent health scare exemplifies the ‘yin and yang’ of these conflicting but mutually beneficial / synergistic requirements. If it were not for the presence of a super-specialist with exceptional technical skills, I might not be alive today. Equally, the first person I phoned when I noted concerning symptoms was not a super-specialist, but rather my old friend and highly skilled GP colleague from my medical training days, Dr Chris Douie, who lives close by to us and who responded to my request for assistance immediately.
Chris got the diagnosis spot on, recommended the exact appropriate intervention, and sent me on to the required super-specialist, and he was there for me not just to give me a clinical diagnosis but also to provide pastoral care – in other words, to ‘hold my hand’ and show me the empathy that is so needed by any person when they have an unexpected medical crisis. In short, Chris was brilliant in everything he did as first ‘port of call’, and while I eventually required super-specialist treatment of the actual condition, in his role as GP (and friend) he provided that vital first phase support and diagnosis, and the non-clinical empathic support, which is so needed by folk when they are ill (indeed, historically the local GP was not just everyone’s doctor but also often their friend). My own example therefore emphasizes this dual requirement for both GP and specialist health provision and capacity.

Like most things, medical training and health care provision have, like a pendulum, ‘swung’ between specialist and generalist requirements and pressures over the last century. The contemporary perception, in an almost ‘back to the future’ way, is that we have perhaps become too focused on high technology clinical skills and training (though, as above, there will always be a place and need for these), and that we need more of our doctors to be trained to be like their predecessors of many years ago: working out in the community, caring for their patients and creating an enduring life-long relationship with them, and dealing with their problems early and effectively, before they become life-threatening, costly to treat, and require the intervention of expensive specialist care. It’s an exciting period of potential world-wide change in medical training and clinical health provision to communities, and a great time to be involved in either developing the strategy for medical training and health provision and / or enacting it – if the folk involved in doing so are left in peace by the lobby groups, politicians and folk who want to maintain the current unbalanced status quo due to their own self-serving interests. Who knows, maybe clinicians, like in the old days, will be paid again by their patients with a chicken, or a loaf of freshly baked bread, and goodwill will again be the bond between the community, the folk who live in it, and the doctors and healthcare workers who treat them. And for my old GP friend Chris Douie, who is surely the absolute positive example and role model of the type of doctor we need to be training, a chicken will be heading his way soon from me, in lieu of payment for potentially saving my life, and for doing so in such a kind and empathetic way, as surely any GP worth his or her ‘salt’ would and should do!


Anxiety, Stress And The Highly Sensitive Person – Too Much Of Something Always Becomes A Bad Thing That Damages One In The End

I am one of those people who worries all the time. If there is an issue at work or at home that is of concern, I will be up at 2.00 am wondering how best to solve it, and worrying about it until I am sure it is solved. When all is as well as it can be, I will find something to worry about – plans for the future, pension funds (or lack of them), my kids’ health, anything and everything. In many ways this has been a good thing, as it has helped me always plan ahead, find solutions to problems, and be aware of challenging situations as they develop, or even before they do. In many ways it has been a bad thing, as it means I get irritable and stressed when things are not working out well, and I am at the age when this continued mental ‘strain’, after many years of being the ‘status quo’, has the potential to cause cumulative physical damage to my body, resulting potentially in such clinical conditions as migraines, high blood pressure, heart attacks, and strokes, amongst others. There is clearly a genetic or physical environment component to this ‘worry’ state, as my father was very similar, and always seemed to be worried when he was not almost overly exuberant and happy (there was never a middle ground with him, which made life as a child both fun and challenging). For most of his adult life, until he suffered a series of heart attacks in his early fifties, he smoked ninety cigarettes a day (and was in his early years ‘proud’ of this fact and of his capacity to smoke prodigiously, given that in his era it was the ‘done thing’ to smoke) and was never to be seen without a cigarette in his hand, surely as an antidote to, and a mechanism to help him cope with, the stress he felt on a daily basis and which he surely worried about continuously. I have also noticed, since the advent of the mobile phone, that during meetings I sit in at work, or when I go out for a social evening, folk around me check their phones for text messages or emails on a regular basis, with some doing so seemingly every few minutes, which is also surely a pathological sign of something ‘worrying’ these folk, or of a ‘worry’ type of personality in folk who seem to need to check on information coming to them on an almost continuous basis. All of this got me thinking about ‘worry’ – known clinically as anxiety – and what causes it to occur, and why some folk appear to feel it more than others and seem to be ‘highly sensitive’ to stressful situations.

Anxiety is defined as worry about future events before they occur, and is different from, though related to, the concept of fear, which is defined as a psychological reaction to current events. Related to both concepts are those of stress, homeostasis and allostasis. The theory of homeostasis suggests that our natural preferred state of existence is one where we are in ‘equilibrium’ with the environment in which we live, and our body and mind are in a ‘steady state’, free of requirements, needs and challenges. When this steady state is challenged, for example by low energy levels in the body, we notice this as a stressor to our steady state existence (‘hunger’ is the mechanism by which we ‘notice’ this particular stress factor), and this stress induces us to respond to it by, in this example, generating actions and plans that will allow us to source and eat food, thereby increasing our body’s energy ‘levels’ back to the state with which we are comfortable and ‘happy’. Similarly, if we become hot, we move to a place where cooler conditions exist. In more complex examples, if our social or community life changes in a way we feel uncomfortable with, we make plans and enact changes that will attenuate this social stress, by either moving to a new place or environment, or taking steps to remove whatever or whoever is causing us discomfort, if it is in our power to do so. The process of achieving stability, or homeostasis, using behavioural and psychological changes, has recently been described as allostasis (though some of us believe this is an unnecessary term, as the definition of homeostasis already incorporates what is now described as allostasis). These allostatic responses attenuate stressful changes, or changes which are at least perceived as stressful by us, by releasing stress hormones in the body (for example cortisol) via the hypothalamic-pituitary-adrenal gland pathway, by activating the autonomic nervous system (for example the sympathetic nerves, which are responsible for initiating ‘fight or flight’ responses in the body), by releasing cytokines (humoral blood-borne ‘signallers’ which also induce a number of physical body responses to stress), or via other systems which are generally adaptive in the short term. These pathways all induce a number of ‘general alarm’ or ‘specific response’ changes in the physiological systems and different organs of the body, such as increasing the concentration of glucose in the blood and re-distributing it to areas of the body that need it most as a result of the induced stress, increasing cardiac output, blood pressure and blood flow to specific organs such as the muscles while reducing blood flow to the digestive and reproductive systems, and altering the immune system response, amongst others – which all in turn lead to symptoms one ‘feels’, such as dry mouth, rapidly beating heart, increased breathing rate, shaking muscles, nausea, diarrhoea, and even dizziness and confusion in extreme conditions. Like all things, some stress and occasional activation of this ‘allostatic’ stress response system is beneficial, both for reducing the targeted stress and for making the response systems more efficient through ‘practice’.
But, like all things, if the stressor is not removed, or if multiple different stressors occur at once, and these responsive systems remain ‘wide open’, this can result in a state of ‘chronic response fatigue’ in these systems, and ultimately cause damage to the body by the very mechanisms which are designed to protect it (for example, a raised blood pressure allows blood to be pumped quickly to targeted organs requiring increased blood flow for their optimal function, but chronically raised blood pressure causes ‘backflow’ problems to the heart, which eventually leads to heart failure, and ‘forward flow’ problems to other organs such as the kidneys, which are eventually damaged by continuously increased blood pressure over a period of time). What is defined as the ‘allostatic load’ is the ‘wear and tear’ of the body (and mind) which increases over time when someone is exposed to repeated or chronic stress, and represents the physiological consequences of chronic exposure to the hormonal and neural responses described above, which are ultimately damaging to the person who is ‘feeling’ the stress and whose body is continuously trying to react to it.
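For readers who like to think of such regulatory loops in concrete terms, the difference between an acute stressor (from which the system recovers quickly) and a chronic stressor (which keeps the response systems ‘wide open’) can be illustrated with a deliberately crude toy model. The sketch below is purely illustrative – the gain, decay and threshold numbers are invented, and ‘allostatic load’ is proxied simply by the number of time steps the stress response stays switched on, which is not a validated physiological measure:

```python
# Toy model of an allostatic control loop (illustrative only, invented numbers).
def simulate(stress_schedule, gain=0.5, decay=0.1, threshold=0.05):
    """Return (final deviation from set-point, accumulated 'allostatic load').

    Each step: a stressor perturbs the regulated variable, a proportional
    'allostatic response' (think cortisol / sympathetic activation) pushes it
    back toward the set-point, and load accrues for every step the response
    systems remain switched on above a nominal threshold.
    """
    state, load = 0.0, 0.0                    # set-point taken as 0.0
    for stressor in stress_schedule:
        state += stressor                     # stressor perturbs the steady state
        response = gain * state               # response proportional to deviation
        state -= response + decay * state     # active response + passive recovery
        if response > threshold:              # response systems 'wide open' this step
            load += 1                         # wear and tear accrues while they stay on
    return state, load

# The same total perturbation (5.0 units), delivered in two different ways:
acute = [5.0] + [0.0] * 49    # one large stressor, then time to recover
chronic = [0.1] * 50          # unrelenting low-grade stress, no recovery

print("acute:   deviation %.3f, load %.0f" % simulate(acute))    # load ~5
print("chronic: deviation %.3f, load %.0f" % simulate(chronic))  # load ~50
```

In this toy model the one-off stressor and the chronic stressor deliver exactly the same total perturbation, yet the chronic schedule keeps the response systems active for roughly ten times as many steps – a crude but useful picture of why allostatic load is about the duration of activation, and not just the size of any single stressor.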

All of these allostatic responses are reactive to an already occurring, or perceived to be occurring, stressful situation or environment, and the sensation of fear is the accompanying psychological emotion associated with perceiving such already occurring situations. But, as described above, anxiety is somewhat different, in that it is worry about future, rather than already occurring, events. When one is anxious, one is thinking about all the potential, rather than actual, implications of possible scenarios, based on one’s ‘reading’ of current situations or events occurring around one, that may, rather than will, occur and potentially impinge on one and possibly cause stressful situations at some time point in the future. Interestingly, anxiety ‘uses’, or is at least associated with, a number of the physical allostatic ‘response’ systems described above, such as the hypothalamic-pituitary-adrenal, autonomic and cytokine systems, and a number of the symptoms of anxiety are associated with activity of these ‘fight or flight’ response systems and the physiological perturbations they induce. In episodes of acute anxiety (also known as panic attacks), symptoms including trembling, shaking, confusion, dizziness, nausea and difficulty breathing occur, all of which are induced by the allostatic stress-related pathways described above. While some anticipation of the future, and resultant planning for it, can only be good for one from a long term safety and security perspective, and therefore occasional anxiety can be beneficial in ‘encouraging’ the planning and ‘making ready’ of reactive plans for potential stressors one is concerned about after ‘reading the runes’ of one’s current life, generalized anxiety disorder is a clinical condition characterized by excessive, uncontrollable and often irrational worry about future events, which occurs in between three and five percent of the population world-wide, and in which folk have a high level of anxiety about everyday problems such as health issues, finances, death, and family / social / work problems, or anticipate catastrophic situations to a degree not commensurate with their actual probability of occurring. Individuals with generalized anxiety disorder have a wide variety of ‘psychosomatic’ (body and mind) symptoms, including fatigue, headaches, nausea, muscle aches and tension, numbness in their hands and feet, fast breathing, stomach pain, vomiting, diarrhoea, sweating, irritability, agitation, restlessness, sleep disorders, and an inability to control the anxiety and / or its physical symptoms. If not adequately controlled, generalized anxiety disorder can result in a number of what are known as chronic ‘lifestyle’ disorders, such as high blood pressure, diabetes, migraines, heart attacks and strokes, as well as depression, irritable bowel syndrome, and a host of other ‘psychosomatic’ disorders. What causes an individual to develop generalized anxiety disorder is currently not well understood (it occurs more often in folk who have a family history of it), but it most often begins to manifest itself between the ages of 30 and 35, though it can also occur in childhood or late adulthood, and it appears to ‘tap into’ and chronically activate the allostatic physiological response mechanisms described above.

Another interesting ‘relative’ of anxiety disorders is what has become known as the Highly Sensitive Person (HSP) ‘disorder’. Folk who are highly sensitive people have a high degree of what is known as sensory processing sensitivity, or in other words they appear to respond to, or be aware of, physical body symptoms of stress and anxiety, or social or environmental situations, to a greater degree than folk who do not ‘suffer’ from this disorder. Folk who have HSP ‘feel’ all these body allostatic responses in an extremely sensitive way, via mechanisms that are still currently not well understood. Because of this, they are also ‘hyper-aware’ of social situations or environments that may trigger the ‘release’ of these physiological anxiety / stress-related response pathways in their bodies (or, vice versa, they may be hyper-aware of these social situations because of their naturally ‘up-regulated’ physical sensory state). This HSP state is either a curse or a blessing (or both), as it makes folk who ‘suffer’ from it prefer low stimulation environments and construct their lives to avoid over-stimulation, and potentially predisposes them to higher risk of chronic stress / anxiety related disorders, but it also makes them ‘feel’ life more, gives them more insight into and earlier awareness of developing social situations that others may not even be aware of, and makes them more ‘intuitive’ about what is going on around them. Whether HSP folk have higher levels of anxiety or a greater incidence of generalized anxiety disorder is currently not well known, but given that both ‘tap into’ the same allostatic physical body systems and mechanisms, it is likely that this is indeed so. It must be noted that the concept of a highly sensitive person has been differentiated from that of a hypersensitive person, defined as someone who over-reacts to any stimulus or slight. Folk with HSP may simply be quiet, appear introverted or ‘shy’, or be able to ‘hide’ their HSP ‘condition’, while hypersensitive folk are typically very challenging to deal with socially, though they too may have underlying anxiety as a cause of their over-reactions, ‘temper-tantrums’ and rages. The treatment of all of these different anxiety related disorders is challenging, and requires lifestyle change, psychological intervention (such as cognitive behavioural therapy) and / or medication, but there is a relatively poor cure rate and a high degree of recidivism, and folk with anxiety and stress related disorders need themselves to understand, acknowledge and work on their underlying condition, though the problem with doing so is that a hyper-sensitive responsive ‘state’ or condition is very difficult to understand, let alone treat. A number of folk use smoking, alcohol consumption, or avoidance behaviour as methods of ‘dealing’ with their anxiety or high level of sensitivity, but these short term ‘emollients’ create their own specific problems, and may themselves paradoxically increase anxiety and stress in those who use them as a stress / anxiety reducing mechanism.

Worry, therefore, can be a useful thing, preparing one to enact future responses to what one is ‘picking up’ in one’s current circumstances, if it continues for a short period of time only and if it is about a specific issue. Worry, if chronic or if it becomes a clinical disorder, can, through the allostatic pathways and circuits it uses to initiate and mediate ‘fight or flight’ body changes, cause a wide array of unpleasant symptoms and diminish one’s quality of life, and can ultimately cause major physical damage to one’s body if one does not manage it carefully, or treat it as something that needs to be ‘cured’. The ‘trappings’ of modern society, such as mobile phones and increased work and social connectivity and immediate communication capacity, have many benefits, but these can also ‘tap into’ and reinforce these anxiety-related allostatic pathways and create continuous stress of their own making – those folk who compulsively reach for their phones to check their messages every few minutes may well have an anxiety disorder, or be prone to developing one, and future research is surely needed to ascertain the veracity of this possibility. I myself am a ‘worrier’, and almost certainly a highly sensitive person, as was my father before me. This has created blessings and challenges both for us and for those around us – life can be beautiful, but life can also be challenging, on a daily basis, with most of it ‘raging’ around in our own minds rather than in the ‘real’ life around us per se. At twenty five, I would have said the benefits of being and living as a highly sensitive person and ‘worrier’ surely outweighed the challenges – the rose surely smelt better, the rain surely felt softer, the love was deeper, the anger stronger, the passion for life greater to and for us, compared to how most folk around us probably experienced their less ‘perceived’ life. However, now that I am about to reach the age of fifty, and am reaching the ‘tiger territory’ period of life for high blood pressure, heart attacks and other ‘diseases of a lived life’, I am not so sure, and the thought of a calm life, without worry, without stress, lived in soft colour and tranquil shades and hues, seems perhaps the better one, and one that should have been chosen as a preferential way of living all those years ago, or at least changed to now that I am more aware both of my own highly sensitive ‘condition’ and of the potential negative effects such a life can have on one’s physical response mechanisms, body organs and physiological systems. But, at the end of the day, can one ever really ‘choose’ one’s own ‘sensitivity to stimuli’ levels? Perhaps our own anxiety and stress levels, or at least our own perception of them, were set in our ancestors’ bodies thousands of years ago and passed down to us, even if they are redundant as a ‘need’ in our modern life, and are therefore almost impossible to materially change despite our wishes and best efforts to do so. More research is needed to better understand whether sensitivity to stimuli levels, and indeed those of anxiety itself, can ever be permanently attenuated, or whether they stay permanently ‘as is’, and one merely learns how to cope and ‘deal with’ them better with the passing of time or with enhanced understanding, treatment or therapy.

One’s life will surely happen to oneself, as it does for each of us as we move through life and its challenges, whether one worries about it or not, or whether one ‘feels it’ more or less, I guess, but in many ways it surely ‘feels’ more like it is ‘happening to one’ when one worries about it than when one does not – though doing so appears to damage one’s physical survival mechanisms through over-use as part of the process. It must be wonderful to live a life in the always warm, always comfortable environment which is the one with no worries. But, equally, one can never maintain a hot fire without some internal combustion occurring to create the heat, or, even more so, put out a fire once it has been burning for a long time and has created the ‘heat’ which is manifestly evident in the life lived with maximal sensitivity to stimuli and responsivity to all around it. Would one choose to put this ‘fire’ out and reduce the ‘heat’ in oneself if one could do so? How one answers that question will perhaps ascertain where on the spectrum of anxiety and sensitivity to stimuli one is, or at least where one would like to be (without the need to reach for one’s mobile phone to get the answer to it, as we do these days, or to light up a cigarette in order to help one reflect on it, like they did in my old man’s days). I’ll ponder this question myself as I listen with delight to the sound of the birds chirping in the garden outside, which ‘feels’ as if it ‘pierces’ my ears, as I sip my coffee and go through what I have written this morning wondering if it has been a good or bad writing session, as I bang the table in frustration when I discover that my printer has run out of ink and I can’t print it out for my records, and as, at the same time, I worry about whether I have all my ‘ducks in a row’ ahead of those important meetings I have at work on Tuesday after the public holiday Monday. Reflect, reflect, reflect. Worry, worry, worry. For some there is no peace, even on the quietest of days!


Anterior Cruciate Knee Ligament Injuries – The End Of The Affair For Most Sports Careers Despite The Injury Unlocking Exquisite Redundant Neuromuscular Protective Mechanisms

I was watching a rugby game recently and saw a player land wrongly in a tackle and immediately collapse to the ground clutching his knee joint, and heard later that he had suffered a ruptured anterior cruciate ligament injury that would require nine months post-injury before he would be able to return to his chosen sport. Many years ago, in my student days, after a few too many beers at a party, I jumped off a low wall, landed wrongly, and tore the meniscus in my left knee. The next day it had swollen up, but I did not think much of it and tried to drive to University, and I will always remember the horror I felt when, getting to the bottom of the road, I tried to push in the clutch with my left leg to allow use of the brake at the stop street, and my leg would not react at all; I only avoided an accident by turning off the car while working the brake pedal with my right foot. It always puzzled me afterwards why my leg would not respond at all despite my ‘command’ for it to do so, as even with the injury I expected that, while it might perhaps be painful, I would still have reasonable control over my leg movements, which had appeared okay when walking slowly to the car taking my weight on my uninjured leg. Perhaps this triggered a ‘deep’ interest in what controls our muscles and other body functions, and when I started a PhD degree with Professors Tim Noakes, Kathy Myburgh and Mike Lambert as my supervisors at the University of Cape Town in the early 1990’s, I chose to look at neural reflexes and brain control mechanisms regulating lower limb function after anterior cruciate ligament knee injury. So what happens when the knee joint suffers a major injury, and can one ever ‘come back’ from it?

The knee joint is one of the most precarious joints in the body. Compared to the hip and shoulder joints, which have quite a degree of stability generated by their ‘ball and socket’ design, it is simply made up of three individual bones (the femur, tibia and patella) moving ‘over’ each other while being attached to each other by a number of ligaments and muscles, which are pretty much all that creates stability in and around the knee joint. The knee mostly moves in a backwards / forwards plane (in medical terms, flexion and extension), and has a small degree of rotation inwards and outwards, but is basically a ‘hinge’ type joint that moves in one plane only. The major ligaments of the knee joint preventing too much flexion and extension are the anterior cruciate ligament (ACL), which prevents hyper-extension (the lower limb calf region moving too far ‘forwards’ relative to the upper thigh), and the posterior cruciate ligament (PCL), which prevents hyper-flexion of the knee joint. There are also relatively strong ligaments on each side of the knee joint (the medial and lateral collateral ligaments), as well as several ligaments and tendons securing the patella in place at the front of the knee. Two large pieces of cartilage, the medial and lateral menisci, ‘sit’ on the tibia and allow smooth movement to occur across the entire range of movement between the two big bones (femur and tibia) of the knee joint, and protect each of these from the damage which would occur if they ‘rammed’ into each other each time the bones moved without the protection of the two menisci.

While these ligaments (and there are several others in the knee joint beyond those I have described above), tendons and menisci provide the majority of support to maintain the fidelity of the knee joint, the surrounding muscles – particularly the quadriceps and hamstrings muscles – also provide important secondary support to the knee joint during active movement such as walking or running, when a greater degree of dynamic stability is needed beyond the static stability the ligaments and tendons supply. So muscles are not just creators of movement, they are also important stabilisers of the body’s joints, and there needs to be a high degree of dynamic control of them by the central nervous system during movement to ensure things work ‘just right’, with not too much and not too little force being applied to the joint at any one time during any movement. The hamstring muscles have been shown to be agonists (assistants) of the ACL, and when they fire they ‘pull back’ the lower part of the knee joint so as to reduce pressure on the ACL when the knee extends to its limits, while the quadriceps muscles similarly protect the PCL from having too much pressure on it associated with too much flexion of the knee joint (though only at certain angles of the knee joint, and not through its entire range of movement). Interestingly, the quadriceps muscles are not just agonists of the PCL, but also ‘antagonists’ of the ACL, and their activation can increase hyper-extension pressure on the knee joint (and therefore on the ACL) when the quadriceps contract, particularly when the knee is in an extended position. So the quadriceps muscles can be the ‘friend’ of the ACL and knee joint, but can also be its ‘foe’.

What is fascinating in this process is the structure and function of the nerve pathways both to and from the knee joint, the ACL and the muscles around them, and how these nerve pathways act differently in the intact ACL state as compared to the damaged ACL state. In the intact ACL are mechanoreceptors (receptors which pick up mechanical pressure) which fire when the ACL is put under pressure or moves; they send information back via nerves to the spinal cord, and cause increased firing of the hamstring muscles, in order to protect both the ACL and the integrity of the entire knee joint. When the ACL is ruptured, receptors called free nerve endings in the surrounding capsule of the knee joint fire in response to movement of the entire knee joint, which happens to a greater degree in the absence of the ruptured ACL. Importantly, these injury associated capsular free nerve ending reflexes don’t just increase the firing of the hamstring muscles; they at the same time reduce firing of the quadriceps muscles, in order to protect the knee from the further damage which could occur if the quadriceps were maximally active in the absence of the ACL. This free nerve ending pathway is known as a redundant pathway, as it only ‘fires’ when the ACL is damaged, and does not do so normally. Interestingly, the redundant free nerve ending related pathway does not seem to stop working even if the ACL is repaired or replaced, which means that even if one fixes the ligament materially, one cannot ever completely repair the sensitive neuronal control pathways as part of the operation.

While these redundant neural firing pathways are protective, and are designed to keep the knee from incurring further damage, they unfortunately do not help athletes who suffer ACL injuries to get back to their full strength and return to sport with the one hundred percent function they had prior to suffering the injury. The quadriceps inhibitory firing pathway is a particular problem from a return to sport perspective, as it means that the quadriceps muscles will always be weaker than before the ACL injury, and this is borne out by most studies of quadriceps strength after injury, which show a continued deficit of at least 5-10 percent in the injured limb compared to the unaffected limb – and that is when rehabilitation of the injured limb is done post-injury or post-operation; the deficit is even higher when it is not. Furthermore, the altered firing synergies, even those of the increased hamstring firing, appear to be sub-optimal from a functional pattern of movement perspective, even if they are protective, and there appear to be whole body / both limb firing pattern changes, with athletes favouring the injured leg and taking more weight on the uninjured limb even if they are unaware of doing this (though some folk speculate that using crutches for a prolonged period of time after ACL injury may be in part a cause of these whole limb and gait changes). These changes are surely at least to a degree responsible for the high rate of re-injury of the damaged ACL observed in those athletes who return to competitive sport after ACL injury, and potentially for the high rate of ACL or other knee joint injury in the unaffected limb which some folk suggest occurs with return to sport after ACL injury.

So, sadly for those who suffer ACL (and other) knee injuries and want to return to competitive sport, or to their pre-injury level of sport, these redundant neural mechanisms between the knee joint and the surrounding muscles, while functionally designed to give a measure of protection to the knee joint when the ACL is damaged or absent, paradoxically ensure by their very activity that the function of the surrounding muscles is attenuated, particularly in the quadriceps muscles, and athletes will never have ‘full’ functional activity of the knee joint after the injury, despite having a brilliant surgeon who performs a perfect mechanical replacement of the ACL surgically, and despite the best rehabilitative efforts of either the athlete or those assisting them with their rehabilitation. An athlete has two choices after suffering an ACL injury (and other associated ligament injuries worsen the prognosis even more). Firstly, they can attempt to return to their sport as they performed it before the injury, but change how they perform it by ‘compensating’ for the injury – in team sports by improving other aspects of their game so that their reduced capacity for agility and speed after injury is not ‘noticed’, and in individual sports by altering pacing strategy or style of performing their sport (though particularly in individual sports this is not really an option, and the loss of competitive capacity is ‘painfully obvious’) – with the awareness that they have a good chance of re-injuring themselves. Secondly, they can downgrade their expectations and level of sport, either retiring from their sport if competitive, or changing the intensity at which they routinely perform their sport to a lower level, as hard as it is for athletes to come to terms with having to do this. But there is no ‘going back’ to what life was like before the injury, and this creates a potential ethical dilemma for those involved in rehabilitating athletes after ACL injury – if one works on increasing, for example, their quadriceps strength, one is ‘going against’ a natural protective mechanism ‘unlocked’ by the ACL injury, and one may paradoxically be increasing the chances of future damage to the athlete by the very rehabilitation with which one is trying to help them; one should perhaps rather be ‘rehabilitating’ them by working on their psychological mindset, so that they are able to come to terms with the concept of permanent loss of some function of their injured knee and the need to potentially look for alternative sporting outlets or methods of earning their salaries.

The wonderful period of my life as a PhD student back in the early 1990’s, learning about these exquisite neuromuscular protective mechanisms surrounding the knee joint that are ‘activated’ after knee ligament injury (and potentially meniscal injury too), started a lifelong work ‘love affair’ with the brain and the regulatory mechanisms controlling the different and varied functions of the body, that has lasted to this day, and ‘unlocked’ a magical world for me of neural pathways and complex control processes that has ensured for me a lifetime without boredom and never a moment when I don’t have something to ponder on, apart from initiating an amazing ‘journey’ of trying to understand how ‘it all works’. But this scientific exploration has not helped me fix my knee joint after the injury all those years ago – my left leg has never been the same since that injury, which eventually required a full meniscectomy as treatment, and it still swells up if I run at all, and even if my cycle rides are too long, and the muscles around the affected knee have never been as strong as they were, no matter how much gym I do for them. So by understanding more about the nature of the mechanisms of response to something as major as an anterior cruciate ligament knee injury, I have also come to understand more about the concepts of fate and acceptance, and that a single bad landing (or indeed having one beer too many, leading to that bad landing) can create consequences that there is no ‘going back’ from, and that will change one’s life forever. After a bad knee injury, nature has given us the capacity for a ‘second chance’ through these redundant protective mechanisms, but that second chance is designed to work at a slower and more relaxed pace, and with the caution of experience and the conservatism the injury engenders, rather than with the freedom of expression that comes with youth and the feeling of invincibility associated with it. Rivers do not flow upstream, and we don’t get any younger as each day passes, and our knee joints sadly will never be the same again after major injury, despite the best surgery and rehabilitation that one gets and does for them. Nature ensures this ‘reduction in capacity’ happens paradoxically for our own ‘good’, and the biggest challenge for clinicians is to understand this and convey that message to the athletes they treat, and for athletes it is to accept this potential ‘truism’ too, and let go of their sporting ambitions and find a quieter, more sedate life sitting on the bank of the river they used to ride the flow of prior to suffering their knee injury. But please, left knee, let me have a few more good bike rides in the cool morning air, far from the madding crowd, before you pack up completely!


Athlete Pre-Screening For Cardiac And Other Clinical Disorders – Is It Beneficial Or A Classic Example Of Screening And Diagnostic Creep

Last week the cycling world was rocked by the death of an elite cyclist, who died of an apparent heart attack while competing in a professional race. A few years ago, when I was living in the UK, the case of a professional football player who collapsed in the middle of a game as a result of a heart attack, and only survived thanks to the prompt intervention of pitch-side Sports Medicine Physicians and other First Aid folk, received a lot of media attention, and there were calls for increased vigilance and screening of athletes for heart disorders. Many years ago, one of my good friends from my kayaking days, Daniel Conradie, who apart from being a fantastic person won a number of paddling races, collapsed while paddling in the sea and died of an apparent heart attack, doing what he loved best. Remembering all of these incidents got me thinking about young folk who die during sporting events, and about whether we clinical folk can prevent these deaths, or at least pick up potential risk factors before these folk do sport – a practice known as athlete screening, or pre-screening of athlete populations, which is still a controversial concept and is not uniformly practiced across countries and sports, for a variety of reasons.

Screening as a general concept is defined as a strategy used in populations to identify the possible presence of an ‘as-yet-undiagnosed’ disorder in individuals who, up to the point of screening, have not presented with or reported either symptoms (what one ‘feels’ when one is ill) or signs (what one physically ‘presents with’ / what the clinician can physically see, feel or hear when one is ill). Most medicine is about managing patients who present with a certain disorder or symptom complex and want to be cured, or at least treated to retain an optimal state of functioning. Screening for potential disorders is, as described, a strategic method of pre-emptively diagnosing a potential illness or disorder, in order to treat it before it manifests in an overt manner, in the hope of reducing later morbidity (suffering as a result of an illness) and mortality (dying as a result of the illness) in those folk being screened. It is also enacted to reduce the cost and burden of clinical care which would result from illnesses not being picked up until it is too late to treat them conservatively with lifestyle related or occupational changes, when costly medical interventions are needed which put a drain on the resources of the state or organizing body which considered the need for screening in the first place. Universal screening involves screening all folk in a certain selected category (such as general athlete screening), while case finding screening involves screening a smaller group of folk based on the presence of identified risk factors in them, such as when a sibling is diagnosed with cancer or a hereditary disorder.

For a screening program to be deemed necessary and effective, it has to fulfil what are known as Wilson’s screening criteria – the condition should be an important health problem, the natural history of the condition should be understood, there should be a recognisable latent or early symptomatic stage, there should be a test which is easy to perform and interpret and which is reliable and sensitive (it should not produce too many false positive or false negative results), treatment of a condition diagnosed through screening should be more effective for being started early, there should be a policy on who should be treated if they are picked up by the screening program, and diagnosis and treatment should be cost-effective, amongst other criteria. Unfortunately, there are some ‘side-effects’ of screening programs. Overscreening is when screening occurs as a result of ‘defensive’ medicine (when clinicians screen patients simply to prevent themselves being sued in the future if they miss a diagnosis) or physician financial bias, where physicians who stand to make financial gain from performing screening tests (sadly) advocate large population screening protocols in order to make a personal profit from them. Screening creep is when, over time, recommendations for screening are made for populations with less risk than in the past, until eventually the cost/benefit ratio of doing them becomes less than marginal, but they are continued for the same reasons as for overscreening. Diagnostic creep occurs when, over time, the requirements for making a diagnosis are lowered, with fewer symptoms and signs needed to classify someone as having an overt disease, or when folk are diagnosed as having a ‘pre-clinical’ or ‘subclinical’ disease. Patient demand is when patients themselves push for screening of a disease or disorder after hearing about it and being concerned about their own or their family’s welfare. All of these factors make the implementation of a particular screening program almost always a controversial process, one which requires careful consideration and an understanding of one’s own personal (often subconscious) biases when making decisions related to screening or not screening populations, whether as a clinician, health manager or member of the public.
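
To make the false positive and false negative issue concrete, the way a test’s sensitivity and specificity combine with the rarity of a condition can be worked through in a few lines of code. The following Python sketch uses purely hypothetical numbers of my own choosing (not data from any study), to illustrate why even a very good test, applied to a rare condition, returns mostly false positives:

    def positive_predictive_value(sensitivity, specificity, prevalence):
        # Probability that someone who tests positive truly has the condition
        true_positives = sensitivity * prevalence
        false_positives = (1.0 - specificity) * (1.0 - prevalence)
        return true_positives / (true_positives + false_positives)

    # A seemingly excellent hypothetical test (95% sensitive, 95% specific)
    # applied to a condition assumed to affect 1 in 1000 athletes:
    ppv = positive_predictive_value(0.95, 0.95, 0.001)
    print(f"Chance a positive result is real: {ppv:.1%}")  # roughly 1.9%

On these assumed numbers, around fifty athletes would be flagged for every one who truly has the condition, which is the mathematical root of the overscreening and screening creep concerns described above.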

Regarding athlete screening specifically, there is still a lot of controversy regarding who should be screened, what they should be screened for, how they should be screened, and who should manage the screening process. Currently, to my knowledge, Italy is the only country in the world where there is a legal requirement for pre-screening of athlete populations and of children before they start playing sport at school (including not just physical examination but also ECG-level heart function analysis). In the USA, American Heart Association guidelines (history, examination, blood pressure and auscultation – listening to the heart with a stethoscope – of heart sounds) are recommended, but practice differs between states. In the UK, athlete screening is not mandatory, and the choice is left up to the different sporting bodies. In the Nordic countries, screening of elite athletes is mandated at government level, but not of all athlete populations as happens in Italy. There is ongoing debate about who should manage athlete screening in most countries, with some folk feeling it should be controlled at government level and legislated accordingly, other folk suggesting it should be controlled by professional medical bodies such as the American Heart Association in the USA or the European Society of Cardiology in Europe, and other folk believing it should be controlled by the individual sporting bodies which manage each different sporting discipline, or even separately by the individual teams or schools that want to protect both the athletes and themselves by doing so. Obviously who pays for the screening is a large factor in these debates, and it is perhaps because of this that there is no unanimity in policy across countries, clinical associations and sporting bodies as described above.

The fact that there is no clear world-wide policy on athlete screening is on the one hand surprising, given the often emotional calls to enact it each time a young athlete dies, and also because the data from Italian studies have shown that the implementation of their all-population screening programs reduced the incidence of sudden death in athletes from around 3.5/100 000 to around 0.4/100 000 (for those interested, these data are described in a great study by Domenico Corrado and colleagues in the journal JAMA). But the data described also show that the mortality rate was relatively low to start with – from the above figures, of 100 000 folk playing sport, only 3.5 died when playing sport before the implementation of screening, and a far higher number of folk die each day from a variety of other clinical disorders. The number of folk ‘saved’ is also very small in relation to the cost – a study by Amir Halkin and colleagues calculated, based on cost-projections from the Italian experience, that a 20-year program of ECG testing of young competitive athletes similar to that conducted by the Italians would cost between 51 and 69 billion dollars and would save around 4800 lives, and the cost per life saved was therefore likely to range between 10 and 14 million dollars. While each life lost is an absolute tragedy both for that person and for their family and friends, most lawmakers and government / governing bodies would surely think very carefully before enacting such expensive screening programs, with such unfavourable cost/benefit ratios, again with high burdens of other diseases that require their attention and funds on a continuous basis to be managed in parallel with athlete deaths. So from this ‘pickup’ rate and cost/benefit perspective one can already see reason for concern regarding the implementation of broad screening programs for athlete populations.
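
As a quick arithmetic check of those figures (using only the numbers quoted above from the Halkin projection), dividing the projected program cost by the projected lives saved reproduces the quoted cost per life saved; a minimal sketch:

    cost_low, cost_high = 51e9, 69e9   # projected 20-year program cost in dollars
    lives_saved = 4800                 # projected lives saved over the same period

    print(f"Cost per life saved: ${cost_low / lives_saved / 1e6:.1f} million "
          f"to ${cost_high / lives_saved / 1e6:.1f} million")
    # prints roughly $10.6 million to $14.4 million, matching the quoted range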

Of equal concern is the level of both false negative and false positive tests associated with athlete screening. False negatives occur when tests do not pick up underlying abnormalities or problems, and in the case of heart screening, if one does not include ECG evaluation in the testing ‘battery’, a high rate of false negative results is often described for athlete testing. Even ECGs are not ‘fail-proof’, and some folk advocate that heart-specific testing should include even more advanced testing than ECG can offer, including ultrasound and MRI based heart examination techniques, but these are very expensive and even less cost-effective than the tests described above. False positives occur when tests diagnose a disorder or disease in athletes that is not clinically relevant or indeed does not exist. In athletes this is a particular problem when screening for heart disorders, as doing exercise routinely is known to increase heart size to cope with the increased blood flow requirements which are part of any athletic endeavour, and this is called ‘athlete’s heart’. One of the major causes of sudden death is a heart disorder known as hypertrophic cardiomyopathy, where the heart pathologically enlarges or dilates, and it is very difficult on most screening tests to tell the difference between athlete’s heart and hypertrophic cardiomyopathy, with several folk diagnosed as having the latter and prevented from doing sport when their heart is ‘normally’ enlarged as a result of their sport participation rather than pathologically enlarged. A relevant study of elite athletes in Australia by Maria Brosnan and colleagues found that of 1197 athletes tested using ECG-level heart tests, 186 were found to have concerning ECG results (using updated ECG pathology criteria this number dropped to 48), but after more technically advanced testing of these concerning cases, only three athletes were found to have heart pathology that required them to stop their sport participation, which are astonishing figures from a potential false positive perspective. Such false-positive tests can result in potential loss of future sport-related earnings or other benefits of sport participation.
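
Working through the Brosnan figures quoted above makes the false positive burden explicit (this assumes, as the study design implies, that the three confirmed cases were among those initially flagged; the calculation itself is mine, not the study’s):

    screened = 1197          # elite athletes given an ECG-level heart test
    flagged_original = 186   # concerning ECGs under the original criteria
    flagged_updated = 48     # concerning ECGs under updated pathology criteria
    true_positives = 3       # athletes ultimately required to stop sport

    print(f"Flagged under original criteria: {flagged_original / screened:.1%}")
    print(f"True positives among flagged (original): {true_positives / flagged_original:.1%}")
    print(f"True positives among flagged (updated):  {true_positives / flagged_updated:.1%}")
    # roughly 15.5% of athletes flagged, of whom only ~1.6% (or ~6% with the
    # updated criteria) actually had pathology requiring withdrawal from sport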

Beyond false-negative and false-positive tests, there are a number of other factors which ensure that mass athlete screening remains controversial. For example, Erik Solberg and colleagues reported that while the majority of athletes were happy to undergo ECG and other screening, 16% of football players were scared that the pre-screening would have consequences for their own health, 13% were afraid of losing their licence to play football, and 3% experienced overt distress during pre-screening itself because of undergoing the tests per se. The issue of civil liberties versus state control therefore needs to come into consideration in debates such as whether screening of athletes should be enacted as a ‘blanket’ requirement. While most athlete screening programs and debate focus on heart problems, there are a number of other non-cardiac causes of sudden death in athletes, such as exercise-induced anaphylaxis (an acute allergic response exacerbated by exercise participation), exercise-associated hyponatremia, exertional heat illness, intracranial aneurysms and a whole lot of other clinical disorders, and the debate is further complicated by the question of whether these ‘other’ disorders should be included in the screening process. Furthermore, most screening programs focus on young athletes, while a large number of older folk begin doing sport at a later age, often after a long period of sedentary behaviour, and these older ‘new’ or returning sport enthusiasts are surely at an even higher risk of heart-related morbidity or mortality during exercise, so one needs to consider whether screening should incorporate such folk too. However, whether there should be older-age-specific screening for a variety of clinical disorders is as hotly debated and controversial as young athlete screening is, and adding screening for exercise-specific potential issues surely complicates the matter to an even greater degree, even if an argument can be made that it is surely needed.

In summary therefore, screening of athletes for clinical disorders that may harm or even kill them during their participation in the sport they perform is still a very controversial area of both legislation and practice. There is an emotional pain deep in the ‘gut’ each time one hears of someone dying in a race, and a feeling that as a clinician or person one should do more, or that more should be done to ‘protect them from themselves’ using screening as the tool to do so. But given the unfavourable cost/benefit ratio from both a financial and a ‘pickup’ perspective, it is not clear whether making a country-wide decision to conduct athlete screening is not an example of both screening and diagnostic creep, or whether athlete screening satisfies Wilson’s criteria to any sufficient degree. If I was a government official, my answer to whether I would advocate country-wide screening would be no, based on that unfavourable cost/benefit ratio. If I was a member of a medical health association, to this same question I would answer yes, both from an ethical and a regulatory perspective, as long as my association did not have to foot the bill for it. If I was head of a sport governing body, I would say yes, to protect the governing body’s integrity and to protect the athletes I governed, as long as I did not have to foot the bill for it. If I was a clinical researcher, I would say no, as we do not know enough about the efficacy of athlete screening and because there is too high a level of false-positive and false-negative results. If I was a sports medicine doctor I would say yes, as this would be my daily job, and I would benefit financially from it. If I was an athlete, I would be ambivalent, saying yes from a self-protection perspective, but no from a job and income protection perspective. If I was the father of a young athlete, I would say yes, to be sure my child was safe and would not be harmed by playing sport, but I would also worry about the psychological and social consequences if he or she were prohibited from playing sport as a result of a positive heart or other clinical screening test. It is in these conflicting answers I give when casting myself in these different roles (and I am sure each of you reading this article would give a similarly wide array of responses) that the controversy in athlete screening perhaps originates, and this is what will make it always contentious. I do think that if, as a newly qualified clinician back in our paddling days, I had tested my great friend Daniel Conradie’s heart function, found something worrying, and suggested he stop paddling because of it, he would probably have told me to ‘take a hike’ and continued paddling even with such knowledge. I am sure as a young athlete I would have done the same if someone had told me they were worried about something in my health profile but were not one hundred percent sure of it having a negative future consequence on my sporting activity and future life prospects. Athlete screening tests and decisions related to them will almost always be about chance and risk, rather than certainty and conclusive determination of outcomes. To race or not to race, based on a chance of perhaps being damaged by racing, or even dying, given the outcome of a test that warns you but may be either false-positive or false-negative – that is the question. What would you do in such a situation, as an athlete, as a governing body official, or as a legislator?
That is something to ponder that does not seem to have an easy answer, no matter how tragic it is to see someone young die while doing what they love best.


Chronic Fatigue Syndrome – Is This Contemporary Neurasthenia An Organic Neurological Or Psychiatric Disorder Associated With Childhood Trauma Related Chronic Anxiety And Resultant Ego Depletion

I was watching the Two Oceans running marathon in Cape Town yesterday on the square box, and marvelled not only at the aesthetic beauty of Cape Town, but also at how many folk of all ages ran the iconic race, and at their visible efforts to resist the sensations of fatigue they were clearly all feeling as the race reached its endpoint and they laboured valiantly to reach the finish line in the fastest time possible for each of their abilities. Some recently published top-notch research articles on the mechanisms of fatigue by Roger Enoka, Romain Meeusen and Markus Amann, amongst others (surely, along with Simon Gandevia, the scientists who have shaped our contemporary view of fatigue more than anyone else), have been doing the ’rounds’ amongst us science folk on research discussion groups over the last while, and have ‘reignited’ an interest in the field in me. A large part of my research life has been spent trying to understand the mechanisms behind the symptoms of fatigue, mainly in athletes, but also in those suffering from the clinical disorder known as chronic fatigue syndrome. As I come up quickly to the big age of 50 later this year, I notice that the daily physical and mental activities which I used to do with ease in my youth fatigue me more easily now. Because of this I have to ‘pace’ myself more carefully in all aspects of life to ‘preserve’ energy to ‘fight the good fight’ another day, so as not to run the risk of collapsing completely in the manner I witnessed in those folk with chronic fatigue syndrome I tried to assist both as a clinician and scientist during my earlier career, who pushed too hard and subsequently became moribund because of it. All of these recent observations have got me thinking about chronic fatigue syndrome (CFS), also known as myalgic encephalomyelitis (ME), what causes it, and why it manifests in some folk and not others.

Fatigue is a complex emotion which is felt by all folk on a daily basis, but which is paradoxically very difficult to define. It has mental and physical symptoms and signs, and is often increased by and related to exertion of any kind. Fatigue can be acute, where there is a direct correlation of the symptoms of fatigue to a specific task or activity and the symptoms attenuate when the activity ends, or chronic, when the symptoms of fatigue remain for a prolonged period and are not attenuated by a period of rest, and the reasons for these chronic symptoms remaining are very difficult to understand. In the sporting world, chronic fatigue caused by pushing oneself too long and too hard in training and racing is known as over-training syndrome, and has a symptom complex which includes, apart from extreme fatigue, ‘heavy legs’, increased waking pulse rate, sleep disorders, weight loss (or weight gain), lack of motivation, depression and decreased libido, none of which improve unless there is a prolonged period of rest with no physical training. Working at the University of Cape Town with great scientists Mike Lambert, Liesl Grobler, Malcolm Collins, Karen Sharwood, Wayne Derman, and others, for my medical doctorate in the late 1990’s we examined athletes who were moribund from over-training, and found that a number of them had pushed themselves so hard and so long that they had developed skeletal muscle pathology (damaged mitochondria in particular) to go with all these chronic fatigue symptoms, and we called this symptom complex the fatigued athlete myopathic syndrome, and later acquired training intolerance. The words the athletes we examined used to describe their symptoms were classic and perhaps ‘explained’ the issues better than scientific or medical terms – one sufferer declared that they had ‘no spring in the legs’, another that ‘one kilometre now feels what equalled 100 km previously’, and another that ‘at its peak, the fatigue left me halfway between sleeping and waking most of the time’. Although there was perhaps a degree of hubris in these self-reported symptoms of fatigue, all these folk felt that the symptoms profoundly affected their exercise performance and lifestyle. Significantly, the majority of these folk had evidence of suffering from depression, and also did not want to stop training and racing, and indeed found it almost impossible to stop training and racing despite these profound symptoms of chronic fatigue.

I carried on my interest in this field when moving to Northumbria University in the UK in 2006, where I assisted Paula Robson-Ansley and her PhD student Chris Toms, who did some great work examining causation, clinical testing and exercise prescription for folk with classical chronic fatigue syndrome, as opposed to those with acquired training intolerance (though there is surely a relationship between these syndromes). Folk with CFS have symptoms of chronic and extreme fatigue which are persistent or relapsing, present for six months or longer, not the result of ongoing exertion, not attenuated substantially by rest, and which cause impairment of activities which were previously easy to perform. They also have four or more ‘other’ diagnostic criteria, including impaired memory or concentration, sore throat, tender cervical / axillary lymph nodes, muscle pain, multi-joint pain, headaches, unrefreshing sleep or post-exercise malaise. It is, importantly, a diagnosis of exclusion of other medical causes of fatigue such as cancer, TB, endocrine or hormonal imbalances, or psychiatric or neurological disorders, and a clinician must always be careful to exclude these specific organic medical causes before diagnosing someone with CFS. The cause of CFS is unknown and hotly debated – it is usually precipitated by a viral infection such as Epstein-Barr virus infection (glandular fever), and viral or infective causes, immune function issues, and toxic pathogens or chemicals have all been suggested as causes of CFS, but not all folk who have CFS have any or all of these potential triggers or causal agents as part of their presenting history. It is notoriously difficult to treat, and some folk are left moribund and with significantly impaired lives for decades, although in some folk the syndrome seems to ‘burn out’ and they improve with time, or learn to live with their symptoms by managing them carefully. Unfortunately there is a high level of suicide in folk suffering from CFS, though it is not clear if this is related to the underlying causation of the disorder or to its long-term effect on lifestyle and physical capacity.
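
Because the case definition described above is essentially a checklist, it can be expressed as a short decision sketch. The following Python fragment is an illustration of the logic only, paraphrasing the criteria as given in this paragraph – the symptom names and function are my own shorthand, and a real diagnosis of exclusion obviously cannot be reduced to a boolean flag:

    # The eight 'other' diagnostic criteria described above
    OTHER_CRITERIA = {
        "impaired_memory_or_concentration", "sore_throat",
        "tender_cervical_or_axillary_lymph_nodes", "muscle_pain",
        "multi_joint_pain", "headaches", "unrefreshing_sleep",
        "post_exercise_malaise",
    }

    def meets_cfs_definition(fatigue_months, fatigue_due_to_exertion,
                             substantially_relieved_by_rest,
                             other_medical_causes_excluded, reported_symptoms):
        # Core criterion: persistent or relapsing fatigue of six months or
        # longer, not resulting from ongoing exertion and not substantially
        # attenuated by rest
        if fatigue_months < 6:
            return False
        if fatigue_due_to_exertion or substantially_relieved_by_rest:
            return False
        # Diagnosis of exclusion: other medical causes must be ruled out first
        if not other_medical_causes_excluded:
            return False
        # Plus four or more of the eight 'other' diagnostic criteria
        return len(OTHER_CRITERIA & set(reported_symptoms)) >= 4

    # Hypothetical example: six-month fatigue plus four minor criteria -> True
    print(meets_cfs_definition(8, False, False, True,
                               {"sore_throat", "headaches",
                                "unrefreshing_sleep", "post_exercise_malaise"}))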

What is interesting (and of concern) for those folk studying CFS and trying to understand its aetiology and how to treat it is the controversy and level of emotion attached to its diagnosis and treatment. Chronic fatigue syndrome used to be better known as myalgic encephalomyelitis (ME), first diagnosed in the 1950’s after a group of doctors and nurses in a specific hospital developed a post-viral syndrome with symptoms including chronic fatigue along with some neurological muscle and central nervous system related symptoms (hence the name ME), and it was first thought to be a neurological disorder. But with time, as it was found that more and more folk diagnosed with ME did not have classic ‘organic’ neurological signs, it became thought of more as a psychiatric disorder and became more often described as CFS, the predominant symptomatology of fatigue being the major ‘descriptor’ of the disorder. What is astonishing is that, as well described in a fascinating article by Wojtek Wojcik and colleagues at King’s College London, in a survey of neurologist members of the Association of British Neurologists, 84% of respondents did not view CFS as a neurological disorder but rather as a psychiatric disorder. But, paradoxically, a number of patients with CFS would prefer it to be described as a neurological rather than a psychiatric disorder (and would prefer it still to be called ME), because of the social stigma of the label of having a psychiatric disorder. Somewhat astonishingly, as described by Michael Sharpe of the University of Edinburgh, there was even a negative response to a study of his which found that cognitive behavioural therapy and graded exercise therapy (the PACE trial) helped improve the symptoms of sufferers of CFS/ME, with several major patient organizations apparently dismissing the trial findings and being critical of them, because the findings could suggest that the syndrome was psychiatric in origin if cognitive behavioural therapy worked, rather than an organic neurological disorder, in which case such therapy should not work. As Sharpe concluded, in his own words, it is a ‘funny old world’ when a study shows that a therapy works, but patients are angry because they did not want it to work, because of the stigma its working would potentially create.

Wojcik and colleagues also made the point that the majority of symptoms of CFS are almost identical to those of neurasthenia, a psychiatric disorder which was a prominent diagnosis in the 1800’s and early 1900’s, but which has become almost unheard of as a diagnosis in contemporary times. Neurasthenia was described as a ‘weakness of nerves’ by George Beard in 1869, as having symptoms of fatigue, anxiety, headache, heart palpitations, high blood pressure, neuralgia (pain along the course of a specific nerve) and depressed mood associated with it. The ICD-10 definition of neurasthenia is that of having fatigue or body weakness and exhaustion after minimal effort, which is persistent and distressing, along with depressive symptoms and two of the following symptoms: muscle aches and pains, dizziness, tension headaches, sleep disturbances, inability to relax, irritability and dyspepsia (indigestion). William James referred to neurasthenia as ‘Americanitis’ (he suffered from neurasthenia himself), as so many Americans in the 1800’s were diagnosed with it, particularly women, and it was a ‘popular’ diagnosis whose treatment was either a rest cure or electrotherapy. In World War One neurasthenia was a common diagnosis for ‘shell shock’, and folk with shell shock related neurasthenia were treated with prolonged rest. In the 20th century neurasthenia was increasingly thought of as a behavioural rather than a physical condition, and eventually it ‘fell out of favour’ and was ‘abandoned’ as a medical diagnosis. As Wojcik and colleagues suggest, not just the symptoms, but the ‘trajectory’ of the classification of the disorder, follow a strikingly similar pattern to that of CFS/ME, which also started off being diagnosed as an organic / neurological disorder and is now thought of as a psychiatric disorder, one which is (sadly) increasingly stigmatized by lay folk and indeed even by some clinicians.

Neurasthenia was thought by Beard to be caused by ‘exhaustion’ of the central nervous system’s energy reserves, which he attributed to the (even in those days) stresses of urbanization, an increasingly competitive business environment and social requirements – it was thought that neurasthenia was mostly associated with ‘upper class’ folk and with professionals working in stressful environments. Sigmund Freud thought there was a strong relationship to anxiety and to the basic ‘drives’, and, as he almost always did, related neurasthenia to ‘insufficient libidinal discharge’ (i.e. not enough sex) that had ‘a poisonous effect on the organism’. Both Freud and Carl Jung believed that drives were the result of the ‘ego’ state, and that disorders such as neurasthenia were a result of imbalances in this ego state. In their model, the ‘id’ was the basic component of the subconscious psyche which encompassed all our primitive needs and desires. The ‘ego’ was the portion of the psyche which maintains the sense of self, and which recognizes and tests reality. A well-functioning ego perceives reality, differentiates the outer world from the inner images and desires generated by the id, and ‘controls’ these. The ego develops in the first part of life, and is associated with a history of object cathexes. Cathexes are attachments of mental or emotional energy to an idea or object. Object cathexes are generated by the id, which ‘feels’ erotic and other ‘trends’ as needs. The ego, which to begin with is feeble, becomes aware of these object cathexes, and either acquiesces to or understands these needs and manages them (and thus becomes ‘strong’), or is disturbed by them and ‘fends’ them off by the process of repression (and becomes weak and ‘conflicted’). If weak, the ego deals with its inadequacy by either repressing unwanted thoughts (the thrusting back by the ego from the conscious to the unconscious of any ideas of a disagreeable nature) or developing a complex (a group of associated, partially or wholly repressed ideas that can evoke emotional forces which influence an individual’s behaviour, usually ‘outside’ of their awareness). As a result of these complexes developing, folk use either projection, a mental mechanism by which a repressed complex is disguised by being thought to belong to the external world or to someone else, or transference, the ‘shifting’ of an affect, either affection or hostility, from one person or idea to another based on unconscious identification, in order to deal with them at a subconscious level. Alfred Adler described the inferiority complex thus – a combination of emotionally charged feelings of inferiority operates in the unconscious to produce either timidity or, as compensation, exaggerated aggression or a paradoxical perception of superiority, and one’s drives are a result of, or compensation for, feelings of inferiority derived from previous unpleasant experiences. For example, competing in extreme sport would be a compensation for being bullied in the past, or being abused as a child, or being ignored by a parent when young.
For Freud and Jung, signs of such complexes included disturbing dreams and ‘slips of the tongue’, nervous tics and involuntary tremors, fanatical attachment to projects and goals, envy and dislike of individuals who are successful, falling apart when failing to successfully complete a challenge, desire for public acknowledgement and the seeking of titles and awards, compulsive exercising, and the development of neuroses and psychoses, all of which could be used to diagnose the presence of ‘unresolved’ complexes, projections and transferences. Importantly for the development of neurasthenia (and chronic fatigue), Jung and Freud thought that there was an ‘energy cost’ to maintaining repressions and their associated complexes – Freud defined drives as the ‘psychical representative of the stimuli originating within the organism and reaching the mind, as a measure of the demand made for work in consequence of its connection to the body’ – and this energy cost eventually leads to a ‘breaking down of the will’ through the constant ‘fighting’ to keep hidden what was painful and did not want to ‘come out’, and this breakdown of the will / ‘mental exhaustion’ led to the signs and symptoms described above, which could in a circular way be used to diagnose the presence of the underlying disorders. In a positive final observation, both Jung and Adler thought that the psyche was self-regulating, and that the development of these symptoms was purposive, an attempt to ‘self-cure’ by compensation; by bringing the destructive repressions, which exist at a subconscious level and so are not directly perceived by the folk who have them, to their attention, or at least to that of their clinician or therapist, it would eventually lead to cure, or at least to ‘individuation’ and acknowledgement of the underlying issues, which to therapists of that era was the start of the cure.

Therefore, in this ‘id and ego’ model developed by Freud, Jung and their colleagues all those years ago, symptoms of chronic fatigue and burnout may be the psyche’s way of creating knowledge of, and thereby attempting to cure, latent psychic drives which lead to obsessive work or sporting goals and activity, created by past psychological trauma and a resultant ‘weak ego’, and which result in chronic fatigue when the psyche cannot ‘cope’ with ‘fighting’ these often unperceived issues for a long period of time, or for the life period up to the point of collapse. Interestingly, while these theories have been mostly long forgotten or have fallen into disfavour, there has recently been renewed interest in the concept that mental and physical ‘energy’ is a finite commodity, with psychologist Roy Baumeister’s theory of ‘ego depletion’ gaining much traction recently. This theory suggests that a number of disorders of ‘self-regulation’, such as alcohol addiction, eating disorders and obesity, lack of exercise or excessive exercise, gambling problems, and the inability to save money leading to personal debt, may be related to using up one’s ‘store of energy’ resisting the ‘deep’ urges which lead to these life imbalances, until eventually willpower decreases to a level where one cannot resist ‘doing’ them, or cannot raise the effort to continue resisting the desire to act out one’s wishes. In Baumeister’s own words, a tempting impulse may have some degree of strength, and so, to overcome it, the self must have a greater amount of strength, which can eventually be worn down or overcome, leading to adverse lifestyle choices in this ‘impaired mental energy state’. All lifestyle diseases and disorders may in his model therefore be related to an insufficiency of self-regulatory capacity, and there is an energy cost to resisting the ‘urges’ that lead to poor lifestyle choices, which may with time lead to either acute mental or physical fatigue, or in extreme cases to the development of chronic fatigue. As with most contemporary psychology, the underlying reasons for such eventual failure of self-regulation were not examined by Baumeister to the depth that they were by Freud, Jung and colleagues, perhaps because so much of Freud, Jung and Adler’s theories is difficult to prove or disprove, and psychology and psychiatry have therefore in the last few decades ‘turned against’ their theories and embraced neuroscience as having the best chance of understanding the mechanisms underpinning self-regulation or the lack of it, though neuroscience is currently far too ‘weak’ a discipline methodologically to be able to do so. Having said this, it is surely important that folk like Roy Baumeister are re-breaking such ground, and our understanding of complex disorders such as CFS, and others such as fibromyalgia, which are also complex diagnostic dilemmas, is enhanced by the insight that mental energy ‘ego’ depletion may play a part in them.
Sadly, there is evidence (described by Tracie Afifi and colleagues at the Universities of Manitoba and McMaster) that folk who suffered physical or sexual abuse in childhood, or were exposed to between-parent physical violence at a young age, have an increased association with a number of chronic physical conditions (including arthritis, back problems, high blood pressure, migraine headaches, cancer, stroke, bowel disease, and significantly also CFS), and also a reduced self-perceived general health in adulthood, all of which would support the ‘ego and id’ psychopathology development theories of Freud and Jung to a degree, though of course not all folk who develop CFS have such childhood trauma issues.

Like the definitions of neurasthenia and CFS, perhaps our understanding of their ‘deep causes’ is also moving in a ‘full circle’, and our knowledge of the underlying causes of CFS, if it does not have a specific organic or viral / toxic cause, needs to reconsider those basic concepts proposed by Jung, Freud and Adler more than one hundred years ago, which currently appear to be re-emerging in a ‘repackaged’ version in the theories of Baumeister and his contemporaries. Perhaps the drive to keep on exercising that we found in all those athletes we examined in our studies at the University of Cape Town all those years ago was the key factor in the cause of their chronic fatigue, and was an ‘external’ manifestation of issues that they were not even aware of. We did not know enough about the subject back then even to ask them about it when we were trying to understand the causation of their symptoms. Perhaps a major component of CFS is mental exhaustion associated with continuously ‘fighting’ underlying past psychological trauma that the folk suffering from it are not even aware of, or at least this is part of the cause of the symptom complex along with other more organic or infective causes. Of course, describing a disorder as either neurological or psychiatric is reductive, and indeed dualistic, and surely similar physical brain neural mechanisms, which we just cannot currently comprehend with the research techniques available, underpin both ‘neurologic’ and ‘psychological’ disorders. One has to try to understand the reasons why one is ‘driven’ to do anything, particularly as one gets older and one’s physical (and perhaps mental) resources diminish and need to be ‘husbanded’ more carefully, though paradoxically CFS is a disorder which most often first afflicts folk in their early twenties, and often ‘burns out’ / attenuates with increasing age, perhaps because part of growing older is often about understanding one’s issues to a greater degree, dealing with them, and living more ‘within one’s means’ materially, socially, physically, mentally and spiritually (although for some folk such learning never occurs). Aging may therefore be curative or protective from a CFS perspective (or one may die of ‘collateral damage’ such as heart attacks rather than developing CFS as a result of chronic stress related to unfulfilled drives).

Fatigue as a symptom is surely the body and mind ‘telling us’ that something is not ‘right’ and that we need to rest – either acutely when we are doing sport, or chronically when we are ‘fighting’ something we do not understand or are not aware of. The challenge for us is not just to rest, but to try to understand why we so often resist resting (well, those of us with complexes, rather than those of us who are completely self-actuated and do not have stress or drives), and why life balance is so hard for many folk to find. The need (or unwanted requirement) for a prolonged rest / period of avoidance of one’s routine life / a ‘long sleep’ is perhaps often the last resort of those who are chronically fatigued, and is nature’s way of ‘telling’ folk that they have ‘run out’ of responsive resources, and that healing will not happen without it, though the healing may paradoxically be not of the fatigue itself, but of its underlying ‘deep’ causes. Now that I am finished with this, it’s time to rest, and to ponder what caused the need to write it in the first place, and why I have spent my Easter holiday period preparing for its writing, ‘stoking the creative demon’ which never rests and which surely eventually damages one even as it creates, rather than just sitting in a coffee shop watching the world go by and thinking of nothing but how nice the next sip of coffee is sure to be. Demons of the past, away with you, before you lead to permanent mental and even physical damage, and tire folk out in the process!


Death And Our Own Dying – How Death-Related Anxiety And The Understanding Of Its Certainty Pattern Our Daily Life

In the last few weeks our family has had to come to terms with the fact that the health of our wonderful dog, Grauzer the Schnauzer, who has been our faithful companion for eleven years, and has travelled with us from Cape Town in South Africa to Newcastle upon Tyne in the UK, and then back to Bloemfontein in South Africa, is failing, and as much as we don’t want it to happen and hope he lives on for a few years more, it is very likely that in the next few weeks he will be taking the journey to the next world of unlimited lamp-posts, and of cats and ferrets that do not move as quickly as they do in this one. My old friend and world-leading Sport and Exercise Scientist, Professor Andy Jones, last week retweeted some fascinating data on what folk die from at different ages of their lifetimes, and it was fascinating to see this and understand that one had got ‘safely past’ some of the childhood and early-adult-related causes of death, but that equally, now being well into middle age, a whole host of nasty causes of death could potentially be one’s fate at any time from now into the future. My great brother, John, heard the unfortunate news that one of his school classmates had passed away of natural causes at the age of forty-seven, and we soberly reflected over the Christmas period that ‘there but for the grace of god went we’, and we resolved to pay more attention to our health and fitness, on the chance that this would make a difference and prolong for as long as possible into the future the inevitable fate which awaits all of us. All of this got me thinking about death and dying, the biggest mystery of life, and perhaps the biggest factor at play in our lives and our consideration of our future.

Death is defined as the final cessation of vital functions in an individual or organism, resulting in the ending of life. One’s death can be the result of a number of different phenomena, from senescence (biological ageing, or in more common terms, old age), to disease, violence and murder, predation by wild animals, accidents, suicide, and any number of other mechanisms. While exactly how to define and diagnose the occurrence of death is still debated in medical circles, generally most folk would accept that someone has died when their heart and respiratory organs stop working and cannot be sustained without external artificial assistance, along with evidence of brain death as shown by a ‘flat-line’ EEG (which monitors the presence of rhythmical brain waves) and a lack of cortical function or primitive brain reflexes. When this happens, the body of the person or organism starts decaying and decomposing shortly after the onset of death. Interestingly, not all ‘living’ organisms die (the definition of what constitutes a ‘living’ organism is still hotly debated), with exceptions being the hydra and some jellyfish species, which appear to be immortal and never die, and can maintain their existence ‘forever’ unless they are physically torn asunder. Similarly, organisms which reproduce asexually, and unicellular organisms, also appear to ‘live’ eternally. So one can postulate that death is a ‘by-product’ of a complex cellular structure, where somatic (body) cells are created in a complex arrangement by a combination of some ‘plan’ and some energy form, which allows the occurrence of ‘life’ as we know it, but which decays with time and eventually deteriorates functionally to the degree that ‘death’ occurs.

What happens to us around the time of, and ‘after’, death is of course still a matter of conjecture. A fair bit of research has been done on folk who have had a near-death experience, where they have been clinically ‘brought back from the dead’ after either a heart attack, a near drowning, or accident-related trauma, all of which lead to hypoxia (shortage of oxygen supply) to and of the brain. Most describe a feeling that they are ‘blacking out’ for what is to them an unknown and unpredictable period of time, and an awareness that they are dying, until by chance / good medical practice they are ‘brought back to life’ by resuscitation and other clinical interventions. A lot of these folk also describe a sense of ‘being dead’, a sense of peace, wellbeing and painlessness, an out-of-body experience as if they were ‘floating’ above and ‘watching’ their physical self, a ‘tunnel experience’ of entering darkness via a tunnel of light, reviewing their life in a manner often described as ‘seeing their life history flashing before their eyes’, or seeing ‘beings of light’, all before the absolute darkness / nothingness of unconsciousness (‘death’) occurs, or they ‘return’ to their body as they are resuscitated. Of course all this is first-person / qualitative descriptive information, and is impossible to replicate scientifically, but it is interesting that so many folk describe similar experiences as they ‘die’. We also do not know at all what occurs after this phase, as all these folk were ‘brought back to life’, so we are not aware of what happens ‘next’ as part of the death process. Into this knowledge void folk put their own interpretation of what happens, or will happen to them, when they do die – religious folk would describe, and I guess hope for, some type of ‘heaven’ as the ‘next phase’, or some transcendence or continuation of one’s ‘spirit’ or ‘soul’ into another body or as an entity which exists and ‘drifts’ through the ether eternally – while secular folk would either say they are not sure what happens, or believe that there is nothing ‘after’ death, and that everything just switches off and a blankness / nothingness occurs similar to when one is in a deep sleep. Of course all of these are pure conjecture, and it is for each of us to experience and understand what is ‘ahead’ for us after our own deaths only when it happens, when all shall be made clear, or we will disappear into the eternity of nothingness, and know nothing about it or anything further of our life, past, current or future.

What is for sure is that for most folk, the thought of one’s own imminent or potential mortality causes anxiety (I have never heard anyone say with any sincerity that they really are totally not scared of death and dying, and in those that do say so, it is almost always manifestly evident bravado), often to a morbid degree, where it is known as thanatophobia. Thanatophobia, or death anxiety, is defined as a feeling of dread, apprehension or anxiety when one thinks of the process of dying, or the totality of death, and its impact on one’s own ‘life’, which is all that we know and ‘have’. Death anxiety can be related to the fear of being harmed and the way one will die, or to existential fear that nothing may exist after we die, or to the fear of leaving behind loved ones and the things and processes we believe are reliant on us for their continuation. It has been suggested that folk ‘defend’ themselves against the anxiety they feel about their own death and dying (or that of loved ones) by ‘denial’, which results in a lot of transference, acting out, or ‘covering up’ behaviour, either conscious or subconscious, such as attempting to acquire excessive wealth or power, committing violence against others, breaking rules and life boundaries, or celebrating / living life in a manic way, all of which have an emotional cost, and do not usually attenuate the underlying death anxiety. Interestingly, a century and more ago, most folk used to die in a more ‘open’ way than currently occurs, usually in the comfort (or discomfort) of their homes, surrounded by their loved ones. In contrast, today a greater proportion of folk die in hospitals or hospices, ‘away’ from the ‘visible’ world, and it is usual for most folk never to see someone actually die until their own death is imminent. It has been postulated that this ‘hiding’ of death may paradoxically have created a greater fear of death, because we never ‘see’ or become involved with death, and death therefore becomes an ‘unknown’ entity or occurrence, causing an exacerbated fear due to its unknown nature, rather than just a fear of death itself. It has been suggested that folk with more physical problems, more psychological problems or a ‘lower ego integrity’ (lower self-confidence) suffer from greater death anxiety. Folk like Viktor Frankl have suggested that having some life ‘meaning’, or a sense of peace from achieving life goals, or paradoxically from letting go of life goals, may attenuate the feelings of death anxiety. Supporting this, death anxiety is apparently greatest between the ages of 35 and 65 (and is felt by children as young as five years old), but after 65, again paradoxically, death anxiety appears to decrease, perhaps because after retirement one ‘lets go’ of earthly goals and desires, or reaches a sense of peace regarding one’s life and achievements. Of course there has to be a relationship between goals / desires and death anxiety for this to be true, and it is not clear that such a relationship exists, even if it does seem to be a logical ‘link’.

Folk that ‘give their life away’, whether in combat as part of a perception of national duty, or to save a family member, or in a life-threatening emergency where they react to the situation and are prepared to die to save others, are challenging to understand in relation to the death anxiety and fear of death described above, which most folk would admit to having. Clearly a ‘higher cause’ must be valenced by these folk as more important than their own life, or their lives must be perceived to be meaningless enough to ‘give away’ in such instances. It is difficult to tell which of these (a perception of a higher cause or a meaningless life) is most germane in these different examples, and indeed whether these folk have a fear of death or anxiety about it but continue with their course of action despite feeling such, or whether some cognitive process or learned way of thinking removes this fear / anxiety before they perform their last act of sacrifice or wilful death. Sigmund Freud suggested that there is a death drive in all folk, which opposes the ‘Eros’ drive (lust for life / breeding / survival), and that when folk want to die, or risk their life doing, for example, extreme sports like parachuting or mountain climbing where there is a high chance of death occurring, it is part of some primordial desire to ‘go back’ to some pre-life state, though of course a theory like this is difficult to prove or disprove, as we cannot yet measure ‘drives’ in a direct way.

So how does knowledge of this anxiety, related to the awareness of death as the final life process we will go through, and indeed of death itself, both affect us and assist us in how we live our life? We do seem to either consciously or unconsciously create a ‘scaffold’ or pattern of our life plans and life stages related to the perceived relative imminence of death. For example, in our twenties we explore ‘life’ mostly with a freedom from the fear of death (though paradoxically this exploratory behaviour often can end in accidental death), perhaps because one believes that one has many years of life ahead, and that death will occur at a time far in the distance. As one enters one’s thirties, one is for some reason confronted to a greater degree by an understanding of one’s mortality, perhaps due to early signs of physical deterioration such as not being able to compete as well as one used to at sport, hair loss / developing baldness, or experiencing the death of one’s parents, which ‘brings’ awareness of both the reality and the finality of death to oneself, amongst many other potential reasons. Because of this one therefore starts ‘planning’ the life left ahead of one based on average mortality figures (most folk believe and hope they will live to between 70 and 80 years if things go well for them) – for example buying a house that will be paid off before one ‘retires’, having children at a young enough age to see them grow up to adulthood, or writing a will for the first time. The concept of retirement is interesting in relation to death and dying, and is surely based on a ‘calculation’ of a death age beyond the retirement age, thereby allowing one to have a little ‘down time’ / a time of peace before shuffling off this mortal coil, even though paradoxically health reasons often do not allow folk as much enjoyment of this time as they would have had if they had instead planned a work ‘gap period’ in their forties or early fifties, in which they took time out from work to relax or travel, and subsequently worked on until death occurred, rather than waiting until being ‘old’ to enjoy retirement in the time left before their death. So a lot of our life appears to be patterned and planned out based on an understanding that a finite number of years is available to us. This is perhaps why, when one has a health scare, or a cancer diagnosis, or when a young person goes to war where the chance of death is manifestly increased at this ‘incorrect’ time of their life, fear of death, death anxiety and denial mechanisms come into play which can be very difficult to attenuate or ‘put out’ of one’s mind.

As much as one would like to, as the main character in the film ‘Lawless’ concluded after his much-revered brother, who he thought was immortal, died at the end of the film, no-one leaves this world alive. Understanding this creates a sense of anxiety in us (unless we perhaps have strong religious beliefs), both for what we will lose, and for what we will leave behind. But, paradoxically, the thought of death perhaps also creates a sense of wonder each day we wake up that we are indeed alive for another day, and makes the grass seem greener, the sun shine brighter, and the water seem wetter, given that we know that one day we will no longer ‘have’ all these things around us. Once in my youth I capsized when paddling down a river in my kayak and was pinned under a rock for a period of time, and had that ‘out of body’ feeling described above, and my whole life to that point played out in a fast sequential ‘movie’ in front of my eyes, and then I felt everything go black and remembered nothing more. Fortunately I was ‘let go’ / washed out from under the rock, and when I regained my senses everything did indeed seem much sweeter, lusher, brighter, and more brilliant, and does still to this day. I am of the age when, according to the statistics, I should most fear death, and indeed, with a young family, each day I do fear that I will not see my son and daughter grow up if I die suddenly. I held my wonderful dog Grauzer in my arms as I brought him home from the vet this morning with the news that I might not be able to hold him like this for much longer, and a feeling of immense sadness and impending loss almost overwhelmed me. But then I thought about the good times we have had together, and understood that the circle of life, which for him is nearly complete, was and is a full and happy one, and I understood also that part of my sadness for him is my fear for my own mortality and the sense of permanence that accompanies his impending death. I took note of the fact that, as described above, at the end of one’s life the fear of death is usually paradoxically attenuated and lessens, and hoped that dogs reach that similar point of peace at the end of their time too. And yes, his fur does feel softer, and his wagging tail and uplifted ‘happy’ face each time he sees me seem even ‘sweeter’, given that I know that soon he will go forever into the great unknown, and will be with us no more. Death and dying is still the greatest mystery life has for us, and a challenge we all have to go through on our own, and we will only gain the knowledge of what death is ‘about’ when we go through the dying process ourselves. When eventually facing one’s own imminent death, perhaps the best one can do is try to find the courage to meet it ‘head on’, as suggested in the wonderful words of the song ‘Sgt. MacKenzie’, written by Joseph Kilna MacKenzie in homage to his grandfather who died in the First World War – ‘Lay me down in the cold, cold ground, where before many men have gone. When they come, I will stand my ground, and not be afraid’ – though of course we all hope that the need to do this will occur many years from now, with all our loved ones around us, and with the contentment of a life well lived in our final moments. But we can be sure of one thing, and that is that we will never get out of this world alive, unless we are an amoeba or a jellyfish.
And maybe, just maybe, the world is a better place because of this, or at least it feels such in those moments when we ponder the glory of life, with the aching awareness that at some future point in time we will no longer be ‘in it’, and will go off on our own journey, alone, into the great big, wide, unknown.


Low Carb High Fat Banting Diets And Appetite Regulation – A Research Area Of Complex Causation Appears To Have Brought Out A Veritable Mad Hatter’s Tea Party Ensemble

Perhaps one of the most astonishing things I have read in my career to date was a recent Tweet apparently written by my own previous lab boss from my University of Cape Town days, now many years ago, Professor Tim Noakes. The text of this tweet included ‘Hitler was vegetarian, Wellington (Beef), Napoleon insulin resistant – Did LCHF determine future of Europe’. Tim has, in the last few years, endorsed the Low Carb / High Fat (LCHF) ‘Banting’ Diet as the salvation and ‘holy grail’ of healthy living and longevity, and appears to have recommended that everyone from athletes to children should follow the diet. As part of this diet, if I have heard / read him correctly, sugar (carbohydrate) is the ‘great evil’ and has an addictive capacity, our ancestors lived on a diet high in fat and low in carbohydrates and were as a result, according to Tim, healthier than us contemporary folk, and our current diabetes and obesity epidemics are linked to an increased intake of sugar (but not of fats or proteins, nor simply to an absolute increase in caloric intake / portion size) in the last few decades, related to a variety of factors. All this has been astonishing to me, given that for many years when I worked in Tim’s lab, he was a strong proponent of carbohydrates / sugars as the ‘ultimate fuel source’ and wrote extensively on this, and we did a number of trials examining the potential benefits of carbohydrates which were funded by sugar / carbohydrate producing companies. While anyone can have a paradigm shift, this is one of great proportions, and given that I worked closely with Tim for a number of years (we have co-authored more than 50 research papers together, mostly in the field of activity regulation mechanisms), I have found this one, and some of the statements like the Tweet above, to be, put conservatively, astonishing. So perhaps it would be interesting to look at some of the points raised by the folk who champion the LCHF diet, and at whether they have any veracity.

Firstly, one of the basic tenets of the diet is that our ancestors in pre-historic times ate a LCHF diet and were healthier because of it. Of course it is almost impossible to say with any clarity what folk ate beyond a few generations back, given that for the period since writing began we have to rely on folk’s written observations of what they ate, and before that on absolutely no empirical evidence at all, apart from sociological speculation. The obvious counter-argument is that life span has increased dramatically in the last few centuries, so, while mortality rates are always multifactorial, it is clearly difficult to accept that a diet used in the ancient past was beneficial, or that folk were healthier or leaner in pre-historic days, when folk died so much younger then than they do today. As pointed out by Professor Johan Koeslag in my medical training days, based on the Venus of Willendorf, a figurine created in 24000-22000 BC which depicts an obese female, it is as likely that folk back then were obese as it is that they were thin. But the point is that to make any argument based on hypotheses of what was done in ancient times is specious, as we just cannot tell with any certainty what folk ate then, and it is likely that folk in ancient times ate whatever they could find, whether animal or plant based, in order to survive.

Based on this ‘caveman’ ideal, nebulous as it is, the LCHF proponents have suggested that it is more ‘natural’ for the body to ‘run’ on a low carbohydrate diet, and Tim has suggested that athletes will perform better on a LCHF diet. But perhaps one of the best studies negating this concept was performed by my old friend and colleague, Dr Julia Goedecke, on which both Tim and I were co-authors. Julia looked at which fuels folk naturally ‘burnt’ as part of their metabolic profile, and found that some folk were preferential ‘fat burners’ (and would perhaps do well on a high fat diet), some were preferential ‘carbohydrate burners’ (and would perhaps do best on a high carbohydrate diet), but the large majority of folk were ‘in between’, burning a combination of both carbohydrates and fats as their selected fuel. If you are a ‘fat burner’ and eat a lot of carbohydrates you may run into ‘trouble’, just as you may if you are a ‘carbohydrate burner’ and eat a lot of fats; but again, most folk ‘burn’ a combination of both, and the obvious inference is that most folk would do best on a balanced diet (though of course without huge lifelong cohort studies one cannot say what ‘trouble’, health-wise, either group would actually run into).
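For interested readers, here is a minimal sketch (in Python) of the kind of calculation that underlies such ‘fuel burner’ classifications. Resting substrate use is commonly estimated from the respiratory exchange ratio (RER, the ratio of carbon dioxide produced to oxygen consumed), which runs from roughly 0.70 for pure fat oxidation to 1.00 for pure carbohydrate oxidation. To be clear, the cut-off values and labels below are my own illustrative choices, not those used in Julia’s study.

```python
def fat_fraction(rer: float) -> float:
    """Approximate fraction of energy derived from fat, by linear
    interpolation between RER 0.70 (all fat) and 1.00 (all carbohydrate)."""
    rer = min(max(rer, 0.70), 1.00)  # clamp to the nonprotein RER range
    return (1.00 - rer) / (1.00 - 0.70)


def classify(rer: float) -> str:
    """Label a resting RER as a fat, carbohydrate, or mixed fuel 'burner'.
    The 0.80 and 0.90 cut-offs are arbitrary illustrative choices."""
    if rer < 0.80:
        return "preferential fat burner"
    if rer > 0.90:
        return "preferential carbohydrate burner"
    return "mixed fuel burner"


for rer in (0.72, 0.85, 0.98):
    print(f"RER {rer:.2f}: {classify(rer)}, "
          f"~{fat_fraction(rer):.0%} of energy from fat")
```

Whatever exact cut-offs one chooses, the broader point stands: on a continuum like this most folk sit somewhere in the middle, which is what Julia found.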

It has also been suggested by Tim and the LCHF proponents that sugars / carbohydrates are highly addictive, and that it is specifically the ingestion of this particular food source that has led to the increased levels of obesity and health disorders such as type 2 diabetes seen in the last few decades. But absolute caloric intake has also increased over the last few decades, so a simple increase in portion sizes and overall food ingestion should surely be a prime suspect for the increased levels of obesity described. It is likely, too, that high fat foods are potentially as ‘addictive’ as sugars / carbohydrates, if either indeed is, and folk may just as plausibly be addicted to eating per se, rather than to one specific type of the food they eat. The causes of increased appetite and the sensation of hunger are an incredibly complex field – a hundred years ago it was apparently suggested that hunger is stimulated when the walls of an empty stomach rub against each other. We have more understanding of these processes now (though still a lot to learn), and the signals controlling hunger are incredibly complex, including hormonal signallers such as ghrelin (released mainly from the stomach, which stimulates eating-focussed behaviour) and leptin (released from adipose tissue, which suppresses it), which act on the brain (principally the hypothalamus) and are responsive to a wide variety of ingested food types. To suggest that one type of food, and addiction to it, is the cause of obesity is manifestly absurd, given how many other factors could be involved in eating patterns and food choices – for example the social aspect of eating, the community eating habits of different populations, and the psychological needs and issues associated with eating that go beyond simple fuel requirements and fuel dynamics, let alone the genetics and innate predisposition to obesity and an obese somatotype that some folk inherit from their parents. Note also that weight gain is not just related to single episodes of food ingestion: some fantastic work by old colleagues from my time at Northumbria University, Dr Penny Rumbold, Dr Caroline Reynolds and Professor Emma Stevenson, amongst others, has shown that eating habits and weight gain are monitored and adjusted over long time periods in an incredibly complex way, by mechanisms that are not well understood, and it is in understanding these long term regulatory mechanisms, rather than in ‘blaming’ one specific food group and its marketing to the public, that the changes in weight we see both in individuals and societies over time will surely be best understood. As my old (and much respected) academic ‘sparring partner’, Dr Samuele Marcora, has pointed out to me, both low carbohydrate and low fat diets can successfully initiate weight loss – but equally, both are very difficult to maintain (as are all diets) – one so often ‘falls off’ a diet because these inherent, complex food intake regulatory mechanisms are pretty ‘strong’ and perhaps difficult to change.
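To make the arithmetic of this long-term regulation point concrete, here is a deliberately crude back-of-envelope simulation – my own toy illustration, emphatically not a model from Penny, Caroline and Emma’s work. It compares the naive ‘every surplus calorie is banked’ calculation with a version in which each kilogram gained feeds back to trim the daily surplus. The figure of roughly 7,700 kcal per kilogram of body tissue is the usual rough rule of thumb, and the feedback strength of 50 kcal per kilogram is an arbitrary assumption.

```python
KCAL_PER_KG = 7700.0  # rough rule-of-thumb energy content of 1 kg of body tissue

def simulate(days: int, surplus: float = 300.0,
             kcal_per_kg_gained: float = 0.0) -> float:
    """Weight change (kg) after `days` of an initial daily caloric surplus.
    `kcal_per_kg_gained` is a crude regulatory feedback: each kg gained
    trims the effective daily surplus by that many kcal (0 = no feedback)."""
    gain = 0.0
    for _ in range(days):
        effective_surplus = surplus - kcal_per_kg_gained * gain
        gain += effective_surplus / KCAL_PER_KG
    return gain

# Naive 'every surplus calorie is banked' arithmetic vs. a regulated version:
print(f"No feedback, one year:   {simulate(365):.1f} kg gained")  # ~14.2 kg
print(f"With feedback, one year: "
      f"{simulate(365, kcal_per_kg_gained=50.0):.1f} kg gained")  # ~5.4 kg
```

Even this crude feedback more than halves the predicted one-year gain, which is one small illustration of why single meals, or single food types, are unlikely to explain weight change on their own.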

One of the most controversial issues is the effect of LCHF / Banting diets on either optimising or damaging health, and the jury is still very much out on this, and will be until we have long term morbidity and mortality statistics from big cohorts of folk who have followed LCHF diets for prolonged periods. There are a lot of studies showing that eating too many carbohydrates increases morbidity and has a negative effect on health. But there are also a lot of studies showing that a high fat intake has a negative effect on one’s health. The same holds for high caloric diets, and similar increases in morbidity are found in diets deficient in one food type, or indeed in very low caloric diets. So it is difficult to get a clear picture from scientific studies of exactly which diet works or is optimal – my ‘gut feel’, to excuse the pun, is that a prudent, balanced diet will surely offer the best alternative, though with the rider, evident from Julia’s study, that some folk will do better on a diet with a higher carbohydrate percentage, and some with a higher fat percentage. There are some other interesting confounding issues, such as what is known as the survival (or obesity) paradox, where folk with moderate levels of obesity do ‘better’ than their thinner counterparts in some age-related disease mortality rates – particularly, apparently, in folk over 70 years of age, in whom obesity may paradoxically become protective rather than pathological. The point has also been raised that levels of appetite and body image disorders (such as anorexia nervosa, bulimia and muscle dysmorphia, amongst others) have increased in the last few decades too, and while the genesis of these appetite-related disorders is also incredibly complex, diets such as LCHF, like many other rigidly defined diets with specific eating requirements, may be propagating the capacity for such disorders to flourish; indeed, a number of the ‘zealots’ who ‘convert’ to such diets and stick to them ‘through thick and thin’ may have appetite-related disorders and be able to ‘use’ the camouflage of sticking to a LCHF diet to ‘mask’ a latent eating disorder. I can’t comment on the veracity of this suggestion without seeing more research on it, but my ‘gut feel’, again, is that there may be something to it.

Eating patterns and dietary choices, and their relationship to health, are surely among the most complex and multifactorial areas of research in all of science. Because of this it is very hard to do good science that gives a clear indication of the ‘best’ diet or eating pattern for any one person: most studies in the field concentrate on one food type, or on one outcome of ingesting a specific food type, and draw well-intentioned conclusions that always succumb to the complexity of the human and social dynamics associated with what and how much folk eat, which is perhaps impossible ever to reduce to a single laboratory or even field-based experimental protocol. Because of this (and because people need to eat daily to survive, so in effect everyone is a ‘captive audience’ for information), it is a field susceptible to anyone ‘getting up on a soap-box’ and putting their ‘five cents’ into the debate, and with the modern communication methods available to us, such as blogs and social media channels, these opinions can spread rapidly and be taken as ‘gospel’ in a very short period of time. When someone whom I respect as much as Tim Noakes, and with whom I have published so prodigiously as a co-author in the past (though not in the field of LCHF / Banting diets), starts ‘banging off’ with tweets such as the one above, about the future of Europe potentially being determined by whether folk ate a LCHF diet or not (part of me is sure that Tim, if he did write this, did so in jest, or that it was written as a ‘spoof’, as it is such a ‘left field’ post), I do wonder whether the field of nutrition, and those interested in it, has become something of a ‘Mad Hatter’s tea party’ (though of course I have great respect for the large majority of my nutritionist colleagues). Surely, like all diets, the LCHF / Banting diet will fade away, as people find it hard to stick to, as a new diet fad is announced and takes its place, and as science ‘chips away’ at some of the astonishing claims made for it by its proponents. Surely, in the end, a balanced diet, like a balanced anything, will ultimately prevail as the diet ‘champion’. Until then, March Hare or Mad Hatter, whoever of you is pouring the tea, can I please have two spoons of sugar in mine. If that prevents me from ruling Europe, or dominating the world, so be it!

