Category Archives: Social

Contemporary Medical Training And Societal Medical Requirements – How Does One Balance The Manifest Need for General Practitioners With Modern Super-Specialist Trends

For all of my career, since starting as a medical student at the University of Cape Town as an 18-year-old fresh out of school many years ago, I have been involved in the medical and health provision and training world, and have had a wonderful career first as a clinician, then as a research scientist, then in the last number of years managing and leading health science and medical school research and training. Because of this background and career, I have always pondered long and hard about what makes a good clinician, what is the best training to make a good clinician, how we define what a ‘good’ clinician is, and how we best align the skills of the clinicians we train with the needs and requirements of the country’s social and health environments in which they are trained. A few weeks ago I had a health scare which was treated rapidly and successfully by a super-specialist cardiologist, and I was home the day after the intervention, and ‘hale and hearty’ a few days after the procedure. If I had lived 50 years ago, and it had happened then, in the absence of modern high-tech equipment and the super-specialist’s skills, I would probably have died a slow and uncomfortable death, treated with drugs of doubtful efficacy that would not have benefited me much, let alone treated the condition I was suffering from. Conversely, despite my great respect for these super-specialist skills which helped me so successfully a few weeks ago, it has become increasingly obvious that this great success in clinical specialist training has come at the cost of reduced emphasis on general practitioner-focused training, and a reduction in the number of medical students choosing general practitioner work as a career after they qualify, which has caused problems for clinical service delivery in a number of countries, particularly in rural areas, and has paradoxically put greater strain on specialist services despite their pre-eminence in contemporary clinical practice in most countries around the world. My own experience grappling with this problem of how to increase the number of general practitioners produced by our training programs, as a Head of a School of Medicine previously, and this recent health scare which was treated so successfully by super-specialist intervention, got me thinking about how best we can manage the contradictory requirements of the need for both general practitioners and specialists in contemporary society, and whether this conundrum is best managed by medical schools, health and hospital management boards, or government-led strategic development planning initiatives.

It is perhaps not surprising, given the exponential development of technological innovations that originated in the industrial revolution and which changed how we live, that medical work also changed and became more technologically focused, which in turn required both increased time and increased specialization of clinical training to utilize these developing technologies, such as surgical, radiological investigative and laboratory-based diagnostic techniques. The hospital (Groote Schuur) and medical school (University of Cape Town) where I was trained were famous for the achievements of Professor Chris Barnard and his team’s work performing the first heart transplant there, using a host of advanced surgical techniques, heart-lung machines to keep the patients alive without a heart for a brief period of time, and state-of-the-art immunosuppression techniques to prevent rejection of the transplanted heart, all specialist techniques he and his team took many years to master in some great medical schools and hospitals in the USA. Perhaps in part because of this, our training was very ‘high-tech’, consisting of early years spent learning basic anatomy, physiology and pathology-based science, and then later years spent in surgical, medical, and other clinical specialty wards, mostly watching and learning from observation of clinical specialists going about their business treating patients. If I remember correctly, there were only a few weeks of community-based clinical educational learning, very little integrative ‘holistic’ patient-based learning, and almost no ‘soft-skill’ training, such as optimal communication with patients, working as part of a team with other health care workers such as nurses and physiotherapists, or learning to help patients in their daily home environment and social infrastructure. There was also almost no training whatsoever in the benefits of ‘exercise as medicine’, or in the concept of wellness (where one focuses on keeping folk healthy before they get ill, rather than dealing with the consequences of illness). This type of ‘specialist-focused’ training was common, particularly in Western countries, for most of the last fifty or so years, and as a typical product of this specialist training system, I chose first clinical research and then basic research rather than more patient-focused work as my career choice, and a number of my colleagues from my University of Cape Town medical training class of 1990 have had superb careers as super-specialists in top clinical institutions and hospitals all around the world.

This increasing specialization of clinical training and practice, such as the example of my own medical training described above, has unfortunately had a negative impact on both general practitioner numbers and primary care capacity. A general practitioner (GP) is defined as a medical doctor who treats acute and chronic illnesses and provides preventative care and health education to patients, and who has a holistic approach to clinical practice that takes biological, social and psychological factors into consideration when treating patients. Primary care is defined as the day-to-day healthcare of patients and communities, with the primary care providers (GPs, nurses, health associates or social workers, amongst others) usually being the first contact point for patients, referring patients on to specialist care (in secondary or tertiary care hospitals), and coordinating and managing the long-term treatment of patient health after discharge from either secondary or tertiary care if it is needed. In the ‘old days’, GPs used to work in their community, often the one where they were born and raised, worked 24 hours a day as needed, and maintained their relationships with their patients through most or all of their lives. Unfortunately, for a variety of reasons, GP work has changed: GPs now often work set hours, patients are rotated through different GPs in a practice, the number of graduating doctors choosing to be GPs is diminishing, and as a result there is an increasing shortage of GPs in communities, and particularly rural areas, of most countries. Sadly, GP work is often regarded as being of lower prestige than specialist work, the pay for GPs has often been lower than that of specialists, and with the decreased absolute number of GPs, the work burden on many GPs has increased (and paradoxically, with computers and electronic facilities, the note-taking and record-keeping requirements of GPs appear to have increased rather than decreased), leading to increased levels of burnout and to GPs turning to other clinical roles or leaving the medical profession completely, which exacerbates the GP shortage problem in a circular manner. Training of GPs has also evolved into specialty-type training, with doctors having to spend 3-5 years ‘specializing’ as a GP (often today called Family Practitioners or Community Health Doctors), and this has also paradoxically put folk off a GP career, and lengthened the time required before folk intent on becoming GPs can do so and become board certified / capable of entering or starting a clinical GP practice. As the number of GPs decreases, more folk go directly to hospital casualty departments as their first ‘port of call’ when ill, and this puts a greater burden on hospitals, which somewhat ironically also creates an increased burden on specialists, who mostly work in such hospitals, and who end up seeing more of these folk who could often be treated very capably by GPs. This paradoxically allows specialists less time to do the specialist and super-specialist roles they spent so many years training for, with the result that waiting lists and times for ‘cold’ (non-emergency) cases increase, and hospital patient care suffers due to patient volume overload.

At a number of levels of strategic management of medical training and physician supply planning, there have been moves to counter this super-specialist focus of training and to encourage folk to consider GP training as an appealing career option. The Royal College of Physicians and Surgeons of Canada produced a strategic clinical training document (known as the ‘CanMeds’ training charter) which emphasizes that rather than just training pure clinical skills, contemporary training of clinical doctors should aim to create graduates who are not only medical experts, but also communicators, collaborators, managers, health advocates, scholars and professionals – in other words a far more ‘gestalt’ and ‘holistically’ trained medical graduate. This CanMeds document has created ‘waves’ in the medical training community, and is now used by many medical schools around the world as their training ‘template’. Timothy Smith, senior staff writer for the American Medical Association, published an interesting article recently in which he suggested that similar changes were occurring in the top medical schools in the USA, with clinical training including earlier exposure to patient care, more focus on health systems and sciences (including wellness and ‘exercise is medicine’ programs), shorter time to training completion and increased emphasis on using new communication technologies more effectively as part of training. In my last role as Head of the School of Medicine at the University of the Free State, working with Faculty Dean Professor Gert Van Zyl, Medical Program Director Dr Lynette Van Der Merwe, Head of Family Medicine Professor Nathanial Mofolo, Professor Hanneke Brits, Dr Dirk Hagemeister, and a host of other great clinicians and administrators working at the University or the Free State Department of Health, the focus of the training program was shifted to include a greater degree of community-based education as a ‘spine’ of training rather than as a two-week block in isolation, along with a greater degree of inter-professional education (working with nurses, physiotherapists, and other allied health workers in teams as part of training, to learn to treat a patient in their ‘entirety’ rather than as just a single clinical ‘problem’), and increased training of ‘soft skills’ that would assist medical graduates not only with optimal long-term patient care, but also with skills such as financial and business management capacity so that they would be able to run practices optimally, or at least know when to call in experts to assist them with non-clinical work requirements, amongst a host of other innovative changes. We, like many other Universities, also realized that it was important to try and recruit medical students who grew up in the local communities around the medical school, and to encourage as many of these locally based students as possible to apply for medical training, though of course selection of medical students is always a ‘hornet’s nest’, and it is very challenging to get the balance right between the marks, essential skills and community needs of the many thousands of aspirant clinicians who wish to study medicine when so few places are available to offer them.

All of these medical training initiatives to try and initiate changes to what has become a potentially ‘skewed’ training system, as described above, are of course ‘straw in the wind’ without government backing and good strategic planning and communication by country-wide health boards, medical professional councils, and hospital administrators who manage staffing appointments and recruitment. As much as one needs to change the ‘focus’ and skills of medical graduates, the health structures of a country need to be similarly changed to be ‘focused’ on community needs and requirements, and aligned with the medical training program initiatives, for the changes to be beneficial and to succeed. Such training program changes and community-based intervention initiatives have substantial associated costs which need to be funded, and therefore there is a large political component to both clinical training and health provision. In order to strategically improve the status quo, governments can choose either to encourage existing medical schools to increase student numbers and encourage statutory clinical training bodies to enact changes to the required medical curriculum to make it more GP-focused, or to build more medical schools to generate a greater number of potential GPs. They can also pay GPs higher salaries, particularly if they work in rural communities, or ensure better conditions of service and increased numbers of allied health practitioners and health assistants to lighten the stress placed on GPs, in order to ensure that optimal community clinical facilities and health care provision are in place. But how this is enacted is always challenging, given that different political parties usually have different visions and strategies for health, and changes occur each time a new political party is elected, which often ‘hinders’ rather than ‘enacts’ required health-related legislation, or, as in the case of contemporary USA politics, attempts to rescind previous change-related healthcare acts if they were enacted by an opposition political party. There is also competition between Universities which have medical schools for increases in medical places in their programs (which result in more funding flowing into the Universities if they take more students), and of course any University that wishes to open a new medical school (as my current employers, the University of Waikato, wish to do, having developed an exciting new community-focused medical school strategic plan that fulfills all the criteria of what a contemporary GP-focused training program should be, and that will surely become an exemplary new medical school if the plan is approved by the government) is regarded as competition for resources by those Universities which already run medical training programs and medical schools. Because of these competition-related and political issues, many major health-related change initiatives for both medical training programs and the related community and state structural training requirements are extremely challenging to enact, and this is why so many planned changes become ‘bogged down’ by factional lobbying either before they start or when they are being enacted.
This is often disastrous for health provision and training, as chaos ensues when a ‘half-changed’ system becomes ‘stuck’, or a new political regime or health authority attempts to impose further, often ‘half-baked’, changes on the already ‘half-changed’ system, resulting in an almost unmanageable ‘mess’, which is sadly often the state of many countries’ medical training, physician supply, and health facilities, to the detriment of both the patients and the communities they are meant to serve and support.

The way forward for clinical medical training and physician supply is therefore complex and fraught with challenges. But, having said this, it is clear that changes are needed, and brave folk with visionary thinking and strategic planning capacity are required both to create sound plans that integrate all the required changes across the multiple sectors that are needed for the medical training changes to be able to occur, and to enact them in the presence of opposition and resistance, which is always the case in the highly politicized world of health and medical training. Two good examples of success stories in this field were the changes to the USA health and medical training system which occurred as a result of the Flexner report of 1910, which set out guidelines for medical training throughout the USA and which were actually enacted and came to fruition, and the development of the NHS system in the UK in the late 1940’s, which occurred as a result of the Beveridge report of 1942, which laid out how and why comprehensive, universal and free medical services were required in the UK, and how these were to be created and managed, recommendations which were enacted by Clement Attlee, Aneurin Bevan and other members of the Labour government of that time. Both systems worked for a time, but sadly, due to multiple reasons and perhaps natural system entropy, both of these countries’ health services are currently in a state of relative ‘disrepair’, and it is obvious that major changes to them are again needed, and perhaps an entirely fresh approach to healthcare provision and training, similar to that initiated by the Flexner and Beveridge reports, is required. However, it is challenging to see this happening in contemporary times with the polarized political climate currently prevailing in both countries, and strong and brave health leadership is surely required at this point in time in these countries, as always, in order to initiate the substantial strategic requirements which are needed either to ‘fix’ each system or to create an entirely new model of health provision and training. Each country in the world has different health provision models and medical training systems, which work with varying degrees of success. Cuba is an example of one country that has enacted wholesale GP training and community medicine as the centerpiece of both their training and health provision, though some folk would argue that they have gone too far in this regard in their training, as specialist provision and access is almost non-existent there. Therein lies an important ‘rub’ – clearly there is a need for more GP and community focused medical training. But equally, it is surely important that there is still a strong ‘flow’ of specialists and super-specialists, both to train the GPs in the specific skills of each different discipline of medicine, and to treat those diseases and disorders which require specialist-level technical skills. My own recent health scare exemplifies the ‘yin and yang’ of these conflicting but mutually beneficial / synergistic requirements. If it were not for the presence of a super-specialist with exceptional technical skills, I might not be alive today. Equally, the first person I phoned when I noted concerning symptoms was not a super-specialist, but rather my old friend and highly skilled GP colleague from my medical training days, Dr Chris Douie, who lives close by and who responded to my request for assistance immediately.
Chris got the diagnosis spot on, recommended exactly the appropriate intervention, and sent me on to the required super-specialist, and was there for me not just to give me a clinical diagnosis but also to provide pastoral care – in other words to ‘hold my hand’ and show me the empathy that is so needed by any person when they have an unexpected medical crisis. In short, Chris was brilliant in everything he did as first ‘port of call’, and while I eventually required super-specialist treatment of the actual condition, in his role as GP (and friend) he provided that vital first-phase support and diagnosis, and non-clinical empathic support, which is so needed by folk when they are ill (indeed, historically the local GP was not just everyone’s doctor but also often their friend). My own example therefore emphasizes this dual requirement for both GP and specialist health provision and capacity.

Like most things, medical training and health care provision have, like a pendulum, ‘swung’ between specialist and generalist requirements and pressures in the last century. The contemporary perception, in an almost ‘back to the future’ way, is that we have perhaps become too focused on high-technology clinical skills and training (though as above there will always be a place and need for these), and that we need more of our doctors to be trained to be like their predecessors of many years ago, working out in the community, caring for their patients and creating an enduring life-long relationship with them, and dealing with their problems early and effectively before they become life-threatening, costly to treat, and require the intervention of expensive specialist care. It’s an exciting period of potential world-wide changes in medical training and clinical health provision to communities, and a great time to be involved in either developing the strategy for medical training and health provision and / or enacting it – if the folk involved in doing so are left in peace by the lobby groups, politicians and folk who want to maintain the current unbalanced status quo due to their own self-serving interests. Who knows, maybe even clinicians, like in the old days, will be paid again by their patients with a chicken, or a loaf of freshly baked bread, and goodwill will again be the bond between the community, the folk who live in it, and the doctors and healthcare workers who treat them. And for my old GP friend Chris Douie, who is surely the absolute positive example and role model of the type of doctor we need to be training, a chicken will be heading his way soon from me, in lieu of payment for potentially saving my life, and for doing so in such a kind and empathetic way, as surely any GP worth his or her ‘salt’ would and should do!


Muscle Dysmorphia And The Adonis Complex – Mirror, Mirror On The Wall, Why Am I Not The Biggest Of Them All

I have noticed recently that my wonderful son Luke, who is in his pre-teenage years, has become more ‘aware’ of his body and discusses things like ‘six-pack abs’ and the need to be strong and have big muscles, probably like most boys of his age. I remember an old colleague at the University of the Free State mentioning to me that her son, who was starting his last year at school, and who was a naturally good sports-person, had started supplementing his sport with gym work as he perceived that ‘all boys his age were interested in having big muscles’, as my colleague described it. A few decades ago, my old colleague and friend Mike Lambert, exercise physiologist and scientist without peer, and I did some work researching the effect of anabolic steroid use on bodybuilders, and noted that there were not just physical but also psychological changes in some of the trial participants. I spent a fair amount of time in the gym in my University days, and always wondered why some of the biggest folk in the gym seemed to do their workouts in long pants and tracksuit tops, sometimes with hoods up, even on hot days, and how in conversation with them I was often told that, despite being enormous (muscular rather than obese), they felt that they were small compared to their fellow bodybuilders and weightlifters, and that they needed to work harder and longer in the gym than they were currently doing to get results. All of this got me thinking about the fascinating syndrome known as muscle dysmorphia, also known as the Adonis complex, ‘bigorexia’, or ‘reverse anorexia’, and what causes the syndrome / disorder in the folk that develop it.

Muscle dysmorphia is a disorder mostly affecting males (though females can also be affected) where there is a belief or delusion that one’s body is too small, thin, insufficiently muscular or lean, despite it often being normal or even exceptionally large and muscular, together with obsessional efforts to increase muscularity and muscle mass through weightlifting exercise routines, dietary regimens and supplements, and often anabolic steroid use. This perception of being not muscular enough becomes severely distressing for the folk suffering from the syndrome, and the desire to enhance their muscularity eventually impacts negatively on the sufferer’s daily life, work and social interactions. The symptoms usually begin in early adulthood, and are most prevalent in body-builders, weight-lifters, and strength-based sports participants (up to 50 percent in some bodybuilder population studies, for example). Worryingly, muscle dysmorphia is increasingly being diagnosed in younger / adolescent folk, and across the spectrum of sports participants, and even in young folk who begin lifting weights for aesthetic rather than sport-specific purposes, and who from the start perceive they need to go to the gym to improve their ‘body beautiful’. Two old academic friends of mine, Dave Tod and David Lavallee, published an excellent article on muscle dysmorphia a few years ago, in which they suggested that the diagnostic criteria for the disorder are that the sufferer needs to be preoccupied with the notion that their body is insufficiently lean and muscular, and that the preoccupation needs to cause distress or impairment in social or occupational function, including at least two of the four following criteria: 1) they give up / excuse themselves from social, occupational or recreational activities because of the need to maintain workout and diet schedules; 2) they avoid situations where their bodies may be exposed to others, or ‘endure’ such situations with distress or anxiety; 3) their concerns about their body cause distress or impairment in social, occupational or other areas of their daily functioning; and 4) they continue to exercise and monitor their diet excessively, or use physique-enhancing supplements or drugs such as anabolic steroids, despite knowledge of the potential adverse physical or psychological consequences of these activities. Folk with muscle dysmorphia spend a lot of their time agonizing over their ‘situation’, even if it is in their mind rather than reality, look at their physiques in the mirror often, and are always of the feeling that they are smaller or weaker than they really are, so there is clearly some cognitive dissonance / body image problem occurring in them.

What causes muscle dysmorphia is still not completely known, but what is telling is that it was first observed as a disorder in the late 1980’s and early 1990’s, and was first defined as such by Harrison Pope, Katharine Phillips, Roberto Olivardia and colleagues in a seminal publication of their work on it in 1997. There are no known reports of this disorder from earlier times, and as suggested by these academics, its increasing prevalence appears to be related to a growing social obsession with ‘maleness’ and muscularity that is evident in the media and marketing adverts of and for the ‘ideal’ male in the last few decades. While women have had relentless pressure on them from the concept of increasing ‘thinness’ as the ‘ideal body’ for perhaps a century or longer from a media and marketing perspective, with, for example, the body size of female models and advertised clothes sizes decreasing over the years (and it has been suggested that in part this is responsible for the increase in the prevalence of anorexia nervosa in females), it appears that males are now under the same marketing / media ‘spotlight’, but from a muscularity rather than a ‘thinness’ perspective, with magazines, newspapers and social media often ‘punting’ this muscular ‘body ideal’ for males when selling male-targeted health and beauty products. Some interesting changes have occurred which appear to support this concept, for example the physique of GI-Joe toys for young boys changing completely in the last few decades, apparently being much more muscular in the last decade or two compared to their 1970s prototypes. Matching this change, in 1972 only 15-20 percent of young men disliked their body image, while in 2000 approximately 50 percent of young men disliked their body image. Contemporary young men (though older men may also be becoming increasingly ‘caught up’ in a similar desire for muscularity as contemporary culture puts a premium on the ‘body beautiful’ right through the life cycle) perceive that they would like to have 13 kg more muscle mass on average, and believe that women would prefer them to have 14 kg more muscle mass to be most desirable, though interestingly, when women were asked about this, they were happy with the current muscle mass of their partners, and many were indeed not attracted to heavily-muscled males. Therefore, it appears that social pressure may play a large part in creating an environment where men perceive their bodies in a negative light, and this may in turn lead to the development of a ‘full blown’ muscle dysmorphia syndrome in some folk.

While social pressure appears to play a big role in the development of muscle dysmorphia, other factors have also been suggested to play a part. Muscle dysmorphia is suggested to be associated with, or indeed a sub-type of, the more general body dysmorphic disorder (and anorexia nervosa, though of course anorexia nervosa is about weight loss rather than weight gain), where folk develop a pathological dislike of one or several body parts or components of their appearance, and develop a preoccupation with hiding or attempting to fix their perceived body flaw, often with cosmetic surgery (and this apparently affects up to 3 percent of the population). It has been suggested that both muscle dysmorphia and body dysmorphic disorder may be caused by a problem of ‘somatoperception’ (knowing one’s own body), which may be related to organic lesions or processing issues in the right parietal lobe of the brain, which is suggested to be the important area of the brain for own-body perception and the sense of self. Folk who have lesions of the right parietal cortex may perceive themselves to be ‘outside’ of their body (autoscopy), or that body parts are missing / that there is a lack of awareness of the existence of parts of the body (asomatognosia). Non-organic / psychological factors have also been associated with muscle dysmorphia, apart from media and socio-cultural influences, including being a victim of childhood bullying, being teased about levels of muscularity when young, or being exposed to violence in the family environment. It has also been suggested that muscle dysmorphia is associated with appearance-based rejection sensitivity, which is defined as anxiety-causing expectations of social rejection based on physical appearance – in other words, for some reason, folk with muscle dysmorphia are anxious that they will be socially rejected due to their perceived lack of muscularity and associated appearance deficits. Whether this rejection sensitivity is due to prior negative social interactions, or episodes of childhood teasing or body shaming, has not been well elucidated. Interestingly, while studies have reported inconclusive correlations with body mass index, body fat, height, weight, and pubertal development age, there have been strong correlations reported with mood and anxiety disorders, perfectionism, substance abuse, and eating and exercise-dependence / addiction disorders, as well as with clinical depression and obsessive-compulsive disorder. There does not appear to be a strong relationship to narcissism, which is perhaps surprising. Whether these are co-morbidities or have a common pathophysiology at either a psychological or organic level is yet to be determined. It has been suggested that a combination of cognitive behavioural therapy and selective serotonin reuptake inhibitor prescription (a type of antidepressant) may improve the symptoms of muscle dysmorphia. While these treatment modalities would support a link between muscle dysmorphia and the psychological disorders described above, the efficacy of these treatment choices is still controversial, and there is unfortunately a high relapse rate. It is unfortunately a difficult disorder to ‘cure’, given that all folk need to eat regularly in order to live, and most folk incorporate exercise into their daily routines, which makes managing ‘enough’ but not ‘excessive’ amounts of weightlifting and dietary regulation difficult in folk who have a disordered body image.

Muscle dysmorphia appears therefore to be a growing issue in contemporary society, which is increasing in tandem with the increased media-related marketing drive for the male ‘body beautiful’, which now appears to be operating at a similar level to the ‘drive for thinness’ media marketing which has blighted the female perception of body image for a long time, and has potentially led to an increased incidence of body image disorders such as anorexia nervosa and body dysmorphic syndrome. However, none of these are gender-specific, and it is not clear how much of a relationship these body image disorders have with either organic brain or clinical psychological disorders, as described above. It appears to be a problem mostly in young folk, with older folk being more accepting of their body abnormalities and imperfections, whether these are perceived or real, though sadly it appears that there is a growing incidence of muscle dysmorphia and other body image disorders in older age, as society’s relationship with, and expectations of, ‘old age’ change. As I see my son become more ‘interested’ in his own physique and physical development, which must have been prompted either by discussions with his friends, by what he reads, or by what the ‘actors’ look like in the computer games which he, like all his friends, so enjoys playing, I hope he (and likewise my daughter) will always enjoy his sport but have a healthy self-image through the testing teenage and early adult period of time. I remember those bodybuilders my colleague Mike and I worked with all those years ago, and how some of them were comfortable with their large physiques, while for some it was clearly an ordeal to take off their shirts in order to be tested in the lab as part of the trials we did back then. The mind is very sensitive to suggestion, and it is fascinating to see that males are now being ‘barraged’ with advertising suggesting that they are not good enough, and that if they buy a certain product it will make them stronger, fitter, better, and thus more attractive, to perhaps the same degree that females have been subjected to for a long period of time. The mind is also sensitive to bullying, teasing and body shaming, as well as a host of other social issues which impinge on it, particularly in its childhood and early adolescent development phases. It’s difficult to know where this issue will ‘end’, and whether governmental organizations will ‘crack down’ on such marketing and media hype which surely ‘targets’ folks’ (usually perceived) physical inadequacies or desires, or if it is too late to do so and such media activity has become innate and part of the intrinsic fabric of our daily life and social experience. Perhaps education programs are the way to go at school level, though these are unfortunately often not successful.

There are so many daily challenges one has to deal with that it may seem almost bizarre that folk can spend time worrying about issues that are not even potentially ‘real’, but for the folk staring obsessively at themselves in the mirror, or struggling to stop the intrusive thoughts about their perceived physical shortcomings, these challenges are surely very real, all-consuming, and often overwhelming. In Greek mythology Adonis was a well-muscled half man, half god, who was considered to be the ultimate in masculine beauty, and according to mythology his masculine beauty was so great that he won the love of Aphrodite, the goddess of love and beauty, because of it. Sadly for the folk with muscle dysmorphia, while they may be chasing this ideal, they are likely to be too busy working on creating their own perfect physique to have time to ‘woo’ their own Aphrodite, and indeed, contemporary Aphrodites don’t appear to even appreciate the level of muscularity they eventually attain. The mirror on the wall, as it usually is, is a false siren, beckoning those weak enough to fall into its thrall – no matter how big, never to appear as the biggest or most beautiful of all.


The Core Requirement And Skill Of Decision-Making In Life – Removal Of Uncertainty Is Usually Positive And Cathartic But Is Also An Ephemeral Thing

This week, for the first time since moving to New Zealand and starting a new job here, I cycled in to work, and in the early afternoon faced a tough decision regarding whether I had the level of fitness capacity to cycle back home at the end of the day. Three-quarters of the way through the ride home, I felt very tired and stopped by the side of the road, and considered phoning home and asking them to pick me up. This morning I opened the fridge and had to decide whether to have the routine fruit and yogurt breakfast or the leftover piece of sausage roll. We have been six months in our new life and job here, and have come to that point of deciding whether we have made a good decision and should continue, or whether we have made a disastrous error and need to make a rapid change. As I write this, my wife has asked me if I plan to go to the shop later, and if so whether I could get some milk for the family, and I had to stop writing and decide whether I was indeed going to do so as part of the weekend post-writing chores, or not. All of these activities and issues required me to make decisions, and while some of them appeared to be of little consequence, some of them were potentially life and career changing, and, even if it seems a bit dramatic, potentially life-ending (whether to continue cycling when exhausted as a fifty-something). Decisions like these have to be made by everyone on a minute-by-minute basis as part of their routine daily life. The importance of decision-making in our daily lives, and how we make decisions, is still controversial and not well understood, which is surprising, given how much our optimal living condition, and indeed survival, depends on making correct decisions, and how often we have to make decisions, some of which are simple, some of which appear simple but are complex, and some of which are overtly complex.

Decision-making is defined as the cognitive process (which is the act or process of knowing or perceiving) resulting in the selection of a particular belief or course of action from several alternative possibilities, or as a problem-solving activity terminated by the genesis or arrival of a solution deemed to be satisfactory. At the heart of any decision-making is the requirement to choose between an array of different options, all of which usually have both positive and negative potential attributes and consequences, where one uses prior experience or a system of logical ‘steps’ to make the decision based on forecasting and scenario-setting for each possible alternative choice and the consequences of choosing it. One of the best theoretical research articles on decision-making I have read / been involved with is one written by Dr Andy Renfree, an old colleague from the University of Worcester, and one of the Sport Science academic world’s most creative thinkers. At a systems level, he suggested that decisions are made based on either rational or heuristic principles, the former working best in ‘small world’ environments (in which the individual making the decision has absolute knowledge of all decision-related alternatives, consequences and probabilities), and the latter best in ‘large world’ environments (in which some relevant information is unknown or estimated). As described by Andy, rational decision-making is based on the principle that decisions can only be made if certain criteria are met, namely that the individuals making the decision must be faced with a set of behavioral alternatives and, importantly, information must be available for all possible alternatives of decisions that can be made, as well as for the statistical probability of all of the outcomes of the choices that can be made. This is obviously a large amount of requisite information, and a substantial period of time would be required to make a decision based on such ‘rational’ requirements. While using this method would likely be the most beneficial from a correct-outcome perspective, it would also potentially place a high demand on the cognitive processes of the individual making the decision. Bayesian decision-making is a branch of rational decision-making theory, and suggests that decision-making is the result of unconscious probabilistic inferences. In Bayesian theory, a statistical approach to decision-making is taken based on prior experience, with decision-making valenced (and therefore speeded up) by applying a ‘bias’ towards information used to make the decision which is believed to be more ‘reliable’ than other information, and towards the ‘probability’ of outcomes being better or worse based on prior experience. Therefore, in the Bayesian model, prior experience ‘speeds up’ decision-making, though all information is still processed in this model.
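For those who like to see the formal version, the core of the Bayesian idea can be written as a single worked equation (a minimal illustration of the general principle only, not a formulation taken from Andy’s article):

\[
P(\text{outcome} \mid \text{new information}) \;=\; \frac{P(\text{new information} \mid \text{outcome}) \times P(\text{outcome})}{P(\text{new information})}
\]

Here the prior, P(outcome), encodes what prior experience suggests is likely; the likelihood term weights the new information according to how ‘reliable’ it is believed to be; and the decision-maker is assumed to favour the option whose posterior (the left-hand side) is most favourable. Prior experience ‘speeds up’ the decision because the prior has already done much of the computational work before the new information arrives.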

In contrast, heuristic decision-making is a strategic method of making decisions which ignores information that is available but is perceived to be less relevant to the specific decision being made, and which suggests that decisions are made based on key information and variables that are assessed and acted upon rapidly, in a manner that, as Andy suggests, incorporates ‘rule of thumb’ or ‘gut feel’ thinking, which places fewer demands on the cognitive thinking processes of the individual. As described above, rational decision-making may be more relevant in ‘small world’ environments, in which there are usually not a lot of variables or much complexity required to be assessed prior to making a decision, and heuristic thinking in ‘large world’ environments, which are complex environments where all information, whether relevant or not, cannot be known, due to the presence not only of ‘known unknowns’ but also ‘unknown unknowns’, and where an individual would potentially be immobilized into a state of ‘cognitive paralysis’ if attempting to assess every option available. The problem of course is that even decisions that appear simple often have multiple layers of complexity that are not overt and of which the individual thinking about them is not aware, and it can be suggested that the concepts of both rational decision-making and ‘small world’ environments are potentially abstract principles rather than reality, that all life occurs as part of ‘large world’ environments, and that heuristic processes are what are used by individuals as the main decision-making principles during all activities of daily living.
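As a purely illustrative toy sketch (my own, not anything from Andy’s article, with the cues, their ordering and the weights entirely hypothetical), the contrast between a ‘take-the-best’ style heuristic and an exhaustive weighing of every cue might look something like this:

```python
# Toy contrast between a 'take-the-best' heuristic and exhaustive weighing.
# The options, cues and weights below are entirely hypothetical.

options = {
    "cycle_home": {"legs_feel_ok": 0, "daylight_left": 1, "phone_charged": 1},
    "call_for_lift": {"legs_feel_ok": 1, "daylight_left": 1, "phone_charged": 1},
}

# Cues ordered from most to least trusted, as in take-the-best.
cue_order = ["legs_feel_ok", "daylight_left", "phone_charged"]


def take_the_best(a, b):
    """Heuristic: stop at the first cue that discriminates between the options."""
    for cue in cue_order:
        if options[a][cue] != options[b][cue]:
            return a if options[a][cue] > options[b][cue] else b
    return a  # no cue discriminates; default to the first option


def weigh_everything(a, b, weights):
    """'Rational' style: sum every weighted cue before committing to a choice."""
    score = lambda opt: sum(weights[c] * options[opt][c] for c in cue_order)
    return a if score(a) >= score(b) else b


if __name__ == "__main__":
    print(take_the_best("cycle_home", "call_for_lift"))
    print(weigh_everything("cycle_home", "call_for_lift",
                           {"legs_feel_ok": 0.6, "daylight_left": 0.3,
                            "phone_charged": 0.1}))
```

The point of the sketch is simply that the heuristic route stops at the first cue that discriminates between the options, whereas the ‘rational’ route insists on weighing everything before committing, which is exactly the extra cognitive load described above.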

Of course, most folk would perceive that these rational and heuristic models are very computationally and mathematically based, and that perhaps ‘feelings’ and ‘desires’ are also a component of decision-making, or at least that this is how decision-making is perceived to ‘feel’ to them. As part of the Somatic Marker hypothesis, Antonio Damasio suggested that ‘body-loop’ associated emotional processes ‘guide’ (and have the potential to bias) decision-making behavior. In his theory, somatic markers are a specific ‘group of feelings’ in the body, associated with the specific emotions one perceives when confronted with the facts or choices about which one needs to make a decision. There is suggested to be a different somatic marker for anxiety, enjoyment, or disgust, among other emotions, based on an aggregation of body-related symptoms for each, such as heart rate changes and the associated feeling of a pounding chest, the sensation of breathing changes, changes in body temperature, increased sweat rate, or the symptom of nausea, some or all of which together are part of a certain somatic marker group which creates the ‘feeling’ of a particular emotion. Each of these physiologically based body-loop ‘states’ is capable of being a component of different somatic marker ‘groups’, which create the distinct ‘feelings’ which are associated with different emotions, and which would valence decisions differently depending on which somatic marker state / emotion is created by thinking of a specific option or choice. This hypothesis is based on earlier work by William James and colleagues more than a hundred years ago, which became the James-Lange theory of emotion, which suggests there is a ‘body-loop’ required for the ‘feeling’ of emotions in response to some external challenge, which is in turn required for decision-making processes related to the external challenge. The example used to explain this theory was that when one sees a snake, it creates a ‘body loop’ of raised heart rate, increased sweating, increased breathing rate and the symptom of nausea, all of which in turn create the ‘feeling’ of fear once these ‘body-loop’ symptoms are perceived by the brain, and it was hypothesized that it is these body-generated feelings, rather than the sight of the snake itself, which induce both the feeling of fear and the decision to either rapidly run away or freeze and hope the snake moves away. While this model is contentious, as it would make reactions occur more slowly than if a direct cognitive decision-making loop occurred, it does explain the concept of a ‘gut feel’ when decision-making. Related to this ‘body-loop’ theory are other behavioral theories about decision-making, and it has been suggested that decisions are based on what the needs, preferences and values of an individual are, such as hunger, lust, thirst, fear, or moral viewpoint, but of course all of these could equally be described as components of either a rational or heuristic model, and psychological / emotional and cognitive / mathematical models of decision-making are surely not mutually exclusive conditions or theories.

These theories described above attempt to explain how and why we make decisions, but not what causes decisions to be right or wrong. Indeed, perhaps the most relevant issue to most folk is why they so often get decisions wrong. A simple reason may be that of ‘decision fatigue’, whereby the quality of decision-making deteriorates after a prolonged period of decision-making. In other words, one may simply ‘run out’ of the mental energy which is required to make sound decisions, perhaps due to ongoing changes in ‘somatic markers’ / body symptoms each time a decision is required to be made, which creates an energy cost that eventually ‘uses up’ mental energy (whatever mental energy is) over the period of time in which sequential decisions are required to be made. Astonishingly, judges working in court have been shown to make less favorable decisions as a court session progresses, and the number of favorable decisions improves after the judges have had a break. Apart from these data suggesting that one should ask for a court appearance early in the morning or after a break, they also suggest that either physical or mental energy in these judges is finite, and ‘runs out’ with prolonged effort and the use of energy focusing on decision-making related to each case over the time period of a court session. There are other more subtle potential causes of poor decision-making. For example, confirmation bias occurs when folk selectively search for evidence that supports a certain decision that they ‘want’ to make, based on an inherent cognitive bias set in their mind by past events or upbringing, even if their ‘gut’ is telling them that it is the wrong decision. Cognitive inertia occurs when folk are unwilling to change their existing environment or thought patterns even when new evidence or circumstances suggest they should. People tend to remember more recent information and use it preferentially, or forget older information, even if the older information is potentially more valid. Repetition bias occurs when folk make decisions based on what they have been told most often and by the greatest number of different people, and ‘groupthink’ is when peer pressure to conform to an opinion or group action causes individuals to make decisions they would not make if they were alone and not in the group. An ‘illusion of control’ in decision-making occurs where people have a tendency to under-estimate uncertainty because of a belief that they have more control over events than they actually have. While folk with anxiety tend to make either very conservative or paradoxically very rash decisions, sociopaths, who are thought to have little or no emotional ‘body-loop’, are very poor at making morally based decisions or judgments. Therefore, there are a whole lot of different factors which can impact negatively on decision-making, either due to one’s upbringing or prior history impacting on the historical memory which is used to valence decisions, or due to one’s current emotional or psychological state having a negative impact on decision-making capacity, and even simple fatigue can be the root cause of poor decision-making.

At the heart of decision-making (excusing the pun, from the perspective of the somatic marker hypothesis) is a desire of most folk to remove uncertainty from their lives, or to change their life or situation to a better state or place as a result of their decision, or to remove a stressor from their life that will continue unless they make a decision on how to resolve it, remove it, or remove themselves from whatever causes the stressor. However, during my days as a researcher at the University of Cape Town, we suggested that conditions of uncertainty and certainty associated with information processing and decision-making are cyclical (we called it the ‘quantal packet’ information processing theory, for those interested). A chosen decision will change a position or state of uncertainty to one of certainty as one enacts changes based on the decision (or chooses to ‘wait and see’ and not alter anything), from the context that one is certain a change will occur based on what one has decided to do, even if one cannot be sure whether this change will be positive or negative while it is being enacted. However, with the passing of time, the effects of the decision made will attenuate, and uncertainty will eventually re-occur, requiring a further decision to be made, often with similar choices to those which occurred when the initial decision was made. Underpinning this attenuation of the period of ‘certainty’ is the concept that although one will have factored ‘known unknowns’ into any decision one makes using either rational or heuristic principles, ‘unknown unknowns’ will surely always occur that will cause even the best strategic decisions to require tactical adjustments, and those that are proven to be an error will need to be reviewed and changed. One can also ‘over-think’ decision-making as much as one can ‘under-think’ it, as well as be kept ‘hostage’ to cognitive biases from one’s past which continuously ‘trip one up’ when making decisions, despite one’s best intentions. Having said all of this, it often astonishes me not that folk get decisions wrong, but rather that they get so many decisions right. For example, when driving along a highway, one is reliant for one’s survival on the correct decisions of every driver that passes, from how much they choose to turn their steering wheel, to how much they use their brake for a corner, to an awareness in each of them that they are not too tired to be driving in the first place. It’s amazing when one thinks of how many decisions we make, either consciously or unconsciously, which so often turn out right, but equally it is the responsibility of each of us to work on the errors created by our past, or by our emotional state, or by ‘groupthink’, which we need to be vigilant about and remove as best as possible from the psyche.

Making a decision is usually cathartic due to the removal of uncertainty and the associated anxiety which uncertainty often causes, even if the certainty and feeling of goodwill generated by making a decision is usually ephemeral and lasts only for a short period of time before other matters occupy one’s attention which require further decision-making. Pondering on my decision-making of the last week retrospectively, I think I made the right decision when choosing to cycle home after work, and to do so all the way home, even if I was exhausted when I got there, given that I did not collapse or have a heart attack when doing so, and there will surely be long-term health benefits from two long cycles (though of course long is relative at my age!) in one day. I did choose the healthy food alternative for breakfast this morning, even though I often don’t, particularly during meals when I am tired after a long day’s work. I will get the milk my wife asked me to get this afternoon, in order both to get some fresh air after a creative morning of thinking and writing, and to maintain the harmony in our house and life, even though it is raining hard and I would prefer to be writing more or reading a good book this afternoon. The ‘jury is still out’ about whether this move to New Zealand and a new work role has been a good career and country move, and my current decision on this is to let more time pass before making an action-generating reasoned decision on it, though of course we have already moved several times to new places around the world in the last two decades, and the family is looking forward to some lifestyle stability in the next few years, and these factors need to be part of any reflection on a decision rating our current environment. Each of these decisions seemed ostensibly relatively simple to make when I made them, yet each surely had an associated host of different reasons, experiences, memories and requirements which were worked through in and by my mind before making them, as will be so for all folk making decisions on all aspects of their life during a routine day. What will I have for lunch now that I am finished writing this and am tired and in need of a break and sustenance? Perhaps I will leave off that decision and relax for a period of time before making lunch-related choices, so as not to make a fatigue-induced bad decision and reach for that sausage roll, which is still in the fridge. And I need to get going and enact that decision I made to get the milk, and head off to the shops in order to do so as soon as possible, before lethargy sets in and I change my mind, otherwise I will surely be in the ‘dog box’ at home later this afternoon, and my sense of cathartic peace resulting from having made these decisions will be even more ephemeral than usual!


Strategy, Tactics And Objectives – In The Words Of The Generals, You Can’t Bake A Cake Without Breaking A Few Eggs

I have always enjoyed reading history, and particularly military history, both as a hobby and as a way of learning from the past in order to better understand the currents and tides of political and social life that ‘batter one’ during one’s three score and ten years on earth, no matter how much one tries to avoid them. Compared to folk who lived in the first half of the twentieth century, I perceive that we have lived our contemporary lives in an environment that is relatively peaceful, from the context that there has been no world war or major conflict for the last 70 or 80 years, though the world-wide political fluxes recently, particularly in the USA and Europe / UK, are worrying, as are the rising nationalism, divisive ‘single choice’ politics, intolerance of minorities, and increasing numbers of refugees searching for better lives, all eerily reminiscent of what occurred in the decade before the American Civil War and both World Wars. I recently read (or actually re-read – a particularly odd trait of mine is that I often read books a dozen or more times if I find something in them important or compelling from a learning perspective) a book on the Western Allies’ European military strategy in the Second World War, and on the disagreements that occurred between the United States General (and later President) Dwight Eisenhower and British General Bernard Montgomery over strategy and tactics used during the campaign, and how this conflict damaged relations between the military leaders of the two countries almost irreparably. I also re-read two autobiographies of soldiers involved in the war, the first by Major Dick Winters, who was in charge of a Company (Easy Company) of soldiers in the 506th Parachute Infantry Regiment of the 101st US Airborne Division, and the second an (apparently) autobiographical book written by Guy Sajer (if that was indeed his name), a soldier in the German Wehrmacht, about his personal experiences first as a lorry driver, then as a soldier on the Eastern front in the Grossdeutschland Division, and was struck by how different the two books were in content compared to the one on higher European military strategy, and also by how different the experiences were between Generals and foot soldiers, even though they were all involved in the same conflict. All this got me thinking of objectives, strategy and tactics, and how they are set, and how they impact on the folk that have to carry them out.

Both strategy and tactics are developed in order to achieve a particular objective (also known as a goal). An objective is defined as a desired result that a person or system envisions, plans, and commits to achieve. The leaders of most organizations, whether they are military, political, academic or social, set out a number of objectives they would like to achieve for the greater good of the organization they lead (though it is never acknowledged, of course, that they – the leaders – will get credit or glory for achieving the objective, and that this is often an ‘underlying’ objective in itself). In order to achieve an objective, a leader, or group of leaders, sets a particular strategy in order to do so. There are a number of different definitions of strategy, including it being a ‘high level’ plan to achieve an objective under conditions of uncertainty, or making decisions about how to best use resources available in the presence of often conflicting options, requirements and challenges in order to achieve a particular objective. The concept underpinning strategic planning is to set a plan / course of action that is believed will be best suited to achieve the objective, and stick to that plan until the objective is achieved. If conditions change in a way that makes sticking to the strategy difficult, then tactics are used to compensate and adjust to the conditions while ‘maintaining’ the overall strategic plan. Tactics as a concept are often confused with strategy – but they are in effect the means and methods of how a strategy is implemented, adhered to, and maintained, and can be altered in order to maintain the chosen strategy.

What is strategy and what are tactics becomes challenging when there are different ‘levels’ of command in an organization, with lower levels having more specific objectives which are individually required in order to achieve the over-arching objective, but which require the creation of specific ‘lower-level’ strategy in order to reach the specific objective being set, even if that objective is a component of a higher level strategic plan. From the viewpoint of the planners that create the high-level / general objective strategy, the lower level plans / specific objectives would be tactics. From the viewpoint of the planners that set the lower-level strategy needed to complete a specific component of the general strategy, their ‘lower level’ plans would be (to them) strategy rather than tactics, with tactics being set at even lower levels in their specific area of command / management, which in turn could set up a further ‘debate’ about what is strategy and what is tactics at these even ‘lower’ levels of command. Even the poor foot soldier, who is a ‘doer’ rather than a ‘planner’ of any strategic plan or tactical action enacted as part of any higher level of command, would have their own objectives beyond those of the ‘greater plan’, most likely that of staying alive, and would have their own strategic plan to both fulfil the orders given to them and stay alive, and tactics of how to do so. So in any organization there are multiple levels of planning and objective setting, and what is strategy and what is tactics often becomes confused (and commanders at lower levels of command often find orders given to them inexplicable, as they don’t have awareness of how their particular orders fit into the ‘greater strategic plan’), and this requires constant management by those at each level of command.

It is perhaps a lack of clarity about the specific objectives behind the creation of a particular strategy which causes most command conflict, and this is what happened in the later stages of the second World War, and was one of the main causes of the deterioration of the relationship between Dwight Eisenhower and Bernard Montgomery. The objective of the Allies in Western Europe was relatively simple – enter Europe and defeat Germany (though of course the war was mostly won and lost on the Eastern front due to Russian sacrifice and German strategic confusion) – but it was the strategy of how this was to happen which led to the inter-ally conflict, of which so much has been written. Eisenhower was the supreme Allied Commander, responsible for all the Allied troops in Western Europe and for setting the highest level of strategic planning. He decided on a ‘broad front’ strategy, where different Army Groups advanced eastwards across Europe after the breakout from Normandy, in a line from the northern coast of Europe to the southern coastline of Mediterranean Europe. Montgomery was originally the commander of all Allied ground troops in Europe, then after the Normandy breakout became commander of the 21st Army Group, which was predominantly made up of British and Commonwealth troops (but also contained a large contingent of American troops), and he favoured a single, ‘sharp’ method of attacking one specific region of the front (of course choosing an area for attack in his own region of command). Montgomery’s doctrine was that which most strategic manuals would favour, and Eisenhower was sharply criticized by military leaders both during and after the war for going against the accepted strategic ‘thinking’ of that time. But Eisenhower of course had not just military objectives to think about, but also political requirements, and had to maintain harmony between not just American and British troops and nations, but also a number of Commonwealth countries’ troops and national requirements. If he had chosen one specific ‘single thrust’ strategy, as Montgomery demanded, he would have had to choose either a British-dominated or American-dominated attack, led by either a specific British or American commander, and neither country would have ‘tolerated’ such ‘favouritism’ on his part, and this issue was surely a large factor when he decided on a ‘broad front’ strategy. There was clearly military strategic thinking on his part too – ‘single thrust’ strategies can be rapidly ‘beaten back’ / ‘pinched off’ if performed against a still-strong military opposition, as was the case when Montgomery chose to attack on a very narrow line to Arnhem, and this proved to be more than a ‘bridge too far’ – the German troops simply shut off the ‘corridor’ of advance behind the lead troops and the Allies were forced to withdraw in what was a tactical defeat for them. Montgomery criticized Eisenhower’s ‘broad front’ as leading to, or allowing, the ‘Battle of the Bulge’ to occur, when the German armies in late 1944 counter-attacked through the Belgian Ardennes region towards Antwerp and caused a ‘reverse bulge’ in the Allied ‘broad front’ line, but in effect the rapidity with which the Allies closed down and defeated this last German ‘counter-thrust’ paradoxically provided evidence against the benefits of Montgomery’s ‘single thrust’ strategy, even though he used the German Ardennes offensive to condemn Eisenhower’s ‘broad front’ strategy.
Perhaps Eisenhower should have been clearer about the political nature of his objectives and the political requirements of his planning, but then he would have been criticized for allowing political factors to ‘cloud’ what should have been purely military decisions (at least by his critics), so like many leaders setting ‘high level’ strategy, he was ‘doomed’ to be criticized whatever his strategic planning was, even if the ‘proof was in the pudding’ – his chosen strategy did win the war, and did so in less than a year after it was initiated, after the Allies had been at war for more than five years before the invasion of Western Europe was planned and launched.

Whatever the ‘high level’ strategic decisions made by the Generals, the situation ‘on the ground’ for Company leaders and foot soldiers who had to enact these strategies was very different, as was well described in the books by Dick Winters (the book became a highly praised TV series – Band of Brothers) and Guy Sajer. Most of the individual company level actions in which Easy Company participated bordered on the shambolic – from the first parachute drop into enemy-held France, where most of the troops were scattered so widely that they fought mainly skirmishes in small units, to operations supporting Montgomery’s ‘thrust’ to Arnhem, which were a tactical failure and resulted in them withdrawing in defeat, to the battle of Bastogne, a key component of the battle of the ‘Bulge’, where they just avoided defeat, sustained heavy casualties, and only just managed to ‘hold on’ until reinforcements arrived. A large number of the operations described were therefore not tactically successful, yet played their part in a grand strategy which led to ultimate success. The impact of the ‘grand strategy’ on individual soldiers was horrifyingly (but, from a writing perspective, beautifully) described in Guy Sajer’s autobiography, which is a must-read for any prospective military history ‘buff’ – most of his time was spent marching in bitter cold or thick mud from one area of the Eastern front to another as his Division was required to stem yet another Russian breakthrough, or trying to find food with no formal rations being brought up to them as the Wehrmacht operational management collapsed in the last phases of the war, or watching his friends being killed one by one in horrific ways as the Russian army grew more successful and more aggressive in their desire for both revenge and military success. There was no obvious pattern or strategy to what they were doing at the foot soldier level, and there were no military objectives that could be made sense of at the individual level he described; rather there was only the ‘brute will to survive’, and to kill or be killed, and only near the end did he (and his company level leaders) realize that they were actually losing the war, and that their defeat would mean the annihilation of Germany and everything they were fighting for ‘back home’. Yet it was surely the individual actions of soldiers, in their thousands and millions, who endured and died for either side, that in a gestalt way led to the strategic success (or failure) planned for by their leaders and generals, even if at their individual level they could make little sense of the benefit of their sacrifice in the context of the broader tactical and strategic requirements, in the times when they could reflect on this, though surely most of their own thoughts were on surviving another terrible day, or another terrible battle, rather than on its ‘meaning’ or relevance.

One of the quotes that I have read in military history texts that has caused me to reflect most about war and strategy as an ‘amateur’ military history enthusiast is attributed to British World War Two Air Marshal Peter Portal, who, when discussing what he believed to be defective strategic planning with his colleague and Army counterpart Field Marshal Alan Brooke, apparently suggested that ‘one cannot make a cake without breaking some eggs’. What he was saying, if I understood it correctly, and if the comment can indeed be attributed to him, was that in order for a military strategy to be successful, some (actually, most of the time, probably many) individual soldiers have to be sacrificed and die for the ‘greater good’ which would be a successfully achieved objective. From a strategic point of view he was surely correct, and Generals who don’t take risks and worry too much about their soldiers’ safety can paradoxically often cause more harm than good by developing an overly cautious strategy which has an increased risk of failure and therefore an increased risk of more soldiers dying. But from a human point of view the comment is surely chilling, as each soldier’s individual death, often in brutal conditions, is horrific both to those that it happens to and to those relatives, friends and colleagues that survive them. Often, or perhaps most of the time, individual soldiers die without any real understanding of the strategic purpose behind their death, and with a wish just to be with their loved ones again, and to be far from the environment and actions which cause their death. The folk at senior leadership levels setting grand strategy require a high degree of moral courage to ‘see it through’ to the end, knowing that their strategy will surely lead to a number of individual deaths. The folk who enact the grand strategy ‘in the trenches’ need a high degree of physical courage to perform the required actions to do so in conditions of grave danger, actions that as a small part of the ‘big picture’ may help lead to strategic success and attainment of the set objectives, usually winning in a war sense. But every side has its winners and its losers, and there is usually little difference between these for the foot soldier or Company leader, who dies in either a winning or losing cause, with little knowledge of how their death has contributed in any way to either winning or losing a battle, or campaign, or war.

Without objectives, strategy and tactics, there would never be any successful outcome to any war, and a lot of soldiers would die. With objectives, tactics and strategy, there is a greater chance of a successful outcome to any war, but a lot of soldiers will still surely die. The victory cake always tastes wonderful, but always, sadly, to make such a ‘winners’ cake, many eggs do indeed need to be broken. It will long be controversial which is more important in the creation of the cake, the recipe or the eggs that make it up. Similarly, it will long be controversial whether a ‘broad front’ or ‘single thrust’ strategy was the correct strategic or tactical approach to winning the war in Western Europe. But the foot soldier would surely not care whether his or her death was in the cause of tactical or strategic requirements, or happened during a ‘broad front’ or ‘single thrust’ strategy, when he or she is long dead and long forgotten, and historians are debating which General deserves credit for planning the strategy, or lack of it, that caused their death. That’s something I will ponder on as I reach for the next of the books on war strategy that fill the bookshelf next to my writing desk, and hope that my children will never be in the position of having to be either the creators, or enactors, of military strategy, tactics and objectives.


The Collective Unconscious And Synchronicity – Are We All Created, Held Together And United As One By Mystic Bonds Emanating From The Psyche

Earlier this week I thought of an old friend and work colleague I had not been in contact with for many years, Professor Patrick Neary, who works and lives in Canada, and a few hours later an email arrived from him with all his news and recent life history detailed in it, and in which he said he had thought of me this week and wondered what I was up to. Yesterday, in preparation for writing this article, I was reading up on and battling to understand the concept of the psychological ‘Shadow’, one of Carl Jung’s fascinating theories, and noticed a few hours later that Angie Vorster, a brilliant Psychologist we recently employed as a staff member in our Medical School to assist struggling students, had posted an article on the ‘Shadow’ in her Facebook support page for Medical Students. Occasionally when I am standing in a room filled with folk, I feel ‘energy’ from someone I can’t see, and turn around and a person is staring at me. Watching a video last night, I saw, in a scene about religious fervour, all the folk in a church raising their hands in the air to celebrate their Lord. Earlier that afternoon I couldn’t help noticing that a whole stadium of people watching a rugby game raised their hands in the air, in the same way as those in the church did, to celebrate when their team scored the winning try. Sadly, perhaps because I read too much existentialism-related text when I was young, I don’t have any capacity to believe in a God or a religion, but on a windy day, when I am near a river or the ocean, I can’t help raising my hands to the sky and looking upwards, acknowledging almost unconsciously some deity or creative force that perhaps created the magical world we inhabit for three score years and ten. All of these got me thinking of Carl Jung, perhaps one of my favourite academic Psychologists and historical scientific figures, and his fascinating theories of the collective unconscious and synchronicity, which were his attempts to explain his belief that we all have similar psychological building blocks that are inter-connected and possibly a united ‘one’ at some deep or currently not understood level of life.

Carl Jung lived and produced his major creative work in the first few decades of the 20th century, in what some folk call the golden era of Psychology, when he and colleagues Sigmund Freud, Alfred Adler, Stanley Hall, Sandor Ferenczi and many others changed both our understanding of how the mind works and our understanding of the world itself. He was influenced by Sigmund Freud, and for a period was his protégé, until they fell out when Jung began distancing himself from Freud’s tunnel-vision view that the entire unconscious and all psychological pathology had an underlying sexual focus and origin. He acknowledged Freud’s contribution of describing and delineating the unconscious as an entity, but thought that the unconscious was a ‘process’ in which a number of lusts, instincts, desires and future wishes ‘battled’ with rational understanding and logical ‘thoughts’, all of which occurred at a ‘level’ beyond that perceived by our conscious mind. He went further though, and after a number of travels to India, Africa and other continents and countries, where he did field studies of (so-called) ‘primitive’ tribes, he postulated that all folk had what he called a collective unconscious, which contained a person’s primordial beliefs, thought structures, and perceptual boundary-creating ‘archetypes’, which were all universal, inherent (as they occurred in tribes and people which had not interacted together for thousands of years due to geographical constraints), and responsible for creating and maintaining both one’s world view and personality.

To understand Jung’s theory of the collective unconscious and its underpinning archetypes, one has to understand a debate that has not been successfully ‘settled’ since the time of Aristotle and Plato. Aristotle (and other folk who became known later as the empiricists) believed that all that can be known or occur is a product of experience and life lived. In this world view, the idea of the ‘Tabula rasa’ (blank slate) predominates, which suggests that all individuals are born without ‘built-in’ mental ‘knowledge’, and therefore that all knowledge needs to be developed by experience and perceptual processes which ‘observe’ life and make sense of it. Plato (and other folk who became known as Platonists, or alternatively rationalists) believed that ‘universals’ exist and occur which are independent of human life processes, and which are ‘present’ in our brain and mental structures from the time we are born, and that these universals ‘give us’ our understanding of life and how ‘it’ works. For example, Plato used the example of a horse – there are many different types, sizes and colours of horses, but we all understand the ‘concept’ of a horse, and this ‘concept’ in Plato’s opinion was ‘free-standing’ and exists as a ‘universal’ or ‘template’ which ‘pre-figures’ the existence of the actual horse itself (obviously religion and the idea that we are created by some deity according to his plan for us would fall into the Platonic ‘camp’ / way of thinking). This argument about whether ‘universals’ exist or whether we are ‘nothing’ / a Tabula rasa without developed empirical experience has never been completely resolved, and it is perhaps unlikely that it ever will be unless there is a great development in the capacity or structures of our mental processes and function.

Jung took the Platonist view, and believed that at a very deep level of the unconscious there were primordial, or ‘archetypical’, psychological universals, which have been defined as innate, universal prototypes for all ‘ideas’ which may be used to interpret observations. Similar to the idea that one’s body is created based on a template ‘stored’ in one’s DNA, in his collective unconscious theory the archetypes were the psychological equivalents of DNA (though of course DNA was discovered many years after Jung wrote about the collective unconscious and synchronicity) and the template from which all ideas and concepts developed, and which are the frame of reference through which all occurrences in the world around one are interpreted. Some archetypes that he (and others) gave names to were the mother figure, the wise old man figure, the hero figure, the ego and shadow (one’s positive and negative ‘sense of self’) and the anima and animus (the ‘other’ gender component of one’s personality) archetypes, amongst others. He thought that these were the ‘primordial images’ which both filtered and in many ways created one’s ‘world view’ and governed how one reacted to life. For example, if one believes that one’s own personality is that of a ‘hero’ figure, and ‘chooses’ it as one’s principal archetype, one would respond to life accordingly, and constantly try to solve challenges in a heroic way. In contrast, if one based one’s sense of self on a ‘wise old man’ (perhaps to be gender indiscriminate it should have been described as a ‘wise old person’) archetype, one would respond to life and perceived ‘challenges’ in a wise ‘old man’ way rather than a ‘heroic’ figure way. He came to develop these specific archetypes by examining the religious symbols and motifs used across different geographically separated tribes and communities, and finding that similar ‘images’, or ‘archetypes’ as he called them, occurred across these diverse groups of folk and were revered by them as images of worship and / or as personality types to be deified. Jung suggested that from these ‘basic’ archetypes an individual could create their own particular archetypes as they developed, or that one’s ‘self’ could be a combination of several of them – but also that there were specific archetypes that resided in each individual, were similar across all living individuals, and were conservatively maintained across generations as ‘universals’.

Jung went even further in exploring the ‘oneness’ of all folk with his theory of synchronicity, which suggested that events are ‘meaningful coincidences’ if they occur with no (apparent) causal relationship, but appear to be ‘meaningfully related’. He was always somewhat vague about exactly what he meant by synchronicity. In the ‘light’ version he suggested that the archetypes which are the same in all people allow us all to ‘be’ (or at least think) similarly. In the ‘extreme’ version of this theory (which was also called ‘Unus mundus’, which is Latin for ‘one world’) it is suggested that we all belong to an ‘underlying unified reality’, and are essentially ‘one’, with our archetypes allowing our individual ‘reality’ to emerge as perceptually different to other folk and unique to us, but this archetype-generated reality is illusory and ‘filtered’, and comes from the same ‘Unus mundus’ in which and of which we all exist, and to which we all eventually return. He based this observation on events similar to those which I described above as happening to me, where friends contacted him when he was thinking of them, and when events that happened to geographically separated folk were so similar that to him the laws of chance and statistical probability could not explain them away. While these theories may appear to be somewhat ‘wild’ in their breadth of vision, it is notable that Physics as a discipline explores this very concept of ‘action at a distance’ in its ‘nonlocality’ theories, which are defined as the concept that an object can be moved, changed, or otherwise affected without being physically touched by another object. The theories of relativity and quantum mechanics, whether one believes them or not, are underpinned by these concepts, which similarly, as described above, underpin Jung’s theory of synchronicity.

It is very difficult to either prove or refute Jung’s theories of the collective unconscious, archetypes, and synchronicity, and they have therefore often been given ‘short shrift’ by the contemporary scientific community. But Jung is not to blame that even today our neuroscience and brain and mental monitoring devices are so primitive that they have not helped us at all to understand either basic brain function or how the rich mosaic of everyone’s own private mental life occurs and is maintained, and he would say it is the fact that we each ‘choose’ different archetypes for our own identity and as a filter of life that makes it ‘feel’ to us as if we are isolated individuals living a discrete and ‘detached’ life, and perceive that our life is ‘different’ to all others. It has also been suggested that the reason why we have similar beliefs and make people out to be heroes, or wise men, or mother figures, in our life, is not because of archetypes, but rather because we have similar experiences and respond to our environment and to the symbolism that is ‘seen’ on a continuous basis during our daily life, and which is evident in churches and religious groups, in politics and group management activities, and in advertising (marketers have made great use of archetypes to influence our choices by how they create adverts since Jung suggested these concepts – think of the use of snake and apple motifs, apart from the kind mother or heroic father archetypes which are so often used in adverts). Jung would answer in a chicken and egg way, and ask where all these symbols, motifs and group responses originated from if they were not created or developed from something deep inside us / our psyche. His theory of synchronicity has also been criticized by some as being confused with pure chance and probability, or as an example of a confirmation bias in folk (a tendency to search for and interpret new information in a way that confirms one’s preconceptions), and the term apophenia has been developed to describe the mistaken detection of meaning in random or meaningless data. But how then does one explain my friend writing to me this week when I was thinking about him a day or two before his email arrived, or how, when I am battling to understand a psychological concept, the psychologist I work with posts an explanation of exactly what I am battling with (even if I have never told her I am working on understanding these concepts this week) on Facebook, or how the ‘feeling’ one has that someone is watching one occurs, and when turning around one finds that they are indeed watching you? These may indeed be chance, and I may be suffering from ‘apophenia’, but the opposite may also be true.

I have been a scientist and academic for nearly thirty years now, and have developed a healthy scepticism and ‘nonsense-ometer’ for most theories and suggestions which seem outrageous and difficult to prove with rigorous scientific measurements (or the lack of them). But there is something in Carl Jung’s theories of the collective unconscious, archetypes and synchronicity that strikes a deep chord in me, and my ‘gut feel’ is that they are right, even though with our contemporary scientific measuring devices there is no way they can be either surely proved or disproved. Perhaps this is because I want to, and enjoy, ‘connecting’ with folk, and it is caused by some inherent psychological need or weakness in my psyche (or because I have chosen the wrong ‘archetype’ / my current sense of self does not ‘fit’ the life I have chosen and this creates a dissonance that makes me want to believe that Jung was right – how’s that for some real ‘psychobabble’!). But this morning my wonderful daughter, Helen (age 8), gave me a card she had made at school after all the girls in her class had been given a card template to colour in, and the general motif / image on the card (and I assume on all the printed cards) was that of a superman – it’s difficult not to believe that a chosen ‘hero’ motif provides evidence for an archetype when such is chosen by a school-teacher as what kids should use to describe their father (though surely I, like most dads, am not deserving of such a description). This afternoon I will take the kids and dogs for a walk around the dam near where I live, and will very likely raise my hands to the water and wind and sky around me when I do so, as much as it is likely that the folk who will be going to church at the same time will be raising their hands to their chosen God, and those going to watch their team’s football match this afternoon will raise their hands to the sky when their team scores – all doing what surely generations of our ancestors did in the time before now. While we all appear to act so differently during our routine daily life, there is always a similar response amongst most folk (excluding psychopaths, but that is for another article / another day) to real tragedy, or real crises, or real good news, when it occurs, and so often folk will admit, if pushed, that they appeal either to a ‘hero’ figure to protect or save them in times of danger, or a ‘mother’ figure to help ‘heal their pain’ after tragedy occurs, and these ‘calls for help’ / succour are surely archetype related (and indeed it has been suggested that the image of God has been created as a ‘hero’ or ‘father’ figure out of an archetype by religious folk – though equally, religious folk would say that if there are archetypes, they may have been created in their God’s image).

Our chosen archetypes create a filter and a prism through which life and folk’s behaviour might appear different, and indeed may be different, but at the level of the hypothesized ‘collective unconscious’, in all of us, there is surely similarity, and perhaps, just perhaps, as Jung suggests, we are all ‘one’, or at least mystic bonds are indeed connecting us at some deep level of the psyche or at some energy level we currently don’t understand and can’t measure. How these occurred or were generated as ‘universals’, as per the thinking of Jung and Plato, is perhaps for another day, or perhaps another generation, to explain. Unus mundus or Tabula rasa? Collective unconscious or unique individual identity? Mystic connecting bonds or splendid isolation? I’ll ponder on these issues as I push the ‘publish’ button and send this out to all of you, in the hope that it ‘synchronises’ in some way with at least some of you who read it, though of course via Jung’s ‘mystic bonds’ you may already be aware of all I have written!


Testosterone And Its Androgenic Anabolic Derivatives – One Small Drop Of Liquid Hormone That Can A Man Make And Can A Man Break

I watched a great FA Cup football final last night, and was amused as always when players confronted each other after tackles with aggressive postures and pouting anger-filled stares – all occurring in front of a huge crowd looking on and under the eyes of the referee to protect them. On Twitter yesterday and this morning I was engaged in a fun scientific debate with some male colleagues and noted that each time the arguments became ‘ad hominem’ the protagonists became aggressive and challenging in their responses, and only calmed down and became civil again when they realized it was banter. I have over many years watched my wonderful son grow up daily, and now that he is ten I have observed some changes occurring in him that are related to the increasing development of ‘maleness’ which occurs in all young men of his age. In my twenties, while completing my medical and PhD training, I worked part time as a bouncer, and it was always fascinating to see the behaviour of males in the bars and clubs I worked in change when around females ‘dressed to kill’ and out for the evening. With the addition of alcohol this became a dangerous ‘cocktail’ late in the evenings, with violence often breaking out as the young men tried to establish their dominance and ‘turf’, or as a result of perceived negative slights which ‘honour’ demanded they respond to, and which resulted in a lot of work for me in the bouncer role to sort out. All this got me thinking of the male hormone testosterone and its effect on males through their lifetime, both good and bad.

Testosterone is the principal male sex hormone that ‘creates’ the male body and mind from the genetic chromosomal template supplied at conception. It is mostly secreted by the testicles in men, and to a lesser degree from the ovaries in women, with some secretion also from the adrenal glands. Testosterone concentrations are approximately 7-8 times higher in males than in females, but it is also present in females, and females are susceptible to (and may even be more sensitive to) its actions. Testosterone is a steroid-type hormone, derived originally from cholesterol-related chemical substances which are turned into testosterone through a complex pathway of intermediate substances. Its output from the testes (or ovaries) is stimulated by a complex cascade of neuro-hormonal signals that arise from brain structures (gonadotrophin releasing hormone is released by the hypothalamus structure in the brain and travels to the pituitary gland, which in turn releases luteinizing hormone and follicle stimulating hormone, which travel in the blood to the testicles and in turn cause the release of testosterone into the bloodstream) in response to a variety of external and internal stimuli (though what controls testosterone’s release, and how it is controlled, in this cyclical manner over many years is almost completely unknown). The nature of ‘maleness’ has been debated as a concept since antiquity, but it was in the 1800’s that real breakthroughs in the understanding that there was a biological basis to ‘maleness’ occurred, with hormones being identified as chemical substances in the blood, and several scientist folk such as Charles Brown-Sequard doing astonishing things like crushing up testicles and injecting the resultant product into their own bodies to demonstrate the ‘rejuvenating’ effect of the ‘male elixir’. Eventually, in the 1930’s, the ‘golden age’ of steroid chemistry, testosterone was isolated as the male hormone – it was named as a conglomerate derivative of the words testicle, sterol and ketone – its structure was identified, and synthetic versions of testosterone were produced as medical treatment analogues for folk suffering from low testosterone production due to hypogonadism (reduced production of testosterone due to testicular function abnormality) or hypogonadotropism (reduced production of testosterone due to dysfunction of the ‘higher’ level testosterone release control pathways in the brain described above).

Testosterone acts in both an anabolic (muscle and other body tissue building) and androgenic (male sex characteristic development) manner, and one of the most fascinating things about it is that it acts in a ‘pulsatile’ manner during life – increasing dramatically at very specific times in a person’s life to effect changes that are absolutely essential for both the development and maintenance of ‘maleness’. For example, in the first few weeks after conception in males there is a spike in testosterone concentration in the foetus that results in the development of the genitals and prostate gland. Again, in the first few weeks after birth testosterone concentrations rise dramatically, before attenuating in childhood, after which a further increase in the pre-puberty and pubertal phases occurs, when it is responsible for increases in muscle and bone mass, the appearance of pubic and axillary hair, adult-type body odour and oily skin, increased facial hair, deepening of the voice, and all of the other features associated with (but not all exclusive to) ‘maleness’. If one of these phases is ‘missed’, normal male development does not occur. As males age, the effects of the continuously raised testosterone associated with adulthood become evident as loss of scalp hair (male pattern baldness) and increased body hair, amongst other changes. From around the age of 55 testosterone levels decrease significantly, and remain low in old age. Raised testosterone levels have been related to a number of clinical conditions that in the past have been more common in males than females, such as heart attacks, strokes and lipid profile abnormalities, along with an increased risk of prostate (of course it’s not surprising that this is a male-specific disorder) and other cancers, although not all studies support these findings, and the differences in the gender-specific risk of cardiovascular disorders in particular are decreasing as society has ‘equalized’ and women’s work and social lives have become more similar to those of males in comparison to the more patriarchal societies of the past.

More interesting than the perhaps ‘obvious’ physical effects are the psychological effects of testosterone on ‘male type’ behaviour, though of course the ‘borders’ between what is male or female type behaviour are difficult to clearly delineate. Across most species testosterone levels have been shown to be strongly correlated with sexual arousal, and in animal studies when an ‘in heat’ female is introduced to a group of males, their testosterone levels and sex ‘drive’ increase dramatically. Testosterone has also been correlated with ‘dominance’ behaviour. One of the most interesting studies I have ever read about was one where the effect of testosterone on monkey troop behaviour was examined; in these troops there are strict social hierarchies, with a dominant male who leads the troop, submissive males who do not challenge the dominant male, and females which are ‘serviced’ only by the dominant male and do not challenge his authority. When synthetic testosterone was injected into the males, it was found that the dominant male became increasingly ‘dominant’ and aggressive, and showed ‘challenge’ behaviour (standing tall with taut muscles in a ‘fight’ posture, angry facial expressions, and angry calls, amongst others) more often than usual, but in contrast, the testosterone injections had no effect on the non-dominant male monkeys. When the females were injected with testosterone, most of them became aggressive, and challenged the dominant male and fought with him. In some cases the females beat the dominant male in fighting challenges, and became the leader of the troop. Most interestingly, these newly dominant females, when the testosterone injections were discontinued, did not revert back to their prior submissive status, but remained the troop leader and maintained their dominant behaviour even with ‘usual’ female levels of testosterone. This fascinating study showed that there is not only a biological effect of testosterone on social dominance and hierarchy structures, but that there is also ‘learned’ behaviour, and once one’s role in society is established, it is not challenged whatever the testosterone level.

Raised testosterone levels have also been linked with levels of aggression, alcoholism, and criminality (being higher in all of these conditions), though this is controversial, and not all studies support these links, and it is not clear from the ‘chicken and egg’ perspective whether increased aggression and antisocial behaviour is a cause of increased testosterone levels, or a result of them. It has also been found that athletes have higher levels of testosterone (both males and females) during sport participation, as have folk watching sporting events. In contrast, both being ‘in love’ and fatherhood appear to decrease levels of testosterone in males, and this may be a ‘protective’ mechanism to attenuate the chance of a male ‘turning against’ or being aggressive towards their own partner or children. Whether this is true or not requires further work, but clearly there is a large psychological and sociological component to both the functionality and requirements of testosterone, beyond its biological effects. One of the most interesting research projects I have been involved with was at the University of Cape Town in the 1990’s, where along with Professor Mike Lambert and Mike Hislop, we studied the effect of testosterone ingestion (and reduction of testosterone / medical castration) on male and female study participants. We found not only changes in muscle size and mass in those taking testosterone supplements, but also that participants ingesting or injecting testosterone had to control their aggression levels and be ‘careful’ of their behaviour in social situations, while women participants described that their sex drive increased dramatically when ingesting synthetic testosterone. In contrast, men who were medically castrated described that their libido was decreased during the study time period when their testosterone levels were reduced by testosterone antagonist drugs to very low levels (interestingly they only realized this ‘absence’ of libido after being asked about it). All these study results confirm that testosterone concentration changes induce both psychological and social outcomes and not just physical effects.

Given in particular its anabolic effects, testosterone and its synthetic chemical derivatives, known commonly as anabolic steroids, became attractive as performance enhancing drugs to athletes in the late 1950’s and 1960’s, as a result of them being mass produced synthetically from the 1930’s, and as athletes became aware of their muscle and therefore strength building capacity after their use in clinical populations. Until the 1980’s, when testing for them as banned substances meant it became risky to use them, anabolic steroids were used by a large number of athletes, particularly in the strength and speed based sporting disciplines. Most folk over 40 years old will remember Ben Johnson, the 1988 Olympic 100m sprint champion, being stripped of his winner’s medal for testing positive for an anabolic steroid hormone during a routine within-competition drug test. Testosterone is still routinely used by body-builders, and worryingly, it has been suggested that a growing number of school level athletes are using anabolic steroids, and there has also been a growth in their use as ‘designer drugs’ in gyms to increase muscle mass in those that have body image concerns. An interesting study / article pointed out that boys’ toys have grown much more ‘muscular’ since the 1950’s, and that this is perhaps a sign that society places more ‘value’ on increased muscle development and size in contemporary males, and this in a circular manner probably puts more pressure on adolescent males to increase their muscle size and strength due to perceived societal demands, and thereby increases the pressure on them to take anabolic steroids. There is also suggested to be an increase in the psychological disorder known as ‘muscle dysmorphia’ or ‘reverse anorexia’ in males, where (mostly) young men believe that no matter how big they are muscle size wise, they are actually thin and ‘weedy’, and they ‘see’ their body shape incorrectly when looking in the mirror. This muscle dysmorphia population is obviously, as a group, highly prone to the use (perhaps one should say abuse) of anabolic steroids. There appears also to be an increase in anabolic steroid use in the older male population group, perhaps due to a combination of concerns about diminishing ‘male’ function with increasing age, a desire to maintain sporting prowess and dominance, and a perception that a muscular ‘body beautiful’ is still desirable to society even in old age – which is a concern due to the increased cardiovascular and prostate cancer risks taking anabolic steroids can create in an already at-risk population group. There is also a growth in the number of women taking anabolic steroids / synthetic testosterone, both due to their anabolic effects and their (generally) positive effects on sex drive, and a number of women body builders use anabolic steroids for competitive reasons due to their anabolic effect on muscle, despite the risk of the development of clitoral enlargement, deepening voice, and male-type hair growth, amongst other side effects, which potentially can result from females using anabolic steroids. Anabolic steroid use therefore remains an ongoing societal issue that needs addressing and further research, to understand both its incidence and prevalence, and to determine why specific population groups choose to use these substances.

It has always been amazing to me that a tiny biological molecule / hormone, which testosterone is, can have such major effects not only on developing male physical characteristics, but also on behavioural and social activity and interactions with other folk, and in potentially setting hierarchal structures in society, though surely this ‘overt’ effect has been attenuated in modern society where there are checks and balances on male aggression and dominance, and females now have equal chances to men in both the workplace and leadership role selection. Testosterone clearly has a hugely important role in creating a successfully functioning male, both personally and from a societal perspective, but testosterone can also be every male’s ‘worst enemy’ without social and personal ‘higher level’ restraints on its potential unfettered actions and ways of working. It has a magic in its function when its effects are seen on my young son as he approaches puberty and suddenly his body and way of thinking changes, or when its effects are seen (from its diminishment) in the changes of a man in love or in a new father. Perhaps there is magic also in the reduction of testosterone that occurs in old age, as this is likely to be important in allowing the ‘regeneration’ of social structures, by allowing new younger leaders to take over from previously dominant males, with this attenuation of testosterone levels perhaps making older males ‘realize’ / more easily accept that their physical and other capacities are diminished enough to ‘walk away’ gracefully from their life roles without the surges of competitive and aggressive ‘feelings’ and desires that a continuously high level of testosterone might engender in them if it remained high into old age. But testosterone has an ugliness in its actions too, which was evident in my time working as a bouncer in bars and clubs, when young men became violent with other young men as a way of demonstrating their ‘maleness’ to the young females who happened to be in the same club and were the (usually) unwitting co-actors in this male mating ritual drama which enacted itself routinely on most Friday and Saturday nights, usually fuelled by too much alcohol. Its ugliness is also evident on the sporting field when males kick other men lying helpless on the ground in a surge of anger due to losing the game or for a previous slight, despite doing so within the view of a referee, spectators and TV cameras. Its ugliness is also evident in the violence one sees after a soccer game, when fans whose testosterone levels have been raised by watching the game prey on rival fans, and in a myriad of other social situations where males try to become dominant to lever the best possible situation or to attract the best possible mate for themselves, at the expense of all those around them – whether in a social or work situation, or a Twitter discussion, or even a political or an academic debate – the ‘male posturing’ is evident for all to see in each situation, whether it is physical or psychological. Perhaps it was not for the sake of a horseshoe that the battle was lost, but rather because of too little, or too much, testosterone coursing around the veins of those directing it. There are few examples as compelling as the function of the hormone testosterone in shaping male behaviour for demonstrating how complex, exquisite and essential the relationship between biological factors, psychological behaviour and social interplay is.
What truly ‘makes up’ a man and what represents ‘maleness’ though, is of course another story, and for another debate!


Athlete Pre-Screening For Cardiac And Other Clinical Disorders – Is It Beneficial Or A Classic Example Of Screening And Diagnostic Creep

Last week the cycling world was rocked by the death of an elite cyclist, who died competing in a professional race of an apparent heart attack. A few years ago, when I was living in the UK, the case of a professional football player who collapsed in the middle of a game as a result of having a heart attack, and who only survived thanks to the prompt intervention of pitch-side Sports Medicine Physicians and other First Aid folk, received a lot of media attention, and there were calls for increased vigilance and screening of athletes for heart disorders. Many years ago, one of my good friends from my kayaking days, Daniel Conradie, who apart from being a fantastic person won a number of paddling races, collapsed while paddling in the sea and died doing what he loved best, of an apparent heart attack. Remembering all of these incidents got me thinking of young folk who die during sporting events, and whether we clinical folk can prevent these deaths, or at least pick up potential risk factors before they do sport, which is known as athlete screening, or pre-screening of athlete populations, and which is still a controversial concept that is not uniformly practiced across countries and sports for a variety of reasons.

Screening as a general concept is defined as a strategy used in populations to identify the possible presence of an ‘as-yet-undiagnosed’ disorder in individuals who up to the point of screening have not presented with or reported either symptoms (what one ‘feels’ when one is ill) or signs (what one physically ‘presents with’ / what the clinician can physically see, feel or hear when one is ill). Most medicine is about managing patients who present with a certain disorder or symptom complex and who want to be cured or at least treated to retain an optimal state of functioning. Screening for potential disorders is, as described, a strategic method of pre-emptively diagnosing a potential illness or disorder, in order to treat it before it manifests in an overt manner, in the hope of reducing later morbidity (suffering as a result of an illness) and mortality (dying as a result of the illness) in those folk being screened. It is also enacted to reduce the cost and burden of clinical care which would result from illnesses not being picked up until it is too late to treat them conservatively with lifestyle-related or occupational changes, when costly medical interventions are needed which put a drain on the resources of the state or organizing body which considers the need for screening in the first place. Universal screening involves screening all folk in a certain selected category (such as general athlete screening), while case finding screening involves screening a smaller group of folk based on the presence of identified risk factors in them, such as when a sibling is diagnosed with cancer or a hereditary disorder.

For a screening program to be deemed necessary and effective, it has to fulfil what are known as Wilson’s screening criteria – the condition should be an important health problem, the natural history of the condition should be understood, there should be a recognisable latent or early symptomatic stage, there should be a test which is easy to perform and interpret and is reliable and sensitive (i.e. it should not have too many false positive or false negative results), the treatment of a condition diagnosed by the screening should be more effective if started early as a result of the screening-related diagnosis, there should be a policy on who should be treated if they are picked up by the screening program, and diagnosis and treatment should be cost-effective, amongst other criteria. Unfortunately, there are some ‘side-effects’ of screening programs. Overscreening is when screening occurs as a result of ‘defensive’ medicine (when clinicians screen patients simply to prevent themselves being sued in the future if they miss a diagnosis) or physician financial bias, where physicians who stand to make financial gain as a result of performing screening tests (sadly) advocate large population screening protocols in order to make a personal profit from them. Screening creep is when over time recommendations for screening are made for populations with less risk than in the past, until eventually the cost/benefit ratio of doing them becomes less than marginal, but they are continued for the same reasons as for overscreening. Diagnostic creep occurs when, over time, the requirements for making a diagnosis are lowered, with fewer symptoms and signs needed to classify someone as having an overt disease, or when folk are diagnosed as having a ‘pre-clinical’ or ‘subclinical’ disease. Patient demand is when patients themselves push for screening of a disease or disorder after hearing about it and being concerned about their own or their family’s welfare. All of these contribute to making the implementation of a particular screening program almost always a controversial process which requires careful consideration and an understanding of one’s own personal (often subconscious) biases when making decisions related to screening or not screening populations, either as a clinician, health manager or member of the public.

Regarding specifically athlete screening, there is still a lot of controversy regarding who should be screened, what they should be screened for, how they should be screened, and who should manage the screening process. Currently, to my knowledge, Italy is the only country in the world where there is a legal requirement for pre-screening of athlete populations and children before they start playing sport at school (including not just physical examination but also ECG-level heart function analysis). In the USA, American Heart Association guidelines (history, examination, blood pressure and auscultation – listening to the heart with a stethoscope – of heart sounds) are recommended, but practice differs between states. In the UK, athlete screening is not mandatory, and the choice is left up to different sporting bodies. In the Nordic countries, screening of elite athletes is mandated at the government level, but not of all athlete populations, as happens in Italy. There is ongoing debate about who should manage athlete screening in most countries, with some folk feeling it should be controlled at government level and legislated accordingly, other folk suggesting it should be controlled by professional medical bodies such as the American Heart Association in the USA or the European Society of Cardiology in Europe, while other folk believe it should be controlled by the individual sporting bodies which manage each different sporting discipline, or even separately by the individual teams or schools that want to protect both the athletes and themselves by doing so. Obviously who pays for the screening is a large factor in these debates, and it is perhaps because of this that there is no unanimity in policy across countries, clinical associations and sporting bodies, as described above.

The fact that there is no clear world-wide policy on athlete screening is on the one hand surprising, given the often emotional calls to enact it each time a young athlete dies, and also because the data from Italian studies have shown that the implementation of their all-population screening programs reduced the incidence of sudden death in athletes from around 3.5/100 000 to around 0.4/100 000 (for those interested, these data are described in a great study by Domenico Corrado and colleagues in the journal JAMA). But the data described also suggest that there is a relatively low mortality rate to start with – from the above figures, of 100 000 folk playing sport, only 3.5 died when playing sport before the implementation of screening, and a far higher number of folk die each day from a variety of other clinical disorders. The number of folk ‘saved’ is also very small in relation to the cost – a study by Amir Halkin and colleagues calculated that, based on cost-projections of the Italian study, a 20 year program of ECG testing of young competitive athletes similar to that conducted by the Italians would cost between 51 and 69 billion dollars and would save around 4800 lives, and the cost per life saved was therefore likely to range between 10 and 14 million dollars. While each life lost is an absolute tragedy both for that person and their family and friends, most lawmakers and government / governing bodies would surely think very carefully before enacting such expensive screening programs, with such unfavourable cost/benefit ratios, particularly given the high burden of other diseases that require their attention and funds on a continuous basis and which need to be managed in parallel with athlete deaths. So from this ‘pickup’ rate and cost/benefit ratio perspective one can see there is already reason for concern regarding the implementation of broad screening programs for athlete populations.
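For readers who like to see the arithmetic behind such figures laid out, the short calculation below (written in Python purely as an illustrative sketch, and not taken from either paper) shows how a cost-per-life-saved estimate of this kind can be derived; the only numbers used are those cited above from the Corrado and Halkin papers, and anything else is an assumption made for illustration.

# Back-of-envelope arithmetic for the athlete ECG screening figures quoted above.
# The cost range and lives-saved estimate are those attributed to the Halkin
# projection in the text; the incidence figures are those from the Corrado study.

incidence_before = 3.5 / 100_000   # sudden deaths per 100 000 athletes, pre-screening
incidence_after = 0.4 / 100_000    # sudden deaths per 100 000 athletes, with screening
programme_costs_usd = (51e9, 69e9) # quoted 20-year programme cost range, in dollars
lives_saved = 4800                 # quoted lives saved over the 20-year programme

# Deaths averted per 100 000 athletes, from the incidence figures
deaths_averted_per_100k = (incidence_before - incidence_after) * 100_000
print(f"Deaths averted per 100 000 athletes: {deaths_averted_per_100k:.1f}")

# Cost per life saved, at the lower and upper ends of the quoted cost range
for cost in programme_costs_usd:
    print(f"Cost per life saved: ${cost / lives_saved / 1e6:.1f} million")

Running this gives roughly 3.1 deaths averted per 100 000 athletes and a cost per life saved of around 10.6 to 14.4 million dollars, which is where the 10 to 14 million dollar range quoted above comes from.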

Of equal concern is the level of both false negative and false positive results associated with athlete screening. False negatives occur when tests fail to pick up underlying abnormalities or problems, and in the case of heart screening, if ECG evaluation is not included in the testing ‘battery’, a high rate of false negative results has often been described in athlete testing. Even ECGs are not ‘fail-proof’, and some folk advocate that heart-specific testing should include even more advanced investigations than an ECG can offer, including ultrasound and MRI-based heart examination techniques, but these are very expensive and even less cost effective than the measures described above. False positives occur when tests diagnose a disorder or disease in athletes that is not clinically relevant or indeed does not exist. In athletes this is a particular problem when screening for heart disorders, as routine exercise is known to increase heart size to cope with the increased blood flow requirements that are part of any athletic endeavour, a condition known as ‘athlete’s heart’. One of the major causes of sudden death is a heart disorder known as hypertrophic cardiomyopathy, in which the heart pathologically enlarges or dilates, and on most screening tests it is very difficult to tell the difference between athlete’s heart and hypertrophic cardiomyopathy, with some folk diagnosed as having the latter and prevented from doing sport when their heart is in fact ‘normally’ enlarged as a result of their sport participation rather than pathologically enlarged. A relevant study of elite athletes in Australia by Maria Brosnan and colleagues found that of 1197 athletes tested using ECG-level heart tests, 186 were found to have concerning ECG results (using updated ECG pathology criteria this number dropped to 48), but after more technically advanced testing of these concerning cases, only three athletes were found to have heart pathology that required them to stop their sport participation, which are astonishing figures from a potential false positive perspective. Such false-positive tests can result in potential loss of future sport-related earnings or other benefits of sport participation.
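To put those numbers in perspective, here is a small illustrative calculation (again in Python, a sketch using only the figures quoted above from the Brosnan study) that estimates what fraction of flagged athletes actually turned out to have disqualifying pathology, under both the older and the updated ECG criteria:

```python
# Illustration of the false-positive problem using the figures quoted above
# from the Brosnan et al. study of elite Australian athletes:
# 1197 athletes screened, 186 flagged on the older ECG criteria, 48 flagged
# on the updated criteria, and only 3 confirmed as having pathology that
# required them to stop sport.

def positive_predictive_value(true_positives: int, flagged: int) -> float:
    """Fraction of flagged athletes who actually had disqualifying pathology."""
    return true_positives / flagged

ppv_old = positive_predictive_value(3, 186)  # ~1.6% of flags were 'real'
ppv_new = positive_predictive_value(3, 48)   # ~6.3% with updated criteria
flag_rate_old = 186 / 1197                   # ~15.5% of athletes flagged initially

print(f"PPV, older criteria:   {ppv_old:.1%}")
print(f"PPV, updated criteria: {ppv_new:.1%}")
print(f"Athletes flagged under older criteria: {flag_rate_old:.1%}")
```

In other words, on the older criteria roughly one in six athletes was flagged, yet fewer than two in a hundred of those flagged actually had disqualifying pathology, which is why false positives loom so large in this debate.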

Beyond false-negative and false-positive tests, there are a number of other factors which ensure that mass athlete screening remains controversial. For example, Erik Solberg and colleagues reported that while the majority of athletes were happy to undergo ECG and other screening, 16% of football players were scared that pre-screening would have consequences for their own health, 13% were afraid of losing their licence to play football, and 3% experienced overt distress during pre-screening simply from undergoing the tests themselves. The issue of civil liberties versus state control therefore also needs to be considered in any debate about making athlete screening a ‘blanket’ requirement. While most athlete screening programs and debate focus on heart problems, there are a number of other non-cardiac causes of sudden death in athletes, such as exercise-induced anaphylaxis (an acute allergic response exacerbated by exercise participation), exercise-associated hyponatremia, exertional heat illness, intracranial aneurysms and a whole lot of other clinical disorders, and the debate is further complicated by the question of whether these ‘other’ disorders should be included in the screening process. Furthermore, most screening programs focus on young athletes, while a large number of older folk take up sport at a later age, often after a long period of sedentary behaviour, and these older ‘new’ or returning sport enthusiasts are surely at an even higher risk of heart-related morbidity or mortality during exercise, so one needs to consider whether screening should incorporate such folk too. However, whether there should be older-age-specific screening for a variety of clinical disorders is as hotly debated and controversial as young athlete screening, and adding screening of older folk for exercise-specific potential issues surely complicates the matter to an even greater degree, even if an argument can be made that it is needed.

In summary therefore, screening of athletes for clinical disorders that may harm or even kill them during their participation in sport is still a very controversial area of both legislation and practice. There is an emotional pain deep in the ‘gut’ each time one hears of someone dying in a race, and a feeling, as a clinician or simply as a person, that one should do more, or that more should be done to ‘protect them from themselves’, using screening as the tool to do so. But given the unfavourable cost/benefit ratio from both a financial and ‘pickup’ perspective, it is not clear whether a country-wide decision to conduct athlete screening would not be an example of both screening and diagnostic creep, or whether athlete screening satisfies Wilson’s criteria to any sufficient degree. If I were a government official, my answer to whether I would advocate country-wide screening would be no, based on the unfavourable cost/benefit ratio. If I were a member of a medical health association, I would answer yes to this same question, both from an ethical and a regulatory perspective, as long as my association did not have to foot the bill for it. If I were head of a sport governing body, I would say yes, to protect the governing body’s integrity and to protect the athletes I governed, as long as I did not have to foot the bill for it. If I were a clinical researcher, I would say no, as we do not know enough about the efficacy of athlete screening and the level of false-positive and false-negative results is too high. If I were a sports medicine doctor I would say yes, as this would be my daily job, and I would benefit financially from it. If I were an athlete, I would be ambivalent, saying yes from a self-protection perspective, but no from a job and income protection perspective. If I were the father of a young athlete, I would say yes, to be sure my child was safe and would not be harmed by playing sport, but I would also worry about the psychological and social consequences if he or she were prohibited from playing sport as a result of a positive heart or other clinical screening test. It is perhaps in these conflicting answers I give when casting myself in these different roles (and I am sure that each of you reading this article, answering for yourself, would give a similarly wide array of responses) that the controversy in athlete screening originates, and this is what will always make it contentious. I do think that if, as a newly qualified clinician back in our paddling days, I had tested my great friend Daniel Conradie’s heart function, found something worrying, and suggested he stop paddling because of it, he would probably have told me to ‘take a hike’ and continued paddling even with that knowledge. I am sure that as a young athlete I would have done the same if someone had told me they were worried about something in my health profile but were not one hundred percent sure it would have a negative consequence for my sporting activity and future life prospects. Athlete screening tests, and decisions related to them, will almost always be about chance and risk rather than certainty and conclusive determination of outcomes. To race or not to race, based on the chance of perhaps being harmed by racing, or even dying, given the outcome of a test that warns you but may be either false-positive or false-negative: that is the question. What would you do in such a situation, as an athlete, as a governing body official, or as a legislator?
That is something to ponder which does not seem to have an easy answer, no matter how tragic it is to see someone young die while doing what they love best.

