Homeostasis And The Constancy Principle – We Are All Creatures Of Comfort Even When We Go Out Of Our Comfort Zone

It is autumn in our part of the world, and the first chills are in the air in the late evening and early morning, and the family discussed last night the need to get our warm clothes out of storage in readiness for the approaching winter. After sharing in the fun of Easter Sunday yesterday and eating some chocolate eggs with the children, a persistent voice in my head this morning instructed me to eat less than normal today to 'make up' for that out-of-the-ordinary chocolate eating. It is a beautiful sunny day outside as I write this, and I feel a strong 'urge' to stop writing and go out on a long cycle ride because of it, and have to 'will' these thoughts away and continue writing, which is my routine activity at this time of the morning. After a recent health scare I have been checking my own physical parameters with more care than normal, and found it interesting, when checking what 'normal' values for healthy folk are, that most healthy folk have fairly similar values for things like blood glucose, blood pressure and cholesterol concentrations, that there are fairly tight ranges for each of these which are considered normal and a sign of 'health', and that if one's values fall outside of these ranges, it is a sign of something wrong in the working of the body that needs to be treated and brought back into the normal range by lifestyle changes, medication, or surgical procedures. All of this got me thinking about the regulatory processes that ensure the body maintains its working 'parts' in a similar range in all folk, and about the concept of homeostasis, the regulatory principle which explains and underpins the maintenance of this 'safe zone' for our body's multiple activities: it involves sensing any external or internal change which could push one of the regulated variables out of the 'safe zone', and initiating behavioural or physiological responses which attempt, either pre-emptively or reactively, to bring the variable at risk back within it.

Homeostasis is defined scientifically as the tendency towards a relatively stable equilibrium between inter-dependent elements. The word was generated from the Greek concepts of 'homoios' (similar) and 'stasis' (standing still), creating the concept of 'staying the same'. Put simply, homeostasis is the property of a system whereby it attempts to maintain itself in a stable, constant condition, and resists any changes or actions on the system which may change or destabilize that stable state. Its origins as a concept lie with the ancient Greeks, with Empedocles in the fifth century BC suggesting that all matter consisted of elements which were in 'dynamic opposition' or 'alliance' with each other, and that balance or 'harmony' of all these elements was necessary for the survival of the individual or organism. Around the same time, Hippocrates suggested that health was the result of the 'harmonious' balance of the body's elements, and illness due to 'disharmony' of the elements of which it was made up. Modern development of this concept was initiated by Claude Bernard in the 1870's, who suggested that the stability of the body's internal environment was 'necessary for a free and independent life' and that 'external variations are at every instant compensated for and brought into balance', and Walter Cannon in the 1920's first formally called this concept of 'staying the same' homeostasis. Claude Bernard actually initially used the word 'constancy' rather than homeostasis to describe the concept, and interestingly, a lot of Sigmund Freud's basic work on human psychology was based on the need for 'constancy' (though he did not cross-reference this more physiological / physical work and its concepts), and on the idea that everyone's basic need was for psychological constancy or 'peace', that when one had an 'itch to scratch' one would do anything possible to remove the 'itch' (whether the object of desire be a new partner, a better house, an improved social status, or greater social dominance, amongst other potentially unfulfilled desires), and further that one's 'muscles are the conduit through which the ego imposes its will upon the world'. He and other psychologists of his era suggested that if an 'itch', urge or desire was not assuaged (and what causes these urges, whether a feeling of inadequacy, previous trauma, or a desire for 'wholeness', is still controversial and not clearly elucidated even today), the individual would remain out of their required 'zone of constancy', and would feel negative emotions such as anxiety, irritation or anger until the urge or desire was relieved. If it was not relieved for a prolonged period, this unfulfilled 'itch' could lead to the development of a complex, projection or psychological breakdown (such as depression, mania, anxiety, personality disorder or frank psychosis). Therefore, as much as there are physical homeostasis-related requirements, there are potentially also similar psychological homeostasis-related requirements which are being reacted to by the brain and body on a continuous basis.

Any system operating using homeostatic principles (and all our body systems do so) has setpoint levels for whatever substance or process is being regulated, and boundary conditions which are rigidly maintained and cannot be exceeded without a response occurring which attempts to bring the substance or process back to its predetermined setpoint level or within its boundary conditions. The reasons for having these set boundary conditions are protective: if they were exceeded, the expectation is that the system would be damaged, whether because the substance or process being regulated (for example, oxygen, glucose, sodium, temperature, cholesterol, or blood pressure, amongst a whole host of others) was used up too quickly or worked too hard, or because it was allowed to build up to toxic levels or was not used enough to produce life-supporting substrates or useable fuels, any of which would endanger the life and the potential for continued activity of the system being monitored. For example, oxygen shortage results in death fairly quickly, as would glucose shortage, while glucose excess (as occurs in diabetes) can also result in cellular and organ damage, and ultimately death if it is not controlled properly. In order for any system to maintain the substance or process within homeostasis-related acceptable limits, three regulatory factors, which together form what is known as a negative feedback loop, are required. The first is a sensory apparatus that can detect either changes in whatever substance or process is being monitored, or changes in the internal or external environment or in other systems which interact with or impact on it. The second is a control structure or process which is sent the information from the sensory apparatus, and which is able to decide whether to respond to the information or to ignore it as not relevant. The third is an 'effector' mechanism or process which receives commands from the control structure once it has decided to initiate a response to the sensed perturbation, and which makes the changes decided upon by the control structure in order to maintain or return the perturbed system to its setpoint value range.
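To make these three components concrete, here is a minimal illustrative sketch of such a negative feedback loop, using blood glucose as the regulated variable. The setpoint, 'safe zone' boundaries, function names and the simple proportional correction are all assumptions chosen for demonstration, not physiological values or mechanisms.

```python
# A minimal sketch of a three-component negative feedback loop: a sensor, a
# controller holding a setpoint and boundary conditions, and an effector.
# Blood glucose is used as the example variable; all numbers are illustrative.

SETPOINT = 5.0           # target value for the regulated variable (illustrative)
SAFE_RANGE = (4.0, 7.0)  # boundary conditions that trigger a response

def sensor(current_value):
    """Detect the current state of the regulated variable."""
    return current_value

def controller(measured):
    """Decide whether the sensed value warrants a response or can be ignored."""
    low, high = SAFE_RANGE
    if low <= measured <= high:
        return 0.0                  # within the 'safe zone': no action needed
    return SETPOINT - measured      # outside it: signed correction required

def effector(current_value, command, gain=0.5):
    """Apply the commanded correction (standing in for insulin or glucagon release)."""
    return current_value + gain * command

# A perturbed value is pulled back towards the setpoint over successive loops
glucose = 8.5
for step in range(5):
    command = controller(sensor(glucose))
    glucose = effector(glucose, command)
    print(f"step {step}: glucose = {glucose:.2f}")
```

Run in this way, values inside the boundary conditions are left alone, while values outside them are corrected back towards the setpoint, which is the essence of the negative feedback arrangement described above.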

The example of temperature regulation demonstrates both the complexity and beauty of homeostasis in regulating activity and protecting us on a continuous basis from harm. Physiological systems in most species of animals are particularly sensitive to changes in temperature and operate best in a relatively narrow range of temperatures, although in some species a wider range is tolerated. There are two broad mechanisms used by different organisms to control their internal temperature, namely ectothermic and endothermic regulation. Ectothermic temperature regulators (also known as 'cold-blooded' species), such as the frog, snake, and lizard, do not use many internal body processes to maintain temperature in the range which is acceptable for their survival, but rather use external, environmental heat sources to regulate their body temperature. If the temperature is colder, they will use the sun to heat themselves up, and if warm, they will look for shadier conditions. Ectotherms therefore have energy-efficient mechanisms of maintaining temperature homeostasis, but are more susceptible to vagaries in environmental conditions compared to endotherms. In contrast, endotherms (also known as 'warm-blooded' species), into which classification humans fall, use internal body activity and functions to either generate heat in cold environments or reduce heat in warm conditions. In endotherms, if the external environment is too cold, and if the cold environment impacts on body temperature, temperature receptors measuring either surface skin temperature or core body temperature will send signals to the brain, which subsequently initiates a shiver response in the muscles, which increases metabolic rate and provides greater body warmth as a by-product of fuel / energy breakdown and use. If the environmental temperature is too warm, or if skin or core temperature is too high, receptors will send signals to brain areas which initiate a chain of events involving different nerve and blood-related control processes, resulting in increased blood flow to the skin by vasodilatation, which increases the capacity to offload heat from the blood, and in an increased sweat rate from the skin, which produces cooling by water evaporation. All these endotherm-associated heating and cooling processes utilize a large amount of energy, so from an energy perspective they are not as efficient as those of ectotherms, but they do allow a greater independence from environmental fluctuations in temperature. It must be noted that endotherms also use similar behavioural techniques to ectotherms, such as moving into shady or cool environments if excessively hot, but as described above, can tolerate a greater range of environmental temperatures and conditions. Furthermore, humans are capable of 'high level' behavioural changes such as putting on or taking off clothes, in either a reactive or anticipatory way. It is evident therefore that for each variable being homeostatically monitored and managed (on a continuous basis) there is a complex array of responsive (and 'higher-level' pre-emptive) options available with which to counteract the potential or actual 'movement' of the variable beyond its 'allowed' metabolic setpoints and ranges.
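As a toy illustration of the endothermic version of this loop, the sketch below (again using purely assumed thresholds and wording rather than physiological constants) maps a sensed core temperature onto the responses described above: shivering when too cold, vasodilatation and sweating when too hot, and an optional anticipatory behavioural response such as putting on warm clothing.

```python
# A toy model of endothermic thermoregulation: reactive physiological
# responses plus an anticipatory behavioural one. Thresholds and effect
# descriptions are illustrative assumptions only.

CORE_SETPOINT = 37.0   # defended core temperature, degrees C (illustrative)
TOLERANCE = 0.5        # allowed deviation before a response is initiated

def regulate(core_temp, cold_forecast=False):
    """Return the responses recruited for a given sensed core temperature."""
    responses = []
    if cold_forecast:
        responses.append("behavioural (anticipatory): put on warm clothing")
    if core_temp < CORE_SETPOINT - TOLERANCE:
        responses.append("physiological: shiver to raise metabolic heat production")
    elif core_temp > CORE_SETPOINT + TOLERANCE:
        responses.append("physiological: vasodilate skin vessels and sweat to lose heat")
    else:
        responses.append("no response: temperature within the defended range")
    return responses

print(regulate(36.2))                      # too cold -> shivering
print(regulate(38.1))                      # too hot -> vasodilatation and sweating
print(regulate(37.0, cold_forecast=True))  # anticipatory behavioural response
```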

There are a number of questions still to be answered regarding how homeostasis 'works' and how 'decisions' related to homeostasis occur. It is not clear how the regulatory mechanisms 'choose' which variable to defend as a priority. Brain oxygen would surely be the most important variable to 'defend', as would perhaps blood glucose levels, but how decisions are made and responses initiated for these variables preferentially, which may impact negatively on other systems with their own homeostatic requirements, is not clear. Furthermore, there is the capacity for 'conflict' between physical and psychological homeostatic mechanisms when homeostatic-related decisions are required to be made. For example, one's ego may require one to run a marathon to fulfill a need to 'show' one's peers that one is 'tough' by completing such a challenging goal, but doing so creates major stress on the physical body. Indeed, some folk push themselves so hard during marathons that they collapse, even if they 'feel' warning signs of impending collapse, or of an impending heart attack, and choose to keep running despite these symptoms. For these folk, the psychological need to complete the event must be greater than the physical need to protect themselves from harm, and their regulatory decision-making processes clearly weight psychological homeostasis as being of greater importance than physiological homeostasis when deciding to continue exercising in the presence of such warning symptoms. However, running marathons, while increasing the risk of a catastrophic physical event during the race itself, has positive physical benefits if done on a repetitive basis, such as weight loss and increased metabolic efficiency of the heart, lungs, muscles and other organ structures, along with the enhanced psychological well-being derived from achieving the set athletic performance-related goals. Therefore, 'decision-making' on an issue such as running a marathon is complex from a homeostasis perspective, with both short and long term potential benefits and harmful consequences. How these contradictory requirements and factors are 'decided upon' by the brain when attempting to maintain both psychological and physical homeostasis is still not clear.

A further challenge to homeostatic regulation is evident in the examples of a fever, where a high temperature may paradoxically be beneficial, and of the period after a heart attack, where an altered heart rate and blood pressure setpoint may be part of compensatory mechanisms to ensure the optimal function of a failing heart. While these altered values are potentially 'outside' of the 'healthy' setpoint level range, they may have utilitarian value and would be metabolically appropriate in relation to either a fever or a failing heart. How the regulatory homeostatic control mechanisms 'know' that these altered metabolic setpoints are beneficial rather than harmful, and 'accept' them as temporary or permanent new setpoints, or whether these altered values are associated with routine homeostatic corrective responses which are part of the body's ongoing attempt to induce healing in the presence of fever or heart failure (amongst other homeostatically paradoxical examples), is still not clear. Whether homeostasis as a principle extends beyond merely controlling our body's activity and behaviour, to more general societal or environmental control, is also still controversial. For example, James Lovelock, with his Gaia hypothesis, has suggested that the world in its entirety is regulated by homeostatic principles, with global temperature increases resulting in compensatory changes on the earth and in the atmosphere that lead to eventual cooling of the earth, this warming and cooling continuing in a cyclical manner – and most folk who believe in global warming as a contemporary unique catastrophic event don't like this theory, even if it is difficult to support or refute without measuring temperature changes accurately over millennia.

Homeostatic control mechanisms can fail, and indeed our deaths are sometimes suggested to be the result of a failure of homeostasis. For example, cancer cells overwhelm cellular homeostatic protective mechanisms, or develop rapidly due to uncontrolled proliferation of abnormal cells which are not inhibited by the regular cellular homeostatic negative feedback control mechanisms, which leads to physical damage to the body and ultimately to our death, for these or other reasons that we are still not aware of. In contrast, Sigmund Freud, in his always contrary view of life, suggested as part of his Thanatos theory that death is the ultimate form of 'rest' and is our 'baseline' constancy-related resting state which we 'go back to' when dying (with suicide being a direct 'mechanism' of reaching this state in those whose psyches are operating too far away from their psychological setpoints, whatever these are), although again this is a difficult theory to either prove or disprove. Finally, what is challenging to a lot of folk about homeostasis from a control / regulatory perspective is that it is a conceptual 'entity' rather than a physical process that one can 'show' to be 'real', much like Plato's Universals (to Plato the physical cow itself was less relevant than the 'concept' of a cow, and he suggested that one can only have 'mere opinions' of the former, while one has absolute knowledge of the latter, given that the physical cow changes as it grows, ages, and dies, while the 'concept' of a cow is immutable and eternal). It is always difficult scientifically to provide categorical evidence which either refutes or supports concepts such as universals and non-physical general control theories, even if they are concepts which appear to underpin all life as we know it, and without whose function we could not exist in our current physical form and living environment.

As I look out the window at the falling autumn leaves and wonder whether we will have a very cold winter this year and whether we have prepared adequately for it clothes-wise (pre-emptive long-term homeostatic planning at its best, even if perhaps a bit ‘over-the-top’), while taking off my jersey as I write this given that the temperature has increased as the day has changed from morning to afternoon (surely a reactive homeostatic response), and as I ponder my health-related parameters, and work out how I am going to get those that need improvement as close to ‘normal’ as possible (surely as part of behavioural homeostatic / health-optimization planning), I look forward to that bike ride now I have managed to delay gratification of doing so until I have completed writing this (and feel a sense of well-being both from doing so and by realizing I am now ‘free’ to go on the ride and by doing so can remove the psychological ‘itch’ that makes me want to do it and therefore return to a state of psychological ‘constancy’ / homeostasis). Contemplating all of these, it is astonishing to think that all of what I, and pretty much all folk, do is underpinned by a desire to be, and maintain life, in a ‘comfort zone’ which feels right for me, and which is best for my bodily functions and psychological state. Given that all folk in the world have similar physical parameters when we measure them clinically, it is likely that our ‘comfort zones’ both physically and psychologically are not that different in the end. Perhaps the relative weighting which each of us assigns to our psychological or physical ‘needs’ create minor differences between us (and occasionally major differences such as in folk with psychopathology or with those who have significant lifestyle related physical disorders), though at the ‘heart of it all’, both psychologically and physically, is surely the over-arching principle of homeostasis. While on the bike this afternoon, I’ll ponder on the big questions related to homeostasis which still need to be answered, such as how homeostasis-related decisions are made, how the same principle can regulate not just our body, but also our behaviour, and perhaps that of societal and even planetary function, and how ‘universals’ originated and which came first, the physical entity or the universal. Sadly I think it will need a very long ride to solve these unanswered questions, and remove the ‘itch that needs scratching’ which arises from thinking of these concepts as a scientist who wants to solve them – and I don’t like to spend too long out of my comfort zone, which is multi-factorial and not purely bike-focused, but rather is part bike, part desk, part comfy chair, the latter of which will surely become more attractive after a few hours of cycling, and will ‘call me home’ to my next ‘comfort zone’, probably long before I can solve any of these complex issues while out on the ride watching the autumn leaves fall under a beautiful warm blue sky, with my winter cycling jacket unused but packed in my bike’s carrier bag in case of a change in the weather.


Contemporary Medical Training And Societal Medical Requirements – How Does One Balance The Manifest Need for General Practitioners With Modern Super-Specialist Trends

For all of my career, since starting as a medical student at the University of Cape Town as an 18-year-old fresh out of school many years ago, I have been involved in the medical and health provision and training world, and have had a wonderful career first as a clinician, then as a research scientist, and then, in the last number of years, managing and leading health science and medical school research and training. Because of this background and career, I have always pondered long and hard about what makes a good clinician, what is the best training to make a good clinician, how we define what a 'good' clinician is, and how we best align the skills of the clinicians we train with the needs and requirements of the social and health environments of the countries in which they are trained. A few weeks ago I had a health scare which was treated rapidly and successfully by a super-specialist cardiologist, and I was home the day after the intervention, and 'hale and hearty' a few days after the procedure. If I had lived 50 years ago, and it had happened then, in the absence of modern high-tech equipment and super-specialist skills, I would probably have died a slow and uncomfortable death, treated with drugs of doubtful efficacy that would not have benefited me much, let alone treated the condition I was suffering from. Conversely, despite my great respect for these super-specialist skills which helped me so successfully a few weeks ago, it has become increasingly obvious that this great success in clinical specialist training has come at the cost of reduced emphasis on general practitioner-focused training, and of a reduction in the number of medical students choosing general practitioner work as a career after they qualify, which has caused problems for clinical service delivery in a number of countries, particularly in rural areas, and has paradoxically put greater strain on specialist services despite their pre-eminence in contemporary clinical practice in most countries around the world. My own experience grappling with this problem of how to increase the number of general practitioners produced by our training programs, as a former Head of a School of Medicine, together with this recent health scare which was treated so successfully by super-specialist intervention, got me thinking about how best we can manage the contradictory requirements of the need for both general practitioners and specialists in contemporary society, and whether this conundrum is best managed by medical schools, health and hospital management boards, or government-led strategic development planning initiatives.

It is perhaps not surprising, given the exponential development of technological innovations that originated in the industrial revolution and which changed how we live, that medical work also changed and became more technologically focused, which in turn required both increased time and increased specialization of clinical training to utilize these developing technologies, such as surgical, radiological investigative and laboratory-based diagnostic techniques. The hospital (Groote Schuur) and medical school (University of Cape Town) where I was trained were famous for the achievements of Professor Chris Barnard and his team's work performing the first heart transplant there, using a host of advanced surgical techniques, heart-lung machines to keep the patient alive without a heart for a brief period of time, and state-of-the-art immunosuppression techniques to prevent rejection of the transplanted heart, all specialist techniques he and his team took many years to master in some great medical schools and hospitals in the USA. Perhaps in part because of this, our training was very 'high-tech', consisting of early years spent learning basic anatomy, physiology and pathology-based science, and then later years spent in surgical, medical, and other clinical specialty wards, mostly watching and learning from observation of clinical specialists going about their business treating patients. If I remember it correctly, there were only a few weeks of community-based clinical educational learning, very little integrative 'holistic' patient-based learning, and almost no 'soft-skill' training, such as optimal communication with patients, working as part of a team with other health care workers such as nurses and physiotherapists, or learning to help patients in their daily home environment and social infrastructure. There was also almost no training whatsoever in the benefits of 'exercise as medicine', or in the concept of wellness (where one focuses on keeping folk healthy before they get ill, rather than dealing with the consequences of illness). This type of 'specialist-focused' training was common, particularly in Western countries, for most of the last fifty or so years, and as a typical product of this specialist training system I chose first clinical research and then basic research, rather than more patient-focused work, as my career path, and a number of my colleagues from my University of Cape Town medical training class of 1990 have had superb careers as super-specialists in top clinical institutions and hospitals all around the world.

This increasing specialization of clinical training and practice, such as the example of my own medical training described above, has unfortunately had a negative impact on both general practitioner numbers and primary care capacity. A general practitioner (GP) is defined as a medical doctor who treats acute and chronic illnesses and provides preventative care and health education to patients, and who has a holistic approach to clinical practice that takes biological, social and psychological factors into consideration when treating patients. Primary care is defined as the day-to-day healthcare of patients and communities, with the primary care providers (GP's, nurses, health associates or social workers, amongst others) usually being the first contact point for patients, referring patients on to specialist care (in secondary or tertiary care hospitals), and coordinating and managing the long term treatment of patient health after discharge from either secondary or tertiary care if it is needed. In the 'old days', GP's used to work in their community, often where they were born and raised, worked 24 hours a day as needed, and maintained their relationship with their patients through most or all of their lives. Unfortunately, for a variety of reasons, GP work has changed: GP's now often work set hours, patients are rotated through different GP's in a practice, the number of graduating doctors choosing to be GP's is diminishing, and there is an increasing shortage of GP's in communities, and particularly in rural areas, of most countries as a result. Sadly, GP work is often regarded as being of lower prestige than specialist work, the pay for GP's has often been lower than that of specialists, and with the decreased absolute number of GP's, the work burden on many GP's has increased (and paradoxically, with computers and electronic facilities, the note-taking and record-keeping requirements of GP's appear to have increased rather than decreased), leading to increased levels of burnout and to GP's choosing to turn to other clinical roles or to leave the medical profession completely, which exacerbates the GP shortage problem in a circular manner. Training of GP's has also evolved into specialty-type training, with doctors having to spend 3-5 years 'specializing' as a GP (often today called Family Practitioners or Community Health Doctors), and this has also paradoxically put some folk off a GP career, and lengthens the time required before folk intent on becoming GP's can do so and become board certified / capable of entering or starting a clinical GP practice. As the number of GP's decreases, more folk go directly to hospital casualty departments as their first 'port of call' when ill, and this puts a greater burden on hospitals, which somewhat ironically also creates an increased burden on specialists, who mostly work in such hospitals, and who end up seeing more of these folk who could often be treated very capably by GP's. This paradoxically allows specialists less time to do the specialist and super-specialist roles they spent so many years training for, with the result that waiting lists and times for 'cold' (non-emergency) cases increase, and hospital patient care suffers due to patient volume overload.

At a number of levels of strategic management of medical training and physician supply planning, there have been moves to counter this super-specialist focus of training and to encourage folk to consider GP training as an appealing career option. The Royal College of Physicians and Surgeons of Canada produced a strategic clinical training document (known as the 'CanMeds' training charter) which emphasizes that rather than just training pure clinical skills, contemporary training of clinical doctors should aim to create graduates who are, all at once, medical experts, communicators, collaborators, managers, health advocates, scholars and professionals – in other words, a far more 'gestalt' and 'holistically' trained medical graduate. This CanMeds document has created 'waves' in the medical training community, and is now used by many medical schools around the world as their training 'template'. Timothy Smith, senior staff writer for the American Medical Association, published an interesting article recently where he suggested that similar changes were occurring in the top medical schools in the USA, with clinical training including earlier exposure to patient care, more focus on health systems and sciences (including wellness and 'exercise is medicine' programs), shorter time to training completion, and increased emphasis on using new communication technologies more effectively as part of training. In my last role as Head of the School of Medicine at the University of the Free State, working with Faculty Dean Professor Gert Van Zyl, Medical Program Director Dr Lynette Van Der Merwe, Head of Family Medicine Professor Nathanial Mofolo, Professor Hanneke Brits, Dr Dirk Hagemeister, and a host of other great clinicians and administrators working at the University or the Free State Department of Health, the focus of the training program was shifted to include a greater degree of community-based education as a 'spine' of training rather than as a two-week block in isolation, along with a greater degree of inter-professional education (working with nurses, physiotherapists, and other allied health workers in teams as part of training to learn to treat a patient in their 'entirety' rather than as just a single clinical 'problem'), and increased training of 'soft skills' that would assist medical graduates not only with optimal long term patient care, but also with skills such as financial and business management capacity so that they would be able to run practices optimally, or at least know when to call in experts to assist them with non-clinical work requirements, amongst a host of other innovative changes. We, like many other Universities, also realized that it was important to try and recruit medical students from the local communities in which they grew up around the medical school, and to encourage as many of these locally based students as possible to apply for medical training, though of course selection of medical students is always a 'hornet's nest', and it is very challenging to get it right, balancing the marks, essential skills and community needs of the many thousands of aspirant clinicians who wish to do medicine when so few places are available to offer them.

All of these medical training initiatives to try and change what has become a potentially 'skewed' training system, as described above, are of course 'straw in the wind' without government backing and good strategic planning and communication by country-wide health boards, medical professional councils, and hospital administrators who manage staffing appointments and recruitment. As much as one needs to change the 'focus' and skills of medical graduates, the health structures of a country need to be similarly changed to be 'focused' on community needs and requirements, and aligned with the medical training program initiatives, for the changes to be beneficial and to succeed. Such training program changes and community-based intervention initiatives have substantial associated costs which need to be funded, and therefore there is a large political component to both clinical training and health provision. In order to strategically improve the status quo, governments can choose either to encourage existing medical schools to increase student numbers and encourage statutory clinical training bodies to enact changes to the required medical curriculum to make it more GP focused, or to build more medical schools to generate a greater number of potential GP's. They can also pay GP's higher salaries, particularly if they work in rural communities, or ensure better conditions of service and increased numbers of allied health practitioners and health assistants to lighten the stress placed on GP's, in order to ensure that optimal community clinical facilities and health care provision are in place. But how this is enacted is always challenging, given that different political parties usually have different visions and strategies for health, and changes occur each time a new political party is elected, which often 'hinders' rather than 'enacts' required health-related legislation, or, as in the case of contemporary USA politics, attempts to rescind previous change-related healthcare acts if they were enacted by an opposition political party. There is also competition between Universities which have medical schools for increases in medical places in their programs (which results in more funding flowing into the Universities if they take more students), and of course any University that wishes to open a new medical school (as my current employer, the University of Waikato, wishes to do, having developed an exciting new community-focused medical school strategic plan that fulfills all the criteria of what a contemporary GP-focused training program should be, and that will surely become an exemplary new medical school if the plan is approved by the government) is regarded as competition for resources by those Universities which already run medical training programs and medical schools. Because of these competition-related and political issues, many major health-related change initiatives, for both medical training programs and the related community and state structural requirements, are extremely challenging to enact, and this is why so many planned changes become 'bogged down' by factional lobbying either before they start or while they are being enacted. This is often disastrous for health provision and training, as chaos ensues when a 'half-changed' system becomes 'stuck', or when a new political regime or health authority attempts to impose further, often 'half-baked', changes on the already 'half-changed' system, which results in an almost unmanageable 'mess', which is sadly often the state of many countries' medical training, physician supply, and health facilities, to the detriment of both the patients and the communities which they are meant to serve and support.

The way forward for clinical medical training and physician supply is therefore complex and fraught with challenges. But, having said this, it is clear that changes are needed, and brave folk with visionary thinking and strategic planning capacity are required both to create sound plans that integrate, across multiple sectors, all the changes needed for the medical training reforms to occur, and to enact them in the presence of opposition and resistance, which is always present in the highly politicized world of health and medical training. Two good examples of success stories in this field were the changes to the USA health and medical training system which occurred as a result of the Flexner report of 1910, which set out guidelines for medical training throughout the USA and which were actually enacted and came to fruition, and the development of the NHS system in the UK in the late 1940's, which occurred as a result of the Beveridge report of 1942, which laid out how and why comprehensive, universal and free medical services were required in the UK, and how these were to be created and managed, recommendations that were enacted by Clement Attlee, Aneurin Bevan and other members of the Labour government of that time. Both systems worked for a time, but sadly, for multiple reasons and perhaps through natural system entropy, both of these countries' health services are currently in a state of relative 'disrepair', and it is obvious that major changes to them are again needed, and perhaps an entirely fresh approach to healthcare provision and training, similar to that initiated by the Flexner and Beveridge reports, is required. However, it is challenging to see this happening in contemporary times, with the polarized political climate currently prevailing in both countries, and strong and brave health leadership is surely required at this point in time in these countries, as always, in order to initiate the substantial strategic changes which are required to either 'fix' each system or create an entirely new model of health provision and training. Each country in the world has different health provision models and medical training systems, which work with varying degrees of success. Cuba is an example of one country that has enacted wholesale GP training and community medicine as the centerpiece of both its training and health provision, though some folk would argue that it has gone too far in this regard, as specialist provision and access is almost non-existent there. Therein lies an important 'rub' – clearly there is a need for more GP and community focused medical training. But equally, it is surely important that there is still a strong 'flow' of specialists and super-specialists, both to train the GP's in the specific skills of each different discipline of medicine, and to treat those diseases and disorders which require specialist-level technical skills. My own recent health scare exemplifies the 'yin and yang' of these conflicting but mutually beneficial / synergistic requirements. If it were not for the presence of a super-specialist with exceptional technical skills, I might not be alive today. Equally, the first person I phoned when I noted concerning symptoms was not a super-specialist, but rather my old friend and highly skilled GP colleague from my medical training days, Dr Chris Douie, who lives close by to us and who responded to my request for assistance immediately.
Chris got the diagnosis spot on, recommended the exact appropriate intervention, and sent me on to the required super-specialist, and was there for me not just to give me a clinical diagnosis but also to provide pastoral care – in other words ‘hold my hand’ and show me the empathy that is so needed by any person when they have an unexpected medical crisis. In short, Chris was brilliant in everything he did as first ‘port of call’, and while I eventually required super-specialist treatment of the actual condition, in his role as GP (and friend) he provided that vital first phase support and diagnosis, and non-clinical empathic support, which is so needed by folk when they are ill (indeed historically the local GP was not just everyone’s doctor but also often their friend). My own example therefore emphasizes this dual requirement for both GP and specialist health provision and capacity.

Like most things, medical training and health care provision have, like a pendulum, 'swung' between specialist and generalist requirements and pressures over the last century. The contemporary perception, in an almost 'back to the future' way, is that we have perhaps become too focused on high technology clinical skills and training (though as above there will always be a place and need for these), and that we need more of our doctors to be trained to be like their predecessors of many years ago, working out in the community, caring for their patients and creating an enduring life-long relationship with them, and dealing with their problems early and effectively before they become life-threatening, costly to treat, and require the intervention of expensive specialist care. It's an exciting period of potential world-wide changes in medical training and clinical health provision to communities, and a great time to be involved in either developing the strategy for medical training and health provision and / or enacting it – if the folk involved in doing so are left in peace by the lobby groups, politicians and folk who want to maintain the current unbalanced status quo due to their own self-serving interests. Who knows, maybe even clinicians, like in the old days, will be paid again by their patients with a chicken, or a loaf of freshly baked bread, and goodwill will again be the bond between communities, the folk who live in them, and the doctors and healthcare workers that treat them. And for my old GP friend Chris Douie, who is surely the absolute positive example and role model of the type of doctor we need to be training, a chicken will be heading his way soon from me, in lieu of payment for potentially saving my life, and for doing so in such a kind and empathetic way, as surely any GP worth his or her 'salt' would and should do!


Muscle Dysmorphia And The Adonis Complex – Mirror, Mirror On The Wall, Why Am I Not The Biggest Of Them All

I have noticed recently that my wonderful son Luke, who is in his pre-teenage years, has become more 'aware' of his body and discusses things like 'six-pack abs' and the need to be strong and have big muscles, probably like most boys of his age. I remember an old colleague at the University of the Free State mentioning to me that her son, who was starting his last year at school, and who was a naturally good sports-person, had started supplementing his sport with gym work as he perceived that 'all boys his age were interested in having big muscles', as my colleague described it. A few decades ago, my old colleague and friend Mike Lambert, exercise physiologist and scientist without peer, and I did some work researching the effect of anabolic steroid use on bodybuilders, and noted that there were not just physical but also psychological changes in some of the trial participants. I spent a fair amount of time in the gym in my University days, and always wondered why some of the biggest folk in the gym seemed to do their workouts in long pants and tracksuit tops, sometimes with hoods up, even on hot days, and how in conversation with them I was often told that despite them being enormous (muscular rather than obese), they felt that they were small compared to their fellow bodybuilders and weightlifters, and that they needed to work harder and longer in the gym than they were currently doing to get results. All of this got me thinking about the fascinating syndrome known as muscle dysmorphia, also known as the Adonis complex, 'bigorexia', or 'reverse anorexia', and about what causes the syndrome / disorder in the folk that develop it.

Muscle dysmorphia is a disorder mostly affecting males (though females can also be affected) in which there is a belief or delusion that one's body is too small, thin, insufficiently muscular or lean, despite it often being normal or even exceptionally large and muscular, a belief which is associated with obsessional efforts to increase muscularity and muscle mass through weightlifting exercise routines, dietary regimens and supplements, and often anabolic steroid use. This perception of not being muscular enough becomes severely distressing for the folk suffering from the syndrome, and the desire to enhance their muscularity eventually impacts negatively on the sufferer's daily life, work and social interactions. The symptoms usually begin in early adulthood, and are most prevalent in body-builders, weight-lifters, and strength-based sports participants (up to 50 percent in some bodybuilder population studies, for example). Worryingly, muscle dysmorphia is increasingly being diagnosed in younger / adolescent folks, and across the spectrum of sports participants, and even in young folk who begin lifting weights for aesthetic rather than sport-specific purposes, and who from the start perceive they need to go to the gym to improve their 'body beautiful'. Two old academic friends of mine, Dave Tod and David Lavallee, published an excellent article on muscle dysmorphia a few years ago, where they suggested that the diagnostic criteria for the disorder are that the sufferer needs to be pre-occupied with the notion that their body is insufficiently lean and muscular, and that the preoccupation needs to cause distress or impairment in social or occupational function, including at least two of the four following criteria: 1) they give up / excuse themselves from social, occupational or recreational activities because of the need to maintain workout and diet schedules; 2) they avoid situations where their bodies may be exposed to others, or 'endure' such situations with distress or anxiety; 3) their concerns about their body cause distress or impairment in social, occupational or other areas of their daily functioning; and 4) they continue to exercise and monitor their diet excessively, or use physique-enhancing supplements or drugs such as anabolic steroids, despite knowledge of potential adverse physical or psychological consequences of these activities. Folk with muscle dysmorphia spend a lot of their time agonizing over their 'situation', even if it is in their mind rather than reality, look at their physiques in the mirror often, and are always of the feeling that they are smaller or weaker than they really are, so there is clearly some cognitive dissonance / body image problem occurring in them.

What causes muscle dysmorphia is still not completely known, but what is telling is that it was first observed as a disorder in the late 1980's and early 1990's, and was first defined as such by Harrison Pope, Katharine Phillips, Roberto Olivardia and colleagues in a seminal publication of their work on it in 1997. There are no known reports of this disorder from earlier times, and as suggested by these academics, its increasing prevalence appears to be related to a growing social obsession with 'maleness' and muscularity that is evident in the media and in marketing adverts of and for the 'ideal' male over the last few decades. While women have had relentless pressure on them, from a media and marketing perspective, from the concept of increasing 'thinness' as the 'ideal body' for perhaps a century or longer, with for example the body size of female models and advertised clothes sizes decreasing over the years (and it has been suggested that this is in part responsible for the increase in the prevalence of anorexia nervosa in females), it appears that males are now under the same marketing / media 'spotlight', but from a muscularity rather than a 'thinness' perspective, with magazines, newspapers and social media often 'punting' this muscular 'body ideal' for males when selling male-targeted health and beauty products. Some interesting changes have occurred which appear to support this concept, for example the physique of GI-Joe toys for young boys changing completely in the last few decades, the figures apparently being much more muscular in the last decade or two compared to their 1970's prototypes. Matching this change, in 1972 only 15-20 percent of young men disliked their body image, while in 2000 approximately 50 percent of young men did so. Contemporary young men (though older men may also be becoming increasingly 'caught up' in a similar desire for muscularity as contemporary culture puts a price on the 'body beautiful' right through the life cycle) report that they would like to have, on average, 13 kg more muscle mass, and believe that women would prefer them to have 14 kg more muscle mass to be most desirable, though interestingly, when women were asked about this, they were happy with the current mass of their partners, and many were indeed not attracted to heavily-muscled males. Therefore, it appears that social pressure may play a large part in creating an environment where men perceive their bodies in a negative light, and this may in turn lead to the development of a 'full blown' muscle dysmorphia syndrome in some folk.

While social pressure is thought to play a big role in the development of muscle dysmorphia, other factors have also been suggested to play a part. Muscle dysmorphia is suggested to be associated with, or indeed a sub-type of, the more general body dysmorphic disorder (and anorexia nervosa, though of course anorexia nervosa is about weight loss rather than weight gain), in which folk develop a pathological dislike of one or several body parts or components of their appearance, and develop a preoccupation with hiding or attempting to fix their perceived body flaw, often with cosmetic surgery (and this apparently affects up to 3 percent of the population). It has been suggested that both muscle dysmorphia and body dysmorphic disorder may be caused by a problem of 'somatoperception' (knowing one's own body), which may be related to organic lesions or processing issues in the right parietal lobe of the brain, which is suggested to be the important area of the brain for own-body perception and the sense of self. Folk who have lesions of the right parietal cortex may perceive themselves to be 'outside' of their body (autoscopy), or may lack awareness of the existence of parts of the body / perceive that body parts are missing (asomatognosia). Non-organic / psychological factors have also been associated with muscle dysmorphia, apart from media and socio-cultural influences, including being a victim of childhood bullying, being teased about levels of muscularity when young, or being exposed to violence in the family environment. It has also been suggested that it is associated with appearance-based rejection sensitivity, which is defined as anxiety-causing expectations of social rejection based on physical appearance – in other words, for some reason, folk with muscle dysmorphia are anxious that they will be socially rejected due to their perceived lack of muscularity and associated appearance deficits. Whether this rejection sensitivity is due to prior negative social interactions, or to episodes of childhood teasing or body shaming, has not been well elucidated. Interestingly, while studies have reported inconclusive correlations with body mass index, body fat, height, weight, and pubertal development age, there have been strong correlations reported with perfectionism, substance abuse, and eating and exercise-dependence / addiction disorders, as well as with clinical mood, anxiety, and obsessive-compulsive disorders. There does not appear to be a strong relationship to narcissism, which is perhaps surprising. Whether these are co-morbidities or whether they share a common pathophysiology at either a psychological or organic level is yet to be determined. It has been suggested that a combination of cognitive behavioural therapy and selective serotonin reuptake inhibitor prescription (a type of antidepressant) may improve the symptoms of muscle dysmorphia. While these treatment modalities would support a link between muscle dysmorphia and the psychological disorders described above, the efficacy of these treatment choices is still controversial, and there is unfortunately a high relapse rate. It is unfortunately a difficult disorder to 'cure', given that all folk need to eat regularly in order to live, and most folk incorporate exercise into their daily routines, which makes managing 'enough' but not 'excessive' amounts of weightlifting and dietary regulation difficult in folk who have a disordered body image.

Muscle dysmorphia appears therefore to be a growing issue in contemporary society, increasing in tandem with the media-related marketing drive for the male 'body beautiful', which now appears to be operating at a similar level to the 'drive for thinness' media marketing which has blighted the female perception of body image for a long time, and which has potentially led to an increased incidence of body image disorders such as anorexia nervosa and body dysmorphic syndrome. However, none of these disorders are gender specific, and it is not clear how much of a relationship these body image disorders have with either organic brain or clinical psychological disorders, as described above. It appears to be a problem mostly in young folk, with older folk being more accepting of their body abnormalities and imperfections, whether these are perceived or real, though sadly it appears that there is a growing incidence of muscle dysmorphia and other body image disorders in older age, as society's relationship with, and expectations of, 'old age' change. As I see my son become more 'interested' in his own physique and physical development, which must surely have been prompted by discussions with his friends, by what he reads, or by what the 'actors' look like in the computer games which he, like all his friends, so enjoys playing, I hope he (and likewise my daughter) will always enjoy his sport but have a healthy self-image through the testing teenage and early adult period of time. I remember those bodybuilders my colleague Mike and I worked with all those years ago, and how some of them were comfortable with their large physiques, while for some it was clearly an ordeal to take off their shirts in order to be tested in the lab as part of the trials we did back then. The mind is very sensitive to suggestion, and it is fascinating to see that males are now being 'barraged' with advertising suggesting that they are not good enough, and that if they buy a certain product it will make them stronger, fitter, better, and thus more attractive, to perhaps the same degree that females have been subjected to such messaging for a long period of time. The mind is also sensitive to bullying, teasing and body shaming, as well as to a host of other social issues which impinge on it, particularly in its childhood and early adolescent development phases. It's difficult to know where this issue will 'end', and whether governmental organizations will 'crack down' on such marketing and media hype, which surely 'targets' folks' (usually perceived) physical inadequacies or desires, or whether it is too late to do so and such media activity has become part of the intrinsic fabric of our daily life and social experience. Perhaps education programs are the way to go at school level, though these are unfortunately often not successful.

There are so many daily challenges one has to deal with that it may seem almost bizarre that folk can spend time worrying about issues that are not even potentially 'real', but for the folk staring obsessively at themselves in the mirror, or struggling to stop the intrusive thoughts about their perceived physical shortcomings, these challenges are surely very real, and surely all-consuming and often overwhelming. In Greek mythology Adonis was a well-muscled half man, half god, who was considered to be the ultimate in masculine beauty, and according to mythology his masculine beauty was so great that he won the love of Aphrodite, the goddess of love and beauty herself. Sadly for the folk with muscle dysmorphia, while they may be chasing this ideal, they are likely to be too busy working on creating their own perfect physique to have time to 'woo' their own Aphrodite, and indeed, contemporary Aphrodites don't appear to even appreciate the level of muscularity they eventually obtain. The mirror on the wall, as it usually is, is a false siren, beckoning those weak enough to fall into its thrall – no matter how big, never to appear as the biggest or most beautiful of all.


Consistency Of Task Outcome And The Degrees Of Freedom Problem – The Brain Is Potentially Not A Micro-Manager When Providing Solutions To Complex Problems

Part of the reason I enjoy cycling as my chosen sport now I am older is not just because it is beneficial from a health perspective, but because the apparent regularity of the rhythmical circular movement required for pedalling creates a sense of peace in me and paradoxically allows my mind to wander a bit away from its routine and usually work-focussed and life task orientated thoughts. I also enjoy watching competitive darts, marvelling at how often the folk participating in the competitions hit the small area of the board they are aiming for with such precision, despite throwing their darts fairly rapidly when it is their turn to do so. This week an old colleague and friend from University of Cape Town days, Dr Angus Hunter, published some interesting work on how the brain controls muscle activity during different experimental conditions, a field in which he is a world expert, and it was great to read about his new research and innovative ideas as always. Some of the most fun times of my research career were spent in the laboratory working with Angus measuring muscle activity during movement-related tasks, where one of our most challenging issues to deal with was the variability of the signal our testing devices recorded when measuring either the power output from, or electrical activity in, muscle fibres each time they contracted when a trial participant was asked to do the same task. A large part of the issue we had to solve then was whether this was signal 'noise' and an artefact of our testing procedures, or whether it was part of the actual recruitment strategy the brain used to control the power output from the muscles. All of this got me thinking about motor control mechanisms, and how movement and activity are regulated in a way that gets tasks done in a seemingly smooth and co-ordinated manner, often without us having to think about what we are doing, while when one measures individual muscle function it is actually very 'noisy' and variable, even during tasks which are performed with a high degree of accuracy, and how the brain either creates or 'manages' this variability and 'noise' to generate smooth and accurate rhythmical or target-focussed activity, such as that which occurs when cycling or throwing darts respectively.

Some of the most interesting scientific work that I have ever read about was done by Nikolai Bernstein, a Russian neurophysiologist, who when working in the 1920’s at the somewhat euphemistically named Moscow Central Institute of Labour, examined motor control mechanisms during movement. As part of the centrally driven plans of the communist government of the time to improve worker productivity and output, Bernstein did research on manual labour tasks such as hammering and cutting, in order to try and understand how to optimise such work. Using novel ‘cyclogram’ photography techniques, where multiple pictures were taken of a worker using a hammer or chisel to which a light source had been attached, he was able to make the astonishing observation that the worker’s arm movements were not identical each time they hit a nail or cut through metal, and rather that there was a great degree of variability each time the similar action was performed, even though this variability in action usually produced an outcome which had a high degree of accuracy. He realized that each complete movement, such as moving the arm towards the target, is made up of a number of smaller movements of muscles around the shoulder, elbow and wrist joints, which together synergistically create the overall movement. Given how many muscles there are in the arm, working around three joints (and potentially more when one thinks of the finger joints and muscles controlling them), he suggested that there were a very large number of potential combinations of muscle actions and joint positions that could be used for the same required action, and that a different combination of these appeared to be ‘chosen’ by the brain each time it performed a repetitive task. From a motor control perspective, Bernstein deduced that this could potentially cause a problem for the brain, or for whatever decision-making process decided on which movement pattern to use to complete a task, given that it created a requirement for choosing a particular set of muscle synergies from a huge number of different options available, and for rejecting all the other synergistic options, each time the individual was required to perform a single task or continue performing a repetitive task. This would require a great amount of calculation and decision-making capacity on a repetitive basis by the brain / control processes, and he called this the motor redundancy, or degrees of freedom, problem.
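For readers who prefer to see the degrees of freedom problem in numbers rather than in words, the short Python sketch below is a toy illustration only – the segment lengths, target position and tolerance are invented for the example, and it is in no way a reconstruction of Bernstein’s cyclogram work. It randomly samples joint angles for a simple three-joint planar ‘arm’ and counts how many quite different joint configurations all place the ‘fingertip’ on essentially the same target, which is the redundancy of choice that Bernstein identified.

```python
import numpy as np

# Toy planar arm with shoulder, elbow and wrist joints (assumed segment lengths).
# Many different joint-angle combinations place the fingertip on the same target,
# which is the essence of the 'degrees of freedom' (redundancy) problem.
L1, L2, L3 = 0.30, 0.25, 0.10          # assumed segment lengths (metres)
TARGET = np.array([0.40, 0.20])        # arbitrary fingertip target
TOL = 0.01                             # within a centimetre counts as 'hitting' it

def fingertip(q):
    """Forward kinematics: fingertip position for joint angles q = (q1, q2, q3)."""
    a1, a2, a3 = np.cumsum(q)          # absolute orientation of each segment
    x = L1 * np.cos(a1) + L2 * np.cos(a2) + L3 * np.cos(a3)
    y = L1 * np.sin(a1) + L2 * np.sin(a2) + L3 * np.sin(a3)
    return np.array([x, y])

rng = np.random.default_rng(0)
solutions = []
for _ in range(100_000):
    q = rng.uniform(-np.pi, np.pi, size=3)           # a random joint configuration
    if np.linalg.norm(fingertip(q) - TARGET) < TOL:  # does it reach the target?
        solutions.append(q)

print(f"{len(solutions)} quite different joint configurations reach the same target")
```

The point of the sketch is simply that even this crude three-joint model finds many distinct solutions for one target, so a real arm, with its far greater number of muscles and joints, offers the nervous system an astronomically larger set of equivalent options for every repetition of the same task.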

Like a lot of work performed in the Stalin era in Russia, his fascinating work and observations did not become known to Western scientists until the 1960’s, when he published a textbook summarizing his career in science, which was subsequently translated and taken forward by excellent contemporary movement control scientists like Mark Latash of Pennsylvania State University in the USA. Further studies have supported Bernstein’s earlier work, and it is astonishing how much variability there is in each movement trajectory of a complex action that is goal orientated. Mark has suggested that this is not a redundancy problem, but rather one of abundancy, with the multiple choices available being of benefit to the body of any individual performing repetitive tasks, potentially from a fatigue resistance and injury prevention perspective, given the fatigue or injury which may occur if the same muscle fibres in the same muscle are used in the same way in a repetitive manner. Interestingly, when a person suffers a stroke or a traumatic limb injury, the quantity of movement variability paradoxically appears to reduce rather than increase after the stroke or injury, and this reduced variability of motor function is associated with a decrement in task performance accuracy and completion. Therefore, the high variability of movement patterns in healthy folk appears to paradoxically make task performance more accurate and not just more efficient.

How control processes choose a specific ‘pattern’ of muscle activity for a specific task is still not well known. A number of theories have been proposed (as a general rule in science, the more theories there are about something, the greater the likelihood that there is no clarity about it) with some quaint names, such as the equilibrium point hypothesis, which suggests that choice at the motor neuron level is controlled as part of the force-length relationship of the muscle; the uncontrolled manifold hypothesis, which suggests that the central nervous system focuses on the variables needed to control a task and ignores the rest (the uncontrolled manifold being those variables that do not affect task required activity); and the force control hypothesis, which suggests that the central nervous system compares the required movement for the task against internal models, and then uses calculations and feedforward and feedback control mechanisms to direct activity against that set by the internal model; amongst others. All these are interesting and intellectually rigorous theories, but they don’t tell us very much about exactly how the brain chooses a particular group of muscles to perform a task, and then subsequently a different group of muscles, which use a different movement trajectory, to perform the task again when it is repeated. It has been suggested that there are ‘synergistic sets’ of muscles which are chosen in their entirety for a single movement, and that the primitive reflexes or central pattern generators in the spinal cord may be involved. But the bottom line is that we just do not currently know exactly what control mechanism chooses a specific set of muscles to perform one movement of a repetitive task, why different muscles are chosen each time the same task is performed sequentially, or how this variable use of muscles for the same task is managed and controlled.
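As a hedged illustration of the uncontrolled manifold idea only (the two-finger force task and all the numbers below are invented, and real uncontrolled manifold analyses are considerably more sophisticated), the following sketch simulates trials in which two finger forces must add up to a target, and then splits the trial-to-trial variance into the direction that leaves the total unchanged (the ‘uncontrolled manifold’) and the direction that changes it; much greater variance in the former is the signature such analyses treat as evidence of a task-stabilizing synergy.

```python
import numpy as np

# Two fingers press so that their combined force matches a 10 N target.
# Trial-to-trial variance is split into the part that leaves the total unchanged
# (along the 'uncontrolled manifold', where f1 + f2 is constant) and the part
# that changes the total (the task-relevant direction).
rng = np.random.default_rng(1)
n_trials = 1000
target = 10.0

# Assumed generative model: large but compensated variability in how the total
# is shared between the fingers, plus a small error in the total itself.
share = rng.normal(0.5, 0.1, n_trials)            # fraction of the total on finger 1
total = target + rng.normal(0.0, 0.2, n_trials)   # small error in the total force
forces = np.column_stack([share * total, (1.0 - share) * total])
forces = forces - forces.mean(axis=0)             # centre the data across trials

ucm_dir = np.array([1.0, -1.0]) / np.sqrt(2)      # changes f1 and f2 but not their sum
task_dir = np.array([1.0, 1.0]) / np.sqrt(2)      # changes the sum (task-relevant)

v_ucm = np.var(forces @ ucm_dir)
v_task = np.var(forces @ task_dir)
print(f"variance along the UCM: {v_ucm:.3f}, variance affecting the task: {v_task:.3f}")
# A ratio well above 1 is what UCM-style analyses take as evidence of a synergy.
```

In other words, the individual fingers are allowed to be very ‘noisy’, as long as their errors compensate for one another and the task-level variable stays close to its target, which is exactly the pattern Bernstein’s successors report in healthy movement.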

We have previously suggested that a number of other activities in the body beyond that of muscle control have similar redundancy (or abundancy) in how they are regulated, or at least in respect of which mechanisms are used to control them. For example, blood glucose concentrations can be controlled not only by changes in insulin concentrations, but also by that of glucagon, and can also be altered by changes in catecholamine (adrenaline or noradrenaline) or cortisol levels, and indeed by behavioural factors such as resisting the urge to eat. Each time blood glucose concentrations are measured, the concentrations of all these other regulatory hormones and chemicals will be different ratio-wise to each other, yet their particular synergistic levels at any one point in time maintain the level of blood glucose concentrations at homeostatically safe setpoint levels. The blood glucose level is maintained whatever the variability in the regulatory factor concentration ratios, even though this variability in choice of control mechanisms similarly creates a potential for high computational load when managing blood glucose concentrations from a control perspective. Similarly, perceptions of mood state or emotions are thought to have redundancy in the factors which ‘create’ them. For example, we can fairly accurately rate when we feel slightly, moderately or very fatigued, but underpinning the ‘feeling’ of fatigue at the physiological level can be changes in blood glucose, heart rate, ventilation rate, and a host of other metabolites and substrates in the body, each of which can be altered in a variable ratio way to make up the sensation of fatigue we rate as slight, moderate or very high. Furthermore, fatigue is a complex sensation made up of individual sensations such as breathlessness, pounding chest, sweating, pain, and occasionally confusion, dizziness, headache and pins and needles, amongst others, a combination of which can also be differently valenced to provide a similar general fatigue rating by whoever is perceiving the sensation of fatigue. To make it even more complex, the sensation of fatigue is related to inner voices which either rate the sensation of fatigue (the ‘I’ voice) or make a judgement on it related to social circumstances or family and environmental background (the ‘Me’ voice), and it is through the final combination of these that an individual finally rates their level of fatigue, which adds another level of redundancy, or abundancy, to the factors underpinning how the ‘gestalt’ sensation of fatigue is both created and perceived. There are therefore three potential ‘levels’ of redundancy / abundancy in the signals and factors which either individually or collectively make up the ‘gestalt’ sensation of fatigue, and a corresponding increased level of computational requirements potentially associated with its final genesis, and how this perceptual redundancy / abundancy is managed by the control mechanisms which generate them is still not well known.
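To make the blood glucose example concrete in a deliberately simplified way, the sketch below assigns invented linear ‘sensitivities’ to each regulator (they are not physiological values) and then samples random combinations of regulator levels, counting how many quite different combinations produce essentially no net change in glucose – a crude numerical picture of the redundancy / abundancy of control described above.

```python
import numpy as np

# Purely illustrative: invented linear 'sensitivities' describing how strongly each
# regulator nudges blood glucose (negative lowers it, positive raises it).
sens = {"insulin": -1.5, "glucagon": +1.0, "adrenaline": +0.8, "cortisol": +0.4}

def net_glucose_change(levels):
    """Net effect on glucose of a given combination of regulator levels."""
    return sum(sens[hormone] * levels[hormone] for hormone in sens)

rng = np.random.default_rng(2)
equivalent_combinations = 0
for _ in range(50_000):
    levels = {hormone: rng.uniform(0.0, 2.0) for hormone in sens}   # arbitrary levels
    if abs(net_glucose_change(levels)) < 0.01:   # net effect ~ zero: glucose held steady
        equivalent_combinations += 1

print(f"{equivalent_combinations} sampled hormone combinations hold glucose steady")
```

Even in this toy linear model there are many different regulator ‘mixes’ that leave glucose where it is, which mirrors the observation that the measured hormone ratios differ every time, while the defended blood glucose concentration does not.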

In summary, therefore, the presence of variability during activities of daily living across a number of different body systems is not only ‘noise’ / an artefact of testing conditions which we researchers have to deal with, it also appears to be part of some very complex control mechanisms which must have some teleological benefit both for optimizing movement and activity, and for ensuring the capacity to sustain it without fatigue or injury to the components of the mechanism which produces it. Each time I cycle on my bike and my legs move up and down to push the wheels forward, different muscles are being used in a different way during each rotation of the wheel. Each time a darts player throws a dart, different muscle synergies are used to paradoxically create the accuracy of their throw. There is real ‘noise’ that a researcher has to remove from their recorded traces after a testing session in a laboratory, such as that caused by the study participant sweating during the trial, which can affect electrophysiological signals, and there is always a degree of measurement error, and therefore some degree of ‘noise’ is present in the variability of the recorded output for any laboratory technique that measures human function. But, equally, Bernstein’s brilliant work and observations all those years ago helped us understand that variability is inherent in living systems, and after understanding this, each time I observe data, particularly that generated during electrophysiological work such as I have used for a number of experiments in my own research career, including electromyography (EMG), electroencephalography (EEG) or transcranial magnetic stimulation (TMS), which has low standard deviations in the results sections of published research articles, I do wonder at the validity of the data and whether it has been ‘paintbrushed’ by the researchers who describe it, as my old Russian neurophysiology research colleague Mikhail Lomarev used to describe it, when he or we thought data was ‘suspect’. The inherent variability in brain and motor control systems makes finding statistical significance in results generated using routine neurophysiological techniques more difficult. It also seems to create a huge increase in the requisite control-related calculations and planning for even a simple movement, though as Mark Latash suggested, the brain is likely not a micro-manager, but rather uses some effective parsing mechanism which can both generate and utilize a large number of synergistic movement patterns in a variable manner for any task, while expending little decision-making power, perhaps via some sort of heuristic-based decision-making mechanism. Most importantly though, it fills one with a sense of awe at the ‘magic’ of our own body, and at the level of complexity involved in both its creation and operative management, when even a simple movement like striking an object with a hammer, or cutting a piece of metal, can be underpinned by such complex control mechanisms that our brains cannot currently comprehend or make sense of.

In a laboratory in the middle of Russia nearly a century ago, Nikolai Bernstein made some astonishing observations by doing exceptional research on basic motor control, while trying to increase the productivity of Soviet-era industrial work. A century later we are still scratching our heads trying to understand what his findings mean from a motor control perspective. As I type these final sentences, I reflect on this, and wonder which synergistic composition of muscle activity in my fingers is responsible for creating the actions which lead to these words being generated, and realize that each time I do so, because of the concepts of variability, redundancy and abundancy, I will probably never use an identical muscle sequence when typing other ideas into words at another future point in time. But then again, I guess the words I will be writing in the future will also be different, and daily life, like motor control programs, will always vary, always change, even though the nail on the wall on which the picture hangs becomes a permanent ‘item’, as will this article become permanent when I hit the ‘send’ button to publish it. What is never to be seen again though are the traces in the ‘ether’ of the hammer blow which embedded the nail in the wall, and the exact movement of the individual muscles in the labourer’s arms and hands, and in my fingers as I typed, which created these words. Like magic their variability was created, and like magic their pattern has dispersed, never to recur again in the same way or place, unless some brilliant modern day Bernstein can solve their magic and mystery, reproduce them in their original form using some as yet to be invented laboratory device, and publish them in a monograph. Let’s hope that if they do so, their great work does not languish unseen for forty years before being discovered by the rest of the world’s scientists, as were Bernstein’s wonderful observations of all those years ago!


The Core Requirement And Skill Of Decision-Making In Life – Removal Of Uncertainty Is Usually Positive And Cathartic But Is Also An Ephemeral Thing

This week, for the first time since moving to New Zealand and starting a new job here, I cycled in to work, and in the early afternoon faced a tough decision regarding whether I had the level of fitness capacity to cycle back home at the end of the day. Three-quarters of the way through the ride home, I felt very tired and stopped by the side of the road, and considered phoning home and asking them to pick me up. This morning I opened the fridge and had to decide whether to have the routine fruit and yogurt breakfast or the leftover piece of sausage roll. We are now six months into our new life and job here, and we have come to that period of time of deciding whether we have made a good decision and should continue, or whether we have made a disastrous error and need to make a rapid change. As I was writing this my wife asked me if I planned to go to the shop later, and if so whether I could get some milk for the family, and I had to stop writing and decide whether I was indeed going to do so as part of the weekend post-writing chores, or not. All of these activities and issues required me to make decisions, and while some of them appeared to be of little consequence, some of them were potentially life and career changing, and, even if it seems a bit dramatic, potentially life-ending (whether to continue cycling when exhausted as a fifty-something). Decisions like these have to be made by everyone on a minute by minute basis as part of their routine daily life. The importance of decision-making in our daily lives, and how we make decisions, is still controversial and not well understood, which is surprising, given how much our optimal living condition and indeed survival depends on making correct decisions, and how often we have to make decisions, some of which are simple, some of which appear simple but are complex, and some of which are overtly complex.

Decision-making is defined as the cognitive process (which is the act or process of knowing or perceiving) resulting in the selection of a particular belief or course of action from several alternative possibilities, or as a problem-solving activity terminated by the genesis or arrival of a solution deemed to be satisfactory. At the heart of any decision-making is the requirement to choose between an array of different options, all of which usually have both positive and negative potential attributes and consequences, where one uses prior experience or a system of logical ‘steps’ to make the decision based on forecasting and scenario-setting for each possible alternative choice and the consequences of choosing it. One of the best theoretical research articles on decision-making I have read / been involved with is one written by Dr Andy Renfree, an old colleague from the University of Worcester, and one of the Sport Science academic world’s most creative thinkers. At a systems level, he suggested that decisions are made based on either rational or heuristic principles, the former working best in ‘small world’ environments (in which the individual making the decision has absolute knowledge of all decision-related alternatives, consequences and probabilities), and the latter best in ‘large world’ environments (in which some relevant information is unknown or estimated). As described by Andy, rational decision-making is based on the principle that decisions can only be made if certain criteria are met, namely that the individuals making the decision must be faced with a set of behavioural alternatives and, importantly, information must be available for all possible alternatives of decisions that can be made, as well as for the statistical probability of all of the outcomes of the choices that can be made. This is obviously a large amount of requisite information, and a substantial period of time would be required to make a decision based on such ‘rational’ requirements. While using this method would likely be the most beneficial from a correct outcome perspective, it would also potentially place a high demand on the cognitive processes of the individual making the decision. Bayesian decision-making is a branch of rational decision-making theory, and suggests that decision-making is the result of unconscious probabilistic inferences. In Bayesian theory, a statistical approach to decision-making is taken based on prior experience, with decision-making valenced (and therefore speeded up) by applying a ‘bias’ towards information used to make the decision which is believed to be more ‘reliable’ than other information, and towards the ‘probability’ of outcomes being better or worse based on prior experience. Therefore, in the Bayesian model, prior experience ‘speeds up’ decision-making, though all information is still processed in this model.
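As a minimal sketch of the Bayesian flavour of rational decision-making, using my cycling dilemma from earlier as the toy example (all of the priors, likelihoods and utilities below are invented for illustration, and are not taken from Andy’s article), a prior belief built from past rides is combined with an imperfect ‘heavy legs’ cue to give a posterior, and the action with the higher expected utility is then chosen – which is how prior experience both speeds up and biases the final decision.

```python
# Toy Bayesian decision: combine a prior from past experience with a noisy cue,
# then choose the action with the higher expected utility. All numbers invented.
prior = {"can_finish_ride": 0.8, "cannot_finish": 0.2}        # prior from past rides

# Assumed likelihoods: how probable a 'legs feel heavy' cue is under each state.
likelihood_heavy_legs = {"can_finish_ride": 0.3, "cannot_finish": 0.9}

unnormalised = {state: prior[state] * likelihood_heavy_legs[state] for state in prior}
evidence = sum(unnormalised.values())
posterior = {state: p / evidence for state, p in unnormalised.items()}

# Simple expected-utility comparison between pressing on and phoning home.
utility = {("continue", "can_finish_ride"): 10, ("continue", "cannot_finish"): -20,
           ("phone_home", "can_finish_ride"): 2, ("phone_home", "cannot_finish"): 2}

for action in ("continue", "phone_home"):
    expected = sum(posterior[state] * utility[(action, state)] for state in posterior)
    print(f"{action}: posterior-weighted expected utility = {expected:.2f}")
```

With these made-up numbers the ‘heavy legs’ cue drags the posterior far enough away from the optimistic prior that phoning home wins; with a stronger prior built from many successful past rides, pressing on would win instead, which is the sense in which prior experience valences the decision.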

In contrast, heuristic decision-making is a strategic method of making decisions which ignores information that is available but is perceived to be less relevant to the specific decision being made, and which suggests that decisions are made based on key information and variables that are assessed and acted upon rapidly, in a manner that, as Andy suggests, incorporates ‘rule of thumb’ or ‘gut feel’ thinking, which places fewer demands on the cognitive thinking processes of the individual. As described above, rational decision-making may be more relevant in ‘small world’ environments, in which there are usually not a lot of variables or complexity which are required to be assessed prior to making a decision, and heuristic thinking in ‘large world’ environments, which are complex environments where not all information, whether relevant or not, can be known, due to the presence not only of ‘known unknowns’ but also ‘unknown unknowns’, and where an individual would potentially be immobilized into a state of ‘cognitive paralysis’ if attempting to assess every option available. The problem of course is that even decisions that appear simple often have multiple layers of complexity that are not overt and of which the individual thinking about them is not aware, and it can be suggested that the concepts of both rational decision-making and ‘small world’ environments are potentially abstract principles rather than reality, that all life occurs as part of ‘large world’ environments, and that heuristic processes are what are used by individuals as the main decision-making principles during all activities of daily living.
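To show in a hedged way how a ‘rule of thumb’ differs from a fully ‘rational’ weighting of all available information (the cue names, their ordering by assumed validity, the weights and the scores below are all invented for illustration), the sketch compares a ‘take-the-best’-style heuristic, which inspects cues in order of validity and stops at the first one that discriminates between the options, with a weighted sum over every cue; with these particular numbers the two approaches reach different conclusions, which is exactly the sort of trade-off between speed, cognitive load and thoroughness described above.

```python
# Toy comparison of a 'take-the-best' style heuristic with a full weighted sum.
# Cues are listed in order of assumed validity; each option scores +1, 0 or -1 on each.
cues_in_validity_order = ["past_success", "expert_advice", "gut_feel", "popularity"]
weights = {"past_success": 0.9, "expert_advice": 0.7, "gut_feel": 0.5, "popularity": 0.2}

option_a = {"past_success": 1, "expert_advice": -1, "gut_feel": -1, "popularity": -1}
option_b = {"past_success": 0, "expert_advice": 1, "gut_feel": 1, "popularity": 1}

def take_the_best(a, b):
    """Heuristic: stop at the first cue that discriminates, ignore everything else."""
    for cue in cues_in_validity_order:
        if a[cue] != b[cue]:
            return "A" if a[cue] > b[cue] else "B"
    return "either"

def weighted_sum(a, b):
    """'Rational' choice: weigh every cue before deciding."""
    score_a = sum(weights[cue] * a[cue] for cue in weights)
    score_b = sum(weights[cue] * b[cue] for cue in weights)
    return "A" if score_a > score_b else "B"

print("heuristic ('take-the-best') choice:", take_the_best(option_a, option_b))
print("weighted-sum choice:", weighted_sum(option_a, option_b))
```

The heuristic settles the matter on the single most valid cue and chooses option A, while the exhaustive weighting of every cue chooses option B – the heuristic is far cheaper cognitively, and in genuinely ‘large world’ settings, where many of the weights could never be known anyway, it is often the only workable approach.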

Of course, most folk would perceive that these rational and heuristic models are very computationally and mathematically based, and that perhaps ‘feelings’ and ‘desires’ are also a component of decision-making, or at least that this is how decision-making is perceived to ‘feel’ to them. As part of the Somatic Marker hypothesis, Antonio Damasio suggested that ‘body-loop’ associated emotional processes ‘guide’ (and have the potential to bias) decision-making behaviour. In his theory, somatic markers are a specific ‘group of feelings’ in the body, associated with the specific emotions one perceives when confronted with the facts or choices one is faced with and needs to make a decision about. There is suggested to be a different somatic marker for anxiety, enjoyment, or disgust, among other emotions, based on an aggregation of body-related symptoms for each, such as heart rate changes and the associated feeling of a pounding chest, the sensation of breathing changes, changes in body temperature, increased sweat rate, or the symptom of nausea, some or all of which together are part of a certain somatic marker group which creates the ‘feeling’ of a particular emotion. Each of these physiologically based body-loop ‘states’ is capable of being a component of different somatic marker ‘groups’, which create the distinct ‘feelings’ which are associated with different emotions, and which would valence decisions differently depending on which somatic marker state / emotion is created by thinking of a specific option or choice. This hypothesis is based on earlier work by William James and colleagues more than a hundred years ago, which became the James-Lange theory of emotion, which suggests there is a ‘body-loop’ required for the ‘feeling’ of emotions in response to some external challenge, which is in turn required for decision-making processes related to the external challenge. The example used to explain this theory was that when one sees a snake, it creates a ‘body loop’ of raised heart rate, increased sweating, increased breath rate and the symptom of nausea, all of which in turn create the ‘feeling’ of fear once these ‘body-loop’ symptoms are perceived by the brain, and it was hypothesized that it is these body-generated feelings, rather than the sight of the snake itself, which induce both the feeling of fear and the decision to either rapidly run away or freeze and hope the snake moves away. While this model is contentious, as it would make reactions occur more slowly than if a direct cognitive decision-making loop occurred, it does explain the concept of a ‘gut feel’ when decision-making. Related to this ‘body-loop’ theory are other behavioural theories about decision-making, and it has been suggested that decisions are based on what the needs, preferences and values of an individual are, such as hunger, lust, thirst, fear, or moral viewpoint, but of course all of these could equally be described as components of either a rational or heuristic model, and psychological / emotional and cognitive / mathematical models of decision-making are surely not mutually exclusive conditions or theories.

These theories described above attempt to explain how and why we make decisions, but not what causes decisions to be right or wrong. Indeed, perhaps the most relevant issue to most folk is why they so often get decisions wrong. A simple reason may be that of ‘decision fatigue’, whereby the quality of decision-making deteriorates after a prolonged period of decision-making. In other words, one may simply ‘run out’ of the mental energy which is required to make sound decisions, perhaps due to ongoing changes in ‘somatic markers’ / body symptoms each time a decision is required to be made, which creates an energy cost that eventually ‘uses up’ mental energy (whatever mental energy is) over the period of time sequential decisions are required to be made. Astonishingly, judges working in court have been shown to make less favourable decisions as a court session progresses, and the number of favourable decisions improves after the judges have had a break. Apart from these data suggesting that one should ask for a court appearance early in the morning or after a break, it also suggests that either physical or mental energy in these judges is finite, and ‘runs out’ with prolonged effort and the use of energy focusing on decision-making related to each case over the time period of a court session. There are other more subtle potential causes of poor decision-making. For example, confirmation bias occurs when folk selectively search for evidence that supports a certain decision that they ‘want’ to make, based on an inherent cognitive bias set in their mind by past events or upbringing, even if their ‘gut’ is telling them that it is the wrong decision. Cognitive inertia occurs when folk are unwilling to change their existing environment or thought patterns even when new evidence or circumstances suggest they should. People tend to remember more recent information and use it preferentially, or forget older information, even if the older information is potentially more valid. Repetition bias is caused by folk making decisions based on what they have been told, if it has been told to them by the greatest number of different people, and ‘groupthink’ is when peer pressure to conform to an opinion or group action causes the individual to make decisions they would not make if they were alone and not in the group. An ‘illusion of control’ in decision-making occurs where people have a tendency to under-estimate uncertainty because of a belief that they have more control over events than they actually have. While folk with anxiety tend to make either very conservative or paradoxically very rash decisions, sociopaths, who are thought to have little or no emotional ‘body-loop’, are very poor at making moral-based decisions or judgments. Therefore, there are a whole lot of different factors which can impact negatively on decision-making, either due to one’s upbringing or prior history impacting on the historical memory which is used to valence decisions, or due to one’s current emotional or psychological state having a negative impact on decision-making capacity, and even simple fatigue can be the root cause of poor decision-making.

At the heart of decision-making (excusing the pun, from the perspective of the somatic marker hypothesis) is a desire of most folk to remove uncertainty from their lives, or to change their life or situation to a better state or place as a result of their decision, or to remove a stressor from their life that will continue unless they make a decision on how to resolve it, remove it, or remove themselves from whatever causes the stressor. However, during my days as a researcher at the University of Cape Town, we suggested that conditions of uncertainty and certainty associated with information processing and decision-making are cyclical (we called it the ‘quantal packet’ information processing theory, for those interested). A chosen decision will change a position or state of uncertainty to one of certainty as one enacts changes based on the decision (or if one chooses to ‘wait and see’ and not alter anything), from the context that one is certain a change will occur based on what one has decided to do, even if one cannot be sure whether this difference will be positive or negative while the changes are being enacted. However, with the passing of time, the effects of the decision made will attenuate, and uncertainty will eventually recur, requiring a further decision to be made, often with similar choices to those which occurred when the initial decision was made. Underpinning this attenuation of the period of ‘certainty’ is the concept that although one will have factored ‘known unknowns’ into any decision one makes using either rational or heuristic principles, ‘unknown unknowns’ will surely always occur that will cause even the best strategic decisions to require tactical adjustments, and those that are proven to be an error will need to be reviewed and changed. One can also ‘over-think’ decision-making as much as one can ‘under-think’ it, as well as being kept ‘hostage’ to cognitive biases from one’s past which continuously ‘trip one up’ when making decisions, despite one’s best intentions. Having said all of this, it often astonishes me not that folk get decisions wrong, but rather that they get so many decisions right. For example, when driving along a highway, one is reliant for one’s survival on the correct decisions of every driver that passes, from how much they choose to turn their steering wheel, to how much they use their brake for a corner, to an awareness in each of them that they are not too tired to be driving in the first place. It’s amazing when one thinks of how many decisions we make, either consciously or unconsciously, which so often turn out right, but equally it is the responsibility of each of us to work on the errors created by our past, or by our emotional state, or by ‘groupthink’, which we need to be vigilant about and remove as best possible from the psyche.

Making a decision is usually cathartic due to the removal of uncertainty and the associated anxiety which uncertainty often causes, even if the certainty and feeling of goodwill generated by making a decision is usually ephemeral and lasts only for a short period of time before other matters which require further decision-making occupy one’s attention. Pondering retrospectively on my decision-making of the last week, I think I made the right decision when choosing to cycle home after work, and to do so all the way home, even if I was exhausted when I got there, given that I did not collapse or have a heart attack when doing so, and there will surely be long term health benefits from two long cycles (though of course long is relative at my age!) in one day. I did choose the healthy food alternative for breakfast this morning, even though often I don’t, particularly during meals when I am tired after a long day’s work. I will get the milk my wife asked me to get this afternoon, in order both to get some fresh air after a creative morning of thinking and writing, and to maintain the harmony in our house and life, even though it is raining hard and I would prefer to be writing more or reading a good book this afternoon. The ‘jury is still out’ about whether this move to New Zealand and a new work role has been a good career and country move, and my current decision on this is to let more time pass before making an action-generating reasoned decision on it, though of course we have already moved several times to new places round the world in the last two decades, and the family is looking forward to some lifestyle stability in the next few years, and these factors need to be part of any reflection on a current-environment rating decision. Each of these decisions seemed ostensibly relatively simple to make when I made them, yet each surely had an associated host of different reasons, experiences, memories and requirements which were worked through in and by my mind before making them, as will be so for all folk making decisions on all aspects of their life during a routine day. What will I have for lunch now that I am finished writing this and am tired and in need of a break and sustenance? Perhaps I will leave off that decision and relax for a period of time before making lunch-related choices, so as not to make a fatigue-induced bad decision and reach for that sausage roll, which is still in the fridge. And I need to get going and enact that decision I made to get the milk, and head off to the shops in order to do so as soon as possible, before lethargy sets in and I change my mind, otherwise I will surely be in the ‘dog box’ at home later this afternoon, and my sense of cathartic peace resulting from having made these decisions will be even more ephemeral than usual!


Strategy, Tactics And Objectives – In The Words Of The Generals, You Can’t Bake A Cake Without Breaking A Few Eggs

I have always enjoyed reading history, and particularly military history, both as a hobby and as a way of learning from the past in order to better understand the currents and tides of political and social life that ‘batter one’ during one’s three score and ten years on earth, no matter how much one tries to avoid them. Compared to folk who lived in the first half of the twentieth century, I perceive that we have lived our contemporary lives in an environment that is relatively peaceful, from the context that there has been no world war or major conflict for the last 70 or 80 years, though the world-wide political fluxes recently, particularly in the USA and Europe / UK, are worrying, as is the rising nationalism, divisive ‘single choice’ politics, intolerance of minorities, and increasing number of refugees searching for better lives, all eerily reminiscent of what occurred in the decade before the American Civil War and both World Wars. I recently read (or actually re-read – a particularly odd trait of mine is that I often read books a dozen or more times if I find something in them important or compelling from a learning perspective) a book on the Western Allies’ European military strategy in the Second World War, and of the disagreements that occurred between the United States General (and later President) Dwight Eisenhower and British General Bernard Montgomery over strategy and tactics used during the campaign, and how this conflict damaged relations between the military leaders of the two countries almost irreparably. I also re-read two autobiographies of soldiers involved in the war, the first by Major Dick Winters, who was in charge of a Company (Easy Company) of soldiers in the 506th Parachute Infantry Regiment of the 101st US Airborne Division, and the second an (apparently) autobiographical book written by Guy Sajer (if that was indeed his name), a soldier in the German Wehrmacht, about his personal experiences first as a lorry driver, then as a soldier on the Eastern front in the Grossdeutschland Division, and was struck by how different the two books were in content compared to the one on higher European military strategy, and also by how different the experiences were of Generals and foot soldiers, even though they were all involved in the same conflict. All this got me thinking of objectives, strategy and tactics, and how they are set, and how they impact on the folk that have to carry them out.

Both strategy and tactics are developed in order to achieve a particular objective (also known as a goal). An objective is defined as a desired result that a person or system envisions, plans, and commits to achieve. The leaders of most organizations, whether they are military, political, academic or social, set out a number of objectives they would like to achieve for the greater good of the organization they lead (though it is never acknowledged, of course, that they – the leaders – will get credit or glory for achieving the objective, and that this is often an ‘underlying’ objective in itself). In order to achieve an objective, a leader, or group of leaders, sets a particular strategy for doing so. There are a number of different definitions of strategy, including it being a ‘high level’ plan to achieve an objective under conditions of uncertainty, or making decisions about how best to use the resources available in the presence of often conflicting options, requirements and challenges in order to achieve a particular objective. The concept underpinning strategic planning is to set a plan / course of action that is believed to be best suited to achieving the objective, and to stick to that plan until the objective is achieved. If conditions change in a way that makes sticking to the strategy difficult, then tactics are used to compensate and adjust to the conditions while ‘maintaining’ the overall strategic plan. Tactics as a concept are often confused with strategy – but are in effect the means and methods of how a strategy is implemented, adhered to, and maintained, and can be altered in order to maintain the chosen strategy.

What is strategy and what are tactics becomes challenging to define when there are different ‘levels’ of command in an organization, with lower levels having more specific objectives which are individually required in order to achieve the over-arching objective, but which require the creation of specific ‘lower-level’ strategy in order to reach the specific objective being set, even if that objective is a component of a higher level strategic plan. From the viewpoint of the planners who create the high-level / general objective strategy, the lower level plans / specific objectives would be tactics. From the viewpoint of the planners who set the lower-level strategy needed to complete a specific component of the general strategy, their ‘lower level’ plans would be (to them) strategy rather than tactics, with tactics being set at even lower levels in their specific area of command / management, which in turn could set up a further ‘debate’ about what is strategy and what is tactics at these even ‘lower’ levels of command. Even the poor foot soldier, who is a ‘doer’ rather than a ‘planner’ of any strategic plan or tactical action enacted as part of any higher level of command, would have their own objectives beyond those of the ‘greater plan’, most likely that of staying alive, and would have his or her own strategic plan to both fulfil the orders given to them and stay alive, and tactics of how to do so. So in any organization, there are multiple levels of planning and objective setting, and what is strategy and what is tactics often becomes confused (and often commanders at lower levels of command find orders given to them inexplicable, as they don’t have awareness of how their particular orders fit into the ‘greater strategic plan’), and this requires constant management by those at each level of command.

It is perhaps a lack of clarity about the specific objectives behind the creation of a particular strategy which causes most command conflict, and this is what happened in the later stages of the Second World War, where it was one of the main causes of the deterioration of the relationship between Dwight Eisenhower and Bernard Montgomery. The objective of the Allies in Western Europe was relatively simple – enter Europe and defeat Germany (though of course the war was mostly won and lost on the Eastern front due to Russian sacrifice and German strategic confusion) – but it was the strategy of how this was to happen which led to the inter-ally conflict, of which so much has been written. Eisenhower was the supreme Allied Commander, responsible for all the Allied troops in Western Europe and for setting the highest level of strategic planning. He decided on a ‘broad front’ strategy, where different Army Groups advanced eastwards across Europe after the breakout from Normandy, in a line from the northern coast of Europe to the southern coastline of Mediterranean Europe. Montgomery was originally the commander of all Allied ground troops in Europe, then after the Normandy breakout became commander of the 21st Army Group, which was predominantly made up of British and Commonwealth troops (but also contained a large contingent of American troops), and he favoured a single, ‘sharp’ method of attacking one specific region of the front (of course choosing an area for attack in his own region of command). Montgomery’s doctrine was that which most strategic manuals would favour, and Eisenhower was sharply criticized by military leaders both during and after the war for going against the accepted strategic ‘thinking’ of that time. But Eisenhower of course had not just military objectives to think about, but political requirements too, and had to maintain harmony between not just American and British troops and nations, but also a number of Commonwealth countries’ troops and national requirements. If he had chosen one specific ‘single thrust’ strategy, as Montgomery demanded, he would have had to choose either a British dominated or American dominated attack, led by either a specific British or American commander, and neither country would have ‘tolerated’ such ‘favouritism’ on his part, and this issue was surely a large factor when he decided on a ‘broad front’ strategy. There was clearly military strategic thinking on his part too – ‘single thrust’ strategies can be rapidly ‘beaten back’ / ‘pinched off’ if performed against a still-strong military opposition, as was the case when Montgomery chose to attack on a very narrow line to Arnhem, and this proved more than a ‘bridge too far’ – the German troops simply shut off the ‘corridor’ of advance behind the lead troops and the Allies were forced to withdraw in what was a tactical defeat for them. Montgomery criticized Eisenhower’s ‘broad front’ as leading to, or allowing, the ‘Battle of the Bulge’ to occur, when the German armies in late 1944 counter-attacked through the Belgian Ardennes region towards Antwerp, and caused a ‘reverse bulge’ in the Allied ‘broad front’ line, but in effect the rapidity with which the Allies closed down and defeated this last German ‘counter-thrust’ paradoxically provided evidence against the benefits of Montgomery’s ‘single thrust’ strategy, even though he used the German Ardennes offensive to condemn Eisenhower’s ‘broad front’ strategy.
Perhaps Eisenhower should have been more clear about the political nature of his objectives and the political requirements of his planning, but then he would have been criticized for allowing political factors to ‘cloud’ what should have been purely military decisions (at least by his critics), so like many leaders setting ‘high level’ strategy, he was ‘doomed’ to be criticized whatever his strategic planning was, even if the ‘proof was in the pudding’ – his chosen strategy did win the war, and did so in less than a year after it was initiated, after the Allies had been at war for more than five years before the invasion of Western Europe was planned and initiated.

Whatever the ‘high level’ strategic decisions made by the Generals, the situation ‘on the ground’ for Company leaders and foot soldiers who had to enact these strategies was very different, as was well described in the books by Dick Winters (the book became a highly praised TV series – Band of Brothers) and Guy Sajer. Most of the individual company-level actions in which Easy Company participated bordered on the shambolic – from the first parachute drop into enemy-held France, where most of the troops were scattered so widely that they fought mainly skirmishes in small units, to operations supporting Montgomery’s ‘thrust’ to Arnhem, which were a tactical failure and resulted in them withdrawing in defeat, to the battle of Bastogne, a key component of the battle of the ‘Bulge’, where they just avoided defeat and sustained heavy casualties, and only just managed to ‘hold on’ until reinforcements arrived. A large number of the operations described were therefore not tactically successful, yet played their part in a grand strategy which led to ultimate success. The impact of the ‘grand strategy’ on individual soldiers was horrifyingly (but beautifully from a writing perspective) described in Guy Sajer’s autobiography, which is a must-read for any prospective military history ‘buff’ – most of his time was spent marching in bitter cold or thick mud from one area of the Eastern front to another as his Division was required to stem yet another Russian breakthrough, or trying to find food with no formal rations being brought up to them as the Wehrmacht operational management collapsed in the last phases of the war, or watching his friends being killed one by one in horrific ways as the Russian army grew more successful and more aggressive in their desire for both revenge and military success. There was no obvious pattern or strategy to what they were doing at the foot soldier level, there were no military objectives that could be made sense of at the individual level he described, rather there was only the ‘brute will to survive’, and to kill or be killed, and only near the end did he (and his company level leaders) realize that they were actually losing the war, and that their defeat would mean the annihilation of Germany and everything they were fighting for ‘back home’. Yet it was surely the individual actions of soldiers in their thousands and millions, who endured and died for either side, that in a gestalt way led to the strategic success (or failure) planned for by their leaders and generals, even if at their individual level they could make little sense of the benefit of their sacrifice in the context of the broader tactical and strategic requirements, in the times when they could reflect on this, though surely most of their own thoughts were on surviving another terrible day, or another terrible battle, rather than on its ‘meaning’ or relevance.

One of the quotes that I have read in military history texts that has caused me to reflect most about war and strategy as an ‘amateur’ military history enthusiast is attributed to British World War Two Air Marshal Peter Portal, who, when discussing what he believed to be defective strategic planning with his colleague and army equal Field Marshal Alan Brooke, apparently suggested that ‘one cannot make a cake without breaking some eggs’. What he was saying, if I understood it correctly, and if the comment can indeed be attributed to him, was that in order for a military strategy to be successful, some (actually, most of the time, probably many) individual soldiers have to be sacrificed and die for the ‘greater good’ which would be a successfully achieved objective. From a strategic point of view he was surely correct, and Generals who don’t take risks and worry too much about their soldiers’ safety can often paradoxically cause more harm than good by developing an overly cautious strategy which has an increased risk of failure and therefore an increased risk of more soldiers dying. But from a human point of view the comment is surely chilling, as each soldier’s individual death, often in brutal conditions, is horrific both to those that it happens to and to those relatives, friends and colleagues that survive them. Often, or perhaps most of the time, individual soldiers die without any real understanding of the strategic purpose behind their death, and with a wish just to be with their loved ones again, and to be far from the environment and actions which cause their death. The folk at senior leadership levels setting grand strategy require a high degree of moral courage to ‘see it through’ to the end, knowing that their strategy will surely lead to a number of individual deaths. The folk who enact the grand strategy ‘in the trenches’ need a high degree of physical courage to perform the required actions in conditions of grave danger, actions that as a small part of the ‘big picture’ may help lead to strategic success and attainment of the set objectives, usually winning in a war sense. But every side has its winners and its losers, and there is usually little difference between these for the foot soldier or Company leader, who dies in either a winning or losing cause, with little knowledge of how their death has contributed in any way to either winning or losing a battle, or campaign, or war.

Without objectives, strategy and tactics, there would never be any successful outcome to any war, and a lot of soldiers would die. With objectives, strategy and tactics, there is a greater chance of a successful outcome to any war, but a lot of soldiers will still surely die. The victory cake always tastes wonderful, but always, sadly, to make such a ‘winners’ cake, many eggs do indeed need to be broken. It will long be controversial which is more important in the creation of the cake, the recipe or the eggs that make it up. Similarly, it will long be controversial whether a ‘broad front’ or a ‘single thrust’ strategy was the correct strategic or tactical approach to winning the war in Western Europe. But the foot soldier would surely not care whether his or her death was in the cause of tactical or strategic requirements, or happened during a ‘broad front’ or ‘single thrust’ strategy, when he or she is long dead and long forgotten, and historians are debating which General deserves credit for planning the strategy, or lack of it, that caused their death. That’s something I will ponder on as I reach for the next of the books on war strategy that fill the bookshelf next to my writing desk, and hope that my children will never be in the position of having to be either the creators, or enactors, of military strategy, tactics and objectives.


The Collective Unconscious And Synchronicity – Are We All Created, Held Together And United As One By Mystic Bonds Emanating From The Psyche

Earlier this week I thought of an old friend and work colleague I had not been in contact with for many years, Professor Patrick Neary, who works and lives in Canada, and a few hours later an email arrived from him with all his news and recent life history detailed in it, and in which he said he had thought of me this week and wondered what I was up to. Yesterday, in preparation for writing this article, I was reading up on and battling to understand the concept of the psychological ‘Shadow’, one of Carl Jung’s fascinating theories, and noticed a few hours later that Angie Vorster, a brilliant Psychologist we recently employed as a staff member in our Medical School to assist struggling students, had posted an article on the ‘Shadow’ on her Facebook support page for Medical Students. Occasionally when I am standing in a room filled with folk, I feel ‘energy’ from someone I can’t see, and turn around to find a person staring at me. Watching a video last night, I saw, in a scene about religious fervour, all the folk in a church raising their hands in the air to celebrate their Lord. Earlier that afternoon I couldn’t help noticing that a whole stadium of people watching a rugby game raised their hands in the air, in the same way as those in the church did, to celebrate when their team scored the winning try. Sadly, perhaps because I read too much existentialism related text when I was young, I don’t have any capacity to believe in a God or a religion, but on a windy day, when I am near a river or the ocean, I can’t help raising my hands to the sky and looking upwards, acknowledging almost unconsciously some deity or creative force that perhaps created the magical world we inhabit for three score years and ten. All of these got me thinking of Carl Jung, perhaps one of my favourite academic Psychologists and historical scientific figures, and his fascinating theories of the collective unconscious and synchronicity, which were his attempts to explain his belief that we all have similar psychological building blocks that are inter-connected and possibly a united ‘one’ at some deep or currently not understood level of life.

Carl Jung lived and produced his major creative work in the first few decades of the 20th century, in what some folk call the golden era of Psychology, when he and colleagues Sigmund Freud, Alfred Adler, Stanley Hall, Sandor Ferenczi and many others changed both our understanding of how the mind works and our understanding of the world itself. He was influenced by Sigmund Freud, and was for a period his protégé, until they fell out when Jung began distancing himself from Freud’s tunnel vision view that the entire unconscious and all psychological pathology had an underlying sexual focus and origin. He acknowledged Freud’s contribution of describing and delineating the unconscious as an entity, but thought that the unconscious was a ‘process’ where a number of lusts, instincts, desires and future wishes ‘battled’ with rational understanding and logical ‘thoughts’, all of which occurred at a ‘level’ beyond that perceived by our conscious mind. He went further though, and after a number of travels to India, Africa and other continents and countries, where he did field studies of (so-called) ‘primitive’ tribes, he postulated that all folk had what he called a collective unconscious, which contained a person’s primordial beliefs, thought structures, and perceptual boundary creating ‘archetypes’, which were all universal and inherent (as they occurred in tribes and peoples which had not interacted together for thousands of years due to geographical constraints), and responsible for creating and maintaining both one’s world view and one’s personality.

To understand Jung’s theory of the collective unconscious and its underpinning archetypes, one has to understand a debate that has not been successfully ‘settled’ since the time of Aristotle and Plato. Aristotle (and other folk who became known later as the empiricists) believed that all that can be known or occur is a product of experience and life lived. In this world view, the idea of the ‘Tabula rasa’ (blank slate) predominates, which suggests that all individuals are born without ‘built-in’ mental ‘knowledge’ and therefore that all knowledge needs to be developed by experience and perceptual processes which ‘observes’ life and makes sense of it. Plato (and other folk who became known as Platonists, or alternately rationalists) believed that ‘universals’ exist and occur which are independent of human life processes, and which are ‘present’ in our brain and mental structures from the time we were born and that these universals ‘give us’ our understanding of life and how ‘it’ works. For example, Plato used the example of a horse – there are many different types, sizes and colours of horses, but we all understand the ‘concept’ of a horse, and this ‘concept’ in Plato’s opinion was ‘free-standing’ and exists as a ‘universal’ or ‘template’ which ‘pre-figures’ the existence of the actual horse itself (obviously religion and the idea that we are created by some deity according to his plan for us would fall into the platonic ‘camp’ / way of thinking). This argument about whether ‘universals’ exist or whether we are ‘nothing’ / a Tabula rasa without developed empirical experience has never been completely resolved, and it is perhaps unlikely that it will ever be unless we have a great development of the capacity or structures of our mental processes and function.

Jung took the Platonist view, and believed that at a very deep level of the unconscious there were primordial, or ‘archetypical’, psychological universals, which have been defined as innate, universal prototypes for all ‘ideas’ which may be used to interpret observations. Similar to the idea that one’s body is created based on a template ‘stored’ in one’s DNA, in his collective unconscious theory the archetypes were the psychological equivalents of DNA (though of course DNA was discovered many years after Jung wrote about the collective unconscious and synchronicity) and the template from which all ideas and concepts developed, and which are the frame of reference through which all occurrences in the world around one are interpreted. Some archetypes that he (and others) gave names to were the mother figure, the wise old man figure, the hero figure, the ego and shadow (one’s positive and negative ‘sense of self’) and the anima and animus (the ‘other’ gender component of one’s personality) archetypes, amongst others. He thought that these were the ‘primordial images’ which both filtered and in many ways created one’s ‘world view’ and governed how one reacted to life. For example, if one believed that one’s own personality was that of a ‘hero’ figure, and ‘chose it’ as one’s principal archetype, one would respond to life accordingly, and constantly try to solve challenges in a heroic way. In contrast, if one based one’s sense of self on a ‘wise old man’ (perhaps to be gender indiscriminate it should have been described as a ‘wise old person’) archetype, one would respond to life and perceived ‘challenges’ in a ‘wise old man’ way rather than in a ‘heroic figure’ way. He came to develop these specific archetypes by examining the religious symbols and motifs used across different geographically separated tribes and communities, and finding that similar ‘images’, or ‘archetypes’ as he called them, occurred across these diverse groups of folk and were revered by them as images of worship and / or as personality types to be deified. Jung suggested that from these ‘basic’ archetypes an individual could create their own particular archetypes as they developed, or that one’s ‘self’ could be a combination of several of them – but also that there were specific archetypes that resided in each individual, were similar across all living individuals, and were conservatively maintained across generations as ‘universals’.

Jung went even further in exploring the ‘oneness’ of all folk with his theory of synchronicity, which suggested that events are ‘meaningful coincidences’ if they occur with no (apparent) causal relationship but appear to be ‘meaningfully related’. He was always somewhat vague about exactly what he meant by synchronicity. In the ‘light’ version he suggested that the archetypes which are the same in all people allow us all to ‘be’ (or at least think) similarly. In the ‘extreme’ version of the theory (which was also called ‘Unus mundus’, Latin for ‘one world’) it is suggested that we all belong to an ‘underlying unified reality’ and are essentially ‘one’, with our archetypes allowing our individual ‘reality’ to emerge as perceptually different from that of other folk and unique to us – but this archetype-generated reality is illusory and ‘filtered’, and comes from the same ‘Unus mundus’ in which and of which we all exist, and to which we all eventually return. He based this idea on events similar to those which I described above as happening to me, where friends contacted him when he was thinking of them, and where events happened to geographically separated folk that were so similar that, to him, the laws of chance and statistical probability could not explain them away. While these theories may appear somewhat ‘wild’ in their breadth of vision, it is notable that Physics as a discipline explores this very concept of ‘action at a distance’ through ‘nonlocality’ theories, which are defined as the concept that an object can be moved, changed, or otherwise affected without being physically touched by another object. The theories of relativity and quantum mechanics, whether one believes them or not, grapple with these very concepts, which similarly, as described above, underpin Jung’s theory of synchronicity.

It is very difficult to either prove or refute Jung’s theories of the collective unconscious, archetypes, and synchronicity, and they have therefore often been given ‘short shrift’ by the contemporary scientific community. But Jung is not to blame for the fact that even today our neuroscience and brain and mental monitoring devices are so primitive that they have not helped us at all to understand either basic brain function or how the rich mosaic of everyone’s own private mental life occurs and is maintained, and he would say it is the fact that we each ‘choose’ different archetypes for our own identity, and as a filter of life, that makes it ‘feel’ to us as if we are isolated individuals living a discrete and ‘detached’ life, and makes us perceive that our life is ‘different’ from all others. It has also been suggested that the reason why we have similar beliefs, and make people out to be heroes, or wise men, or mother figures, in our lives, is not because of archetypes, but rather because we have similar experiences and respond on a continuous basis to the symbolism that is ‘seen’ during our daily life – symbolism which is evident in churches and religious groups, in politics and group management activities, and in advertising (marketers have made great use of archetypes to influence our choices in how they create adverts since Jung suggested these concepts – think of the use of snake and apple motifs, apart from the kind mother or heroic father archetypes which are so often used in adverts). Jung would answer in a chicken-and-egg way, and ask where all these symbols, motifs and group responses originated from if they were not created or developed from something deep inside us / our psyche. His theory of synchronicity has also been criticized by some as being confused with pure chance and probability, or as an example of confirmation bias in folk (a tendency to search for and interpret new information in a way that confirms one’s preconceptions), and the term apophenia has been coined to describe the mistaken detection of meaning in random or meaningless data. But how then does one explain my friend writing to me this week when I had been thinking about him a day or two before his email arrived, or how, when I am battling to understand a psychological concept, the psychologist I work with posts an explanation of exactly what I am battling with on Facebook (even though I have never told her I am working on understanding these concepts this week), or how the ‘feeling’ that someone is watching one occurs, and when one turns around one finds that they are indeed doing so? These may indeed be chance, and I may be suffering from ‘apophenia’, but the opposite may also be true.

I have been a scientist and academic for nearly thirty years now, and have developed a healthy scepticism and ‘nonsense-ometer’ for most theories and suggestions which seem outrageous and difficult to prove with rigorous scientific measurements (or for which such measurements are lacking). But there is something in Carl Jung’s theories of the collective unconscious, archetypes and synchronicity that strikes a deep chord in me, and my ‘gut feel’ is that they are right, even though with our contemporary scientific measuring devices there is no way they can be conclusively proved or disproved. Perhaps this is because I want to, and enjoy, ‘connecting’ with folk, and it is caused by some inherent psychological need or weakness in my psyche (or because I have chosen the wrong ‘archetype’ / my current sense of self does not ‘fit’ the life I have chosen, and this creates a dissonance that makes me want to believe that Jung was right – how’s that for some real ‘psychobabble’!). But this morning my wonderful daughter, Helen (age 8), gave me a card she had made at school after all the girls in her class had been given a card template to colour in, and the general motif / image on the card (and I assume on all the printed cards) was that of a superman – it’s difficult not to believe that such a chosen ‘hero’ motif provides evidence for an archetype when it is selected by a school-teacher as what kids should use to describe their father (though surely I, like most dads, am not deserving of such a description). This afternoon I will take the kids and dogs for a walk around the dam near where I live, and will very likely raise my hands to the water and wind and sky around me when I do so, much as it is likely that the folk who will be going to church at the same time will be raising their hands to their chosen God, and those going to watch their team’s football match this afternoon will raise their hands to the sky when their team scores – all doing what surely generations of our ancestors did in the time before now. While we all appear to act so differently during our routine daily life, there is always a similar response amongst most folk (excluding psychopaths, but that is for another article / another day) to real tragedy, or real crises, or real good news, when it occurs, and so often folk will admit, if pushed, that they appeal either to a ‘hero’ figure to protect or save them in times of danger, or to a ‘mother’ figure to help ‘heal their pain’ after tragedy occurs, and these calls for help / succour are surely archetype related (and indeed it has been suggested that the image of God has been created as a ‘hero’ or ‘father’ figure out of an archetype by religious folk – though equally, religious folk would say that if there are archetypes, they may have been created in their God’s image).

Our chosen archetypes create a filter and a prism through which life and folk’s behaviour might appear different, and indeed may be different, but at the level of the hypothesized ‘collective unconscious’ in all of us there is surely similarity, and perhaps, just perhaps, as Jung suggested, we are all ‘one’, or at least mystic bonds are indeed connecting us at some deep level of the psyche, or at some energy level we currently don’t understand and can’t measure. How these occurred or were generated as ‘universals’, as per the thinking of Jung and Plato, is perhaps for another day, or perhaps another generation, to explain. Unus mundus or Tabula rasa? Collective unconscious or unique individual identity? Mystic connecting bonds or splendid isolation? I’ll ponder these issues as I push the ‘publish’ button and send this out to all of you, in the hope that it ‘synchronises’ in some way with at least some of you that read it – though of course, via Jung’s ‘mystic bonds’, you may already be aware of all I have written!
