Category Archives: History

Plato’s Horse And The Concept of Universals – Can You Have Life As We Know It Without Rules That Govern It

My young daughter’s precious Labrador puppy, Violet, has grown up, and recently turned two. As a family we have traditionally had Schnauzers as pets, and it’s been strange but nice to have a different breed around the home, and Labradors have great personalities. What struck me forcibly when Violet came into our life was that while she has a very different shape and form to our two Schnauzers, she is as instantly recognisable as a dog as all our dogs are. It struck me again when the family watched a program on the one hundred most loved dog breeds in the UK (for those interested, Labradors came in first) that while each of the different breeds had very different characteristics – think of a Chihuahua as compared to a Great Dane, or a Pug compared to a German Shepherd – they were all instantly recognisable by our family members watching it, and I am sure by just about all the folk who watched the program, as being dogs rather than cats, or llamas, or sheep. In the last few years of my academic career (and perhaps at a subconscious level for my entire research career), after having been an Integrative Control Systems scientist for most of my career, trying to understand how our different body systems and functions are controlled and maintained in an optimal and safe manner, I have come to understand, and have been exploring the concept, along with great collaborators Dr Jeroen Swart and Professor Ross Tucker, that perhaps general rules are operating across all body system control mechanisms, whatever their shape or form, and we recently published a theoretical research paper which described our thoughts. In my new role as Deputy Dean of Research at the University of Essex, I am fortunate to be working with the Department of Mathematics, helping them enhance their research from an organisational perspective, and it has been fascinating working with these supremely bright folk, seeing the work they do, and having it reiterated to me that even simple mathematical principles are abstract, and not grounded in anything in the physical world (for example, knowing that 1 plus 2 equals 3 does not need any physical activity for it to be always true). All of these recent activities have got me thinking of the long-pondered issue of universals, their relationship to the rules and regulation governing and maintaining life, and which came first: the rules, or the physical activity that requires rules and regulation in order to continue and be both organised and productive.

Universals are defined as a class of mind-independent (indeed human-independent) entities which are usually contrasted with individuals (also known as ‘particulars’ relative to ‘universals’), and which are believed to ‘ground’ and explain the relation of qualitative identity and resemblance among all individuals. More simply, they are defined as the nature or essence of a group of like objects described by a general term. For example, in the case of dogs described above, when we see a dog, whether it is a Labrador, Schnauzer, German Shepherd, or a myriad of other breeds, we ‘know’ it to be a dog, and the word dog is used to cover the ‘essence’ of all these and other breeds. Similarly, we know what a cat is, or a house, or shoes, despite each of these ‘types of things’ often looking very different to each other – there are clearly enough characteristics in each to define them by a universal defining name. Understanding universals gets even more complex, though, than merely thinking of them as a name or group of properties for a species or ‘type of thing’. Long ago, back in the time of antiquity, one of the first recorded philosophical debates was about universals, and whether they existed independently as abstract entities, or only as a term to define an object, species or ‘type of thing’. Plato suggested that universals exist independent of that which they define, and are the true ‘things which we know’, even if they are intangible and immeasurable, with the living examples of them being copies and / or imitations of the universal, each varying slightly from the original universal, but bound in their form by the properties defined by the universal. In other words, he suggested that universals are the ‘maps’ of structures or forms which exist as we see and know them, for example a dog, or a horse, or a tree, and that they exist in an intangible state somehow in the ‘ether’ around us, ‘directing’ the creation of the physical entities in some way which we have not yet determined and are not currently capable of understanding.

In philosophical terms, this theory of universals as independent entities is known as Platonic Realism. After Plato came Aristotle, who felt that universals are ‘real entities’, as Plato perceived them to be, but in his theory (known as Aristotelian Realism) he suggested that universals did not exist independent of the particulars, or species, or ‘things’ they defined, were linked to their physical existence, and would not exist without the physical entities they ‘represent’. In contrast to realism, Nominalism is a theory that developed after the work of these two geniuses (Nominalists are also sometimes described as Empiricists), which denied the existence of universals completely, and suggested that physical ‘things’, or particulars, shared only a name, and not qualities that defined them, and that universals were not necessary for the existence of species or ‘things’. Idealism (proposed by folk like Immanuel Kant) got around the problem of universals by suggesting that universals were not real, but rather were products of the minds of ‘rational individuals’ when thinking about whatever they were looking at.

This dilemma of both the existence and nature of universals has to date not been solved, or adequately explained, given that it is impossible with current scientific techniques, or perhaps with the psychological ‘power’ of our minds, to prove or disprove the presence of universals, and folk ‘believe’ in one of these different accounts of universals depending on their world view and point of view on life. Religious folk would suggest that the world is created in God’s ‘image’, and to them God’s ‘images’ would be the universals from which all ‘God’s creatures’ are created. In contrast, for those who hold to evolution, which is diametrically opposed to the concept of religion, it is difficult to believe in both evolution and the presence of universals, as evolution is based on the concept of need and error-driven individual genetic changes over millennia in response to that need, which led to different species developing, and to the variety in nature and life we see all around us. In the evolutionary model, therefore, the concept of universals (and the creation of the world by a God as posited by many religions) would appear to be counter-intuitive.

While a lot of debate has focused on ‘things one can see’ as the physical ‘particulars’ which are either a product of universals or not, there are more abstract activities which support the existence of universals independent of the mind or the ‘things that they are involved with’. For example, the work done by Ross, Jeroen and myself developed from the realisation that a core principle of all physiological activity is homeostasis, which is defined as the maintenance by regulatory control processes or structures of physiological or physical activity within certain tolerable limits, in order to protect the individual or thing being regulated from being damaged, or damaging itself. Underpinning all homeostatic control mechanisms is the negative feedback loop, in which, when a substance or activity increases or decreases too much, the change initiates other activity as part of a circular control structure which has the capacity to act on the substance requiring control, normalises or attenuates the changes, and keeps the activity or behaviour within required ‘safe levels’, which are set by homeostatic control mechanisms. The fascinating thing is that the same principle of negative feedback control loops occurs in all and any physical living system, and without it life could not occur. Whether gene activity, liver function, or whole body activity, all of which have very different physical or metabolic regulatory structures and processes, all are controlled by negative feedback loop principles. Therefore, it is difficult not to perceive that the negative feedback loop is a type of universal, but one that works by similar ‘action’ across systems rather than ‘creating’ a physical thing in its likeness. Mathematics is another area in which folk believe universals are ‘at work’, given that even the simplest sums, such as one plus two equals three, need no physical structure or ‘particular’ for them to always be such, and true. While we all use mathematical principles on a continuous basis, it is difficult to believe that such mathematical principles do not ‘exist’ in the absence of humans, or of any physical shape or form.
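To make the negative feedback principle a little more concrete, here is a minimal sketch in Python of the kind of loop described above. The set point, gain and starting value are purely illustrative numbers chosen for the example, not values from any particular physiological system; the point is only to show how a circular control structure pulls a regulated variable back towards its ‘safe level’, whichever direction it drifts in.

```python
# A minimal sketch of a negative feedback loop, using made-up illustrative numbers.
# A regulated variable (think of a substance concentration) is repeatedly nudged
# back towards a 'safe' set point whenever it drifts away from it.

def negative_feedback_step(level, set_point, gain=0.3):
    """Return the corrected level after one pass through the control loop."""
    error = level - set_point      # how far the system has drifted from the set point
    correction = -gain * error     # the corrective action always opposes the drift
    return level + correction

SET_POINT = 10.0   # hypothetical 'safe level' the controller defends
level = 14.0       # hypothetical disturbed starting value, well above the set point

for step in range(10):
    level = negative_feedback_step(level, SET_POINT)
    print(f"step {step + 1}: level = {level:.3f}")

# The printed values converge back towards 10.0; a starting value below the set
# point would be corrected upwards in the same way.
```

The same skeleton, with different ‘sensors’ and ‘effectors’ plugged in, is a reasonable cartoon of gene regulation, liver function or whole-body control alike, which is exactly the cross-system similarity described above.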

So where does all this leave us in understanding universals and their relevance to life as we know it? Perhaps one’s viewpoint regarding the existence of universals depends on one’s own particular epistemological perspective (understanding of the nature of knowledge and how it is related to one’s justified beliefs) and world view. Though I can in no way prove it, I believe in universals and would define myself as a Platonic Realist. This viewpoint comes from a career in science and from working with exceptional scientists like Jeroen Swart and Ross Tucker, getting to understand the exquisite and universal nature of the control mechanisms which keep our bodies working the way they do. However, I do not believe in any God or religion in any shape or form, and have greater faith in the evolutionary model, which is counter-intuitive relative to my belief in the presence of independent universals. Therefore, the potential similarities and differences between religion and universals, and evolution and universals, described above are clearly redundant for my specific beliefs, and there is probably similar confusion in core beliefs for many (particularly research-involved) folk. However, it is exciting to think (at least for me) that there may be universals out there that have no link to current activities or functions or species, and which may become evident to humans at some point in the future, by way of the development of new species or new ‘things’. Having said that, I guess it could be argued that if universals do not exist, progress and the evolution of ideas will lead us to new developments, species or ways of life in an evolution-driven, error-associated way. One cannot ‘see’ or ‘feel’ a negative feedback loop, or a maths algorithm, or a universal for even something as simple as a dog, which is perhaps why, to a lot of folk with a different epistemological viewpoint to mine, it is challenging to accept the presence, or indeed the necessity, of and for universals. But when I look at our Labrador, and ‘know’ that it is a dog as much as a Schnauzer, Chihuahua, or German Shepherd is, I feel sure there is the Universal dog out there somewhere in the ether that will perhaps keep my toes warm when I leave this world for the great wide void which may exist beyond it. And surely, given what a stunning breed they are, the Universal dog, if it exists out there, can only be a Labrador!


Strategy, Tactics And Objectives – In The Words Of The Generals, You Can’t Bake A Cake Without Breaking A Few Eggs

I have always enjoyed reading history, and particularly military history, both as a hobby and as a way of learning from the past in order to better understand the currents and tides of political and social life that ‘batter one’ during one’s three score and ten years on earth, no matter how much one tries to avoid them. Compared to folk who lived in the first half of the twentieth century, I perceive that we have lived our contemporary lives in an environment that is relatively peaceful, in the sense that there has been no world war or major conflict for the last 70 or 80 years, though the recent world-wide political fluxes, particularly in the USA and Europe / UK, are worrying, as is the rising nationalism, divisive ‘single choice’ politics, intolerance of minorities, and increasing number of refugees searching for better lives, all eerily reminiscent of what occurred in the decade before the American Civil War and both World Wars. I recently read (or actually re-read – a particularly odd trait of mine is that I often read books a dozen or more times if I find something in them important or compelling from a learning perspective) a book on the Western Allies’ European military strategy in the Second World War, and on the disagreements that occurred between the United States General (and later President) Dwight Eisenhower and British General Bernard Montgomery over strategy and tactics used during the campaign, and how this conflict damaged relations between the military leaders of the two countries almost irreparably. I also re-read two autobiographies of soldiers involved in the war, the first by Major Dick Winters, who was in charge of a Company (Easy Company) of soldiers in the 506th Parachute Infantry Regiment of the 101st US Airborne Division, and the second an (apparently) autobiographical book written by Guy Sajer (if that was indeed his name), a soldier in the German Wehrmacht, about his personal experiences first as a lorry driver, then as a soldier on the Eastern front in the Grossdeutschland Division, and was struck by how different the two books were in content compared to the one on higher European military strategy, and also how different the experiences were between Generals and foot soldiers, even though they were all involved in the same conflict. All this got me thinking of objectives, strategy and tactics, and how they are set, and how they impact on the folk that have to carry them out.

Both strategy and tactics are developed in order to achieve a particular objective (also known as a goal). An objective is defined as a desired result that a person or system envisions, plans, and commits to achieve. The leaders of most organizations, whether they are military, political, academic or social, set out a number of objectives they would like to achieve for the greater good of the organization they lead (though it is never acknowledged, of course, that they – the leaders – will get credit or glory for achieving the objective, and that this is often an ‘underlying’ objective in itself). In order to achieve an objective, a leader, or group of leaders, sets a particular strategy. There are a number of different definitions of strategy, including it being a ‘high level’ plan to achieve an objective under conditions of uncertainty, or making decisions about how best to use the resources available in the presence of often conflicting options, requirements and challenges in order to achieve a particular objective. The concept underpinning strategic planning is to set a plan / course of action that is believed to be best suited to achieve the objective, and to stick to that plan until the objective is achieved. If conditions change in a way that makes sticking to the strategy difficult, then tactics are used to compensate and adjust to the conditions while ‘maintaining’ the overall strategic plan. Tactics as a concept are often confused with strategy – but they are in effect the means and methods of how a strategy is implemented, adhered to, and maintained, and can be altered in order to maintain the chosen strategy.

What is strategy and what are tactics becomes challenging when there are different ‘levels’ of command in an organization, with lower levels having more specific objectives which are individually required in order to achieve the over-arching objective, but which require the creation of specific ‘lower-level’ strategy in order to reach the specific objective being set, even if that objective is a component of a higher-level strategic plan. From the viewpoint of the planners who create the high-level / general objective strategy, the lower-level plans / specific objectives would be tactics. From the viewpoint of the planners who set the lower-level strategy needed to complete a specific component of the general strategy, their ‘lower level’ plans would be (to them) strategy rather than tactics, with tactics being set at even lower levels in their specific area of command / management, which in turn could set up a further ‘debate’ about what is strategy and what is tactics at these even ‘lower’ levels of command. Even the poor foot soldier, who is a ‘doer’ rather than a ‘planner’ of any strategic plan or tactical action enacted as part of any higher level of command, would have their own objectives beyond those of the ‘greater plan’, most likely that of staying alive, and would have their own strategic plan to both fulfil the orders given to them and stay alive, and tactics of how to do so. So in any organization there are multiple levels of planning and objective setting, and what is strategy and what is tactics often becomes confused (and often commanders at lower levels of command find orders given to them inexplicable, as they don’t have awareness of how their particular orders fit into the ‘greater strategic plan’), and this requires constant management by those at each level of command.

It is perhaps a lack of clarity about the specific objectives behind the creation of a particular strategy which causes most command conflict, and this is what happened in the later stages of the Second World War, being one of the main causes of the deterioration of the relationship between Dwight Eisenhower and Bernard Montgomery. The objective of the Allies in Western Europe was relatively simple – enter Europe and defeat Germany (though of course the war was mostly won and lost on the Eastern front due to Russian sacrifice and German strategic confusion) – but it was the strategy of how this was to happen which led to the inter-Ally conflict, of which so much has been written. Eisenhower was the supreme Allied Commander, responsible for all the Allied troops in Western Europe and for setting the highest level of strategic planning. He decided on a ‘broad front’ strategy, where different Army Groups advanced eastwards across Europe after the breakout from Normandy, in a line from the northern coast of Europe to the southern coastline of Mediterranean Europe. Montgomery was originally the commander of all Allied ground troops in Europe, then after the Normandy breakout became commander of the 21st Army Group, which was predominantly made up of British and Commonwealth troops (but also contained a large contingent of American troops), and he favoured a single, ‘sharp’ method of attacking one specific region of the front (of course choosing an area for attack in his own region of command). Montgomery’s doctrine was that which most strategic manuals would favour, and Eisenhower was sharply criticized by military leaders both during and after the war for going against the accepted strategic ‘thinking’ of that time. But Eisenhower of course had not just military objectives to think about, but political requirements too, and had to maintain harmony between not just American and British troops and nations, but also a number of Commonwealth countries’ troops and national requirements. If he had chosen one specific ‘single thrust’ strategy, as Montgomery demanded, he would have had to choose either a British-dominated or American-dominated attack, led by either a specific British or American commander, and neither country would have ‘tolerated’ such ‘favouritism’ on his part, and this issue was surely a large factor when he decided on a ‘broad front’ strategy. There was clearly military strategic thinking on his part too – ‘single thrust’ strategies can be rapidly ‘beaten back’ / ‘pinched off’ if performed against a still-strong military opposition, as was the case when Montgomery chose to attack on a very narrow line to Arnhem, and this was more than a ‘bridge too far’ – the German troops simply shut off the ‘corridor’ of advance behind the lead troops and the Allies were forced to withdraw in what was a tactical defeat for them. Montgomery criticized Eisenhower’s ‘broad front’ as leading to, or allowing, the ‘Battle of the Bulge’ to occur, when the German armies in late 1944 counter-attacked through the Belgian Ardennes region towards Antwerp, and caused a ‘reverse bulge’ in the Allied ‘broad front’ line, but in effect the rapidity with which the Allies closed down and defeated this last German ‘counter-thrust’ paradoxically provided evidence against the benefits of Montgomery’s ‘single thrust’ strategy, even though he used the German Ardennes offensive to condemn Eisenhower’s ‘broad front’ strategy.
Perhaps Eisenhower should have been clearer about the political nature of his objectives and the political requirements of his planning, but then he would have been criticized (at least by his critics) for allowing political factors to ‘cloud’ what should have been purely military decisions, so like many leaders setting ‘high level’ strategy he was ‘doomed’ to be criticized whatever his strategic planning was, even if the ‘proof was in the pudding’ – his chosen strategy did win the war, and did so less than a year after it was initiated, after the Allies had been at war for more than five years before the invasion of Western Europe was planned and launched.

Whatever the ‘high level’ strategic decisions made by the Generals, the situation ‘on the ground’ for Company leaders and foot soldiers who had to enact these strategies was very different, as was well described in the books by Dick Winters (the book became a highly praised TV series – Band of Brothers) and Guy Sajer. Most of the individual company-level actions in which Easy Company participated bordered on the shambolic – from the first parachute drop into enemy-held France, where most of the troops were scattered so widely that they fought mainly skirmishes in small units, to operations supporting Montgomery’s ‘thrust’ to Arnhem, which were a tactical failure and resulted in them withdrawing in defeat, to the battle of Bastogne, which was a key component of the battle of the ‘Bulge’, where they just avoided defeat, sustained heavy casualties, and only just managed to ‘hold on’ until reinforcements arrived. A large number of the operations described were therefore not tactically successful, yet played their part in a grand strategy which led to ultimate success. The impact of the ‘grand strategy’ on individual soldiers was horrifyingly (but beautifully, from a writing perspective) described in Guy Sajer’s autobiography, a must-read for any prospective military history ‘buffs’ – most of his time was spent marching in bitter cold or thick mud from one area of the Eastern front to another as his Division was required to stem yet another Russian breakthrough, or trying to find food with no formal rations being brought up to them as the Wehrmacht’s operational management collapsed in the last phases of the war, or watching his friends being killed one by one in horrific ways as the Russian army grew more successful and more aggressive in their desire for both revenge and military success. There was no obvious pattern or strategy to what they were doing at the foot-soldier level, there were no military objectives that could be made sense of at the individual level he described, rather there was only the ‘brute will to survive’, and to kill or be killed, and only near the end did he (and his company-level leaders) realize that they were actually losing the war, and that their defeat would mean the annihilation of Germany and everything they were fighting for ‘back home’. Yet it was surely the individual actions of the soldiers in their thousands and millions who endured and died for either side that, in a gestalt way, led to the strategic success (or failure) planned for by their leaders and generals, even if at their individual level they could make little sense of the benefit of their sacrifice in the context of the broader tactical and strategic requirements, in the times when they could reflect on this, though surely most of their own thoughts were on surviving another terrible day, or another terrible battle, rather than on its ‘meaning’ or relevance.

One of the quotes that I have read in military history texts that has caused me to reflect most about war and strategy as an ‘amateur’ military history enthusiast is attributed to British World War Two Air Marshal Peter Portal, who, when discussing what he believed to be defective strategic planning with his colleague and Army equal Field Marshal Alan Brooke, apparently suggested that ‘one cannot make a cake without breaking some eggs’. What he was saying, if I understood it correctly, and if the comment can indeed be attributed to him, was that in order for a military strategy to be successful, some (actually, most of the time, probably many) individual soldiers have to be sacrificed and die for the ‘greater good’ which would be a successfully achieved objective. From a strategic point of view he was surely correct, and Generals who don’t take risks and worry too much about their soldiers’ safety can paradoxically often cause more harm than good by developing an overly cautious strategy which has an increased risk of failure and therefore an increased risk of more soldiers dying. But from a human point of view the comment is surely chilling, as each soldier’s individual death, often in brutal conditions, is horrific both to those that it happens to and to those relatives, friends and colleagues that survive them. Often, or perhaps most of the time, individual soldiers die without any real understanding of the strategic purpose behind their death, and with a wish just to be with their loved ones again, and to be far from the environment and actions which cause their death. The folk at senior leadership levels setting grand strategy require a high degree of moral courage to ‘see it through’ to the end, knowing that their strategy will surely lead to a number of individual deaths. The folk who enact the grand strategy ‘in the trenches’ need a high degree of physical courage to perform the required actions to do so in conditions of grave danger, which as a small part of the ‘big picture’ may help lead to strategic success and attainment of the set objectives, usually winning in a war sense. But every side has its winners and its losers, and there is usually little difference between these for the foot soldier or Company leader, who dies in either a winning or losing cause, with little knowledge of how their death has contributed in any way to either winning or losing a battle, or campaign, or war.

Without objectives, strategy and tactics, there would never be any successful outcome to any war, and a lot of soldiers would die. With objectives, strategy and tactics, there is a greater chance of a successful outcome to any war, but a lot of soldiers will still surely die. The victory cake always tastes wonderful, but sadly, to make such a ‘winners’ cake, many eggs do indeed need to be broken. It will long be controversial which is more important in the creation of the cake, the recipe or the eggs that make it up. Similarly, it will long be controversial whether a ‘broad front’ or ‘single thrust’ strategy was the correct strategic or tactical approach to winning the war in Western Europe. But the foot soldier would surely not care whether his or her death was in the cause of tactical or strategic requirements, or happened during a ‘broad front’ or ‘single thrust’ strategy, when he or she is long dead and long forgotten, and historians are debating which General deserves credit for planning the strategy, or lack of it, that caused their death. That’s something I will ponder on as I reach for the next of the books on war strategy that fill the bookshelf next to my writing desk, and hope that my children will never be in the position of having to be either the creators, or enactors, of military strategy, tactics and objectives.


The Collective Unconscious And Synchronicity – Are We All Created, Held Together And United As One By Mystic Bonds Emanating From The Psyche

Earlier this week I thought of an old friend and work colleague I had not been in contact with for many years, Professor Patrick Neary, who works and lives in Canada, and a few hours later an email arrived from him with all his news and recent life history detailed in it, in which he said he had thought of me this week and wondered what I was up to. Yesterday, in preparation for writing this article, I was reading up on and battling to understand the concept of the psychological ‘Shadow’, one of Carl Jung’s fascinating theories, and noticed a few hours later that Angie Vorster, a brilliant Psychologist we recently employed as a staff member in our Medical School to assist struggling students, had posted an article on the ‘Shadow’ in her Facebook support page for Medical Students. Occasionally when I am standing in a room filled with folk, I feel ‘energy’ from someone I can’t see, and turn around and a person is staring at me. Watching a video last night, in a scene about religious fervour, all the folk in a church were seen raising their hands in the air to celebrate their Lord. Earlier that afternoon I couldn’t help noticing that a whole stadium of people watching a rugby game raised their hands in the air, in the same way as those in the church did, to celebrate when their team scored the winning try. Sadly, perhaps because I read too much existentialism-related text when I was young, I don’t have any capacity to believe in a God or a religion, but on a windy day, when I am near a river or the ocean, I can’t help raising my hands to the sky and looking upwards, acknowledging almost unconsciously some deity or creative force that perhaps created the magical world we inhabit for three score years and ten. All of these got me thinking of Carl Jung, perhaps one of my favourite academic Psychologists and historical scientific figures, and his fascinating theories of the collective unconscious and synchronicity, which were his attempts to explain his belief that we all have similar psychological building blocks that are inter-connected and possibly a united ‘one’ at some deep or currently not understood level of life.

Carl Jung lived and produced his major creative work in the first few decades of the 20th century, in what some folk call the golden era of Psychology, when he and colleagues Sigmund Freud, Alfred Adler, Stanley Hall, Sandor Ferenczi and many others changed both our understanding of how the mind works and our understanding of the world itself. He was influenced by, and for a period was a protégé of, Sigmund Freud, until they fell out when Jung began distancing himself from Freud’s tunnel-vision view that the entire unconscious and all psychological pathology had an underlying sexual focus and origin. He acknowledged Freud’s contribution of describing and delineating the unconscious as an entity, but thought that the unconscious was a ‘process’ in which a number of lusts, instincts, desires and future wishes ‘battled’ with rational understanding and logical ‘thoughts’, all of which occurred at a ‘level’ beyond that perceived by our conscious mind. He went further though, and after a number of travels to India, Africa and other continents and countries, where he did field studies of (so-called) ‘primitive’ tribes, he postulated that all folk have what he called a collective unconscious, which contains a person’s primordial beliefs, thought structures, and perceptual boundary-creating ‘archetypes’, which were all universal, inherent (as they occurred in tribes and peoples which had not interacted together for thousands of years due to geographical constraints), and responsible for creating and maintaining both one’s world view and personality.

To understand Jung’s theory of the collective unconscious and its underpinning archetypes, one has to understand a debate that has not been successfully ‘settled’ since the time of Aristotle and Plato. Aristotle (and other folk who later became known as the empiricists) believed that all that can be known or occur is a product of experience and life lived. In this world view, the idea of the ‘Tabula rasa’ (blank slate) predominates, which suggests that all individuals are born without ‘built-in’ mental ‘knowledge’, and therefore that all knowledge needs to be developed by experience and by perceptual processes which ‘observe’ life and make sense of it. Plato (and other folk who became known as Platonists, or alternatively rationalists) believed that ‘universals’ exist which are independent of human life processes, which are ‘present’ in our brain and mental structures from the time we are born, and that these universals ‘give us’ our understanding of life and how ‘it’ works. For example, Plato used the example of a horse – there are many different types, sizes and colours of horses, but we all understand the ‘concept’ of a horse, and this ‘concept’ in Plato’s opinion was ‘free-standing’ and existed as a ‘universal’ or ‘template’ which ‘pre-figures’ the existence of the actual horse itself (obviously religion and the idea that we are created by some deity according to his plan for us would fall into the Platonic ‘camp’ / way of thinking). This argument about whether ‘universals’ exist or whether we are ‘nothing’ / a Tabula rasa without developed empirical experience has never been completely resolved, and it is perhaps unlikely that it ever will be unless there is a great development in the capacity or structures of our mental processes and function.

Jung took the Platonist view, and believed that at a very deep level of the unconscious there were primordial, or ‘archetypical’, psychological universals, which have been defined as innate, universal prototypes for all ‘ideas’ which may be used to interpret observations. Similar to the idea that one’s body is created based on a template ‘stored’ in one’s DNA, in his collective unconscious theory the archetypes were the psychological equivalents of DNA (though of course DNA was discovered many years after Jung wrote about the collective unconscious and synchronicity) and the template from which all ideas and concepts developed, and which are the frame of reference through which all occurrences in the world around one are interpreted. Some archetypes that he (and others) gave names to were the mother figure, the wise old man figure, the hero figure, the ego and shadow (one’s positive and negative ‘sense of self’) and the anima and animus (the ‘other’ gender component of one’s personality), amongst others. He thought that these were the ‘primordial images’ which both filtered and in many ways created one’s ‘world view’ and governed how one reacted to life. For example, if one believed that one’s own personality was that of a ‘hero’ figure, and ‘chose it’ as one’s principal archetype, one would respond to life accordingly, and constantly try to solve challenges in a heroic way. In contrast, if one based one’s sense of self on a ‘wise old man’ (perhaps to be gender indiscriminate it should have been described as a ‘wise old person’) archetype, one would respond to life and perceived ‘challenges’ in a ‘wise old man’ way rather than a ‘heroic figure’ way. He came to develop these specific archetypes by examining the religious symbols and motifs used across different geographically separated tribes and communities, and finding that these similar ‘images’, or ‘archetypes’ as he called them, occurred across these diverse groups of folk and were revered by them as images of worship and / or as personality types to be deified. Jung suggested that from these ‘basic’ archetypes an individual could create their own particular archetypes as they developed, or that one’s ‘self’ could be a combination of several of them – but also that there were specific archetypes that resided in each individual, were similar across all living individuals, and were conservatively maintained across generations as ‘universals’.

Jung went even further in exploring the ‘oneness’ of all folk with his theory of synchronicity, which suggested that events are ‘meaningful coincidences’ if they occur with no (apparent) causal relationship, but appear to be ‘meaningfully related’. He was always somewhat vague about exactly what he meant by synchronicity. In the ‘light’ version he suggested that the archetypes which are the same in all people allow us all to ‘be’ (or at least think) similarly. In the ‘extreme’ version of this theory (which was also called ‘Unus mundus’, which is Latin for ‘one world’) it is suggested that we all belong to an ‘underlying unified reality’, and are essentially ‘one’, with our archetypes allowing our individual ‘reality’ to emerge as perceptually different to other folk and unique to us, but this archetype-generated reality is illusory and ‘filtered’, and comes from the same ‘Unus mundus’ in which and of which we all exist, and to which we all eventually return. He based this observation on events similar to those I described above as happening to me, where friends contacted him when he was thinking of them, and when events happened to geographically separated folk that were so similar that to him the laws of chance and statistical probability could not explain them away. While these theories may appear to be somewhat ‘wild’ in their breadth of vision, it is notable that Physics as a discipline explores this very concept of ‘action at a distance’ in its ‘nonlocality’ theories, which are defined as the concept that an object can be moved, changed, or otherwise affected without being physically touched by another object. The theories of relativity and quantum mechanics, whether one believes them or not, are underpinned by these concepts, which similarly, as described above, underpin Jung’s theory of synchronicity.

It is very difficult to either prove or refute Jung’s theories of the collective unconscious, archetypes, and synchronicity, and they have therefore often been given ‘short shrift’ by the contemporary scientific community. But Jung is not to blame that even today our neuroscience and brain and mental monitoring devices are so primitive that they have not helped us at all to understand either basic brain function or how the rich mosaic of everyone’s own private mental life occurs and is maintained, and he would say it is the fact that we each ‘choose’ different archetypes for our own identity and as a filter of life that makes it ‘feel’ to us as if we are isolated individuals living a discrete and ‘detached’ life, and perceive that our life is ‘different’ to all others. It has also been suggested that the reason why we have similar beliefs and make people out to be heroes, or wise men, or mother figures, in our life, is not because of archetypes but rather because we have similar experiences and respond to our environment and to the symbolism that is ‘seen’ during our daily life, which is evident in churches and religious groups, in politics and group management activities, and in advertising (marketers have made great use of archetypes to influence our choices by how they create adverts since Jung suggested these concepts – think of the use of snake and apple motifs, apart from the kind mother or heroic father archetypes which are so often used in adverts) on a continuous basis. Jung would answer in a chicken-and-egg way, and ask where all these symbols, motifs and group responses originated from if they were not created or developed from something deep inside us / our psyche. His theory of synchronicity has also been criticized by some as being confused with pure chance and probability, or as an example of confirmation bias in folk (a tendency to search for and interpret new information in a way that confirms one’s preconceptions), and the term apophenia has been developed to describe the mistaken detection of meaning in random or meaningless data. But how then does one explain my friend writing to me this week when I was thinking about him a day or two before his email arrived, or how, when I am battling to understand a psychological concept, the psychologist I work with posts an explanation of exactly what I am battling with (even if I have never told her I am working on understanding these concepts this week) on Facebook, or how the ‘feeling’ one has that someone is watching one occurs, and on turning around one finds that they are indeed watching you? These may indeed be chance, and I may be suffering from ‘apophenia’, but the opposite may also be true.
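To give a feel for the ‘pure chance’ counter-argument mentioned above, here is a minimal back-of-the-envelope sketch in Python. Every number in it is invented purely for illustration (how often one idly thinks of an old friend, how likely any given friend is to make contact on a given day, how close in time a contact must be to feel ‘meaningful’); the point is only that, under assumptions like these, at least one thought-then-contact ‘coincidence’ per year becomes quite likely, which is exactly why critics invoke chance and confirmation bias.

```python
# A minimal sketch of the 'laws of chance' argument, with invented illustrative numbers.

p_contact_per_day = 1 / 365   # assumed chance a given old friend makes contact on any given day
thoughts_per_year = 200       # assumed number of idle 'I wonder how X is doing' moments in a year
window_days = 2               # a contact within ~2 days of the thought feels like synchronicity

# Probability that one particular thought is NOT followed by contact within the window
p_no_match = (1 - p_contact_per_day) ** window_days

# Probability of at least one thought-then-contact coincidence over the whole year
p_at_least_one = 1 - p_no_match ** thoughts_per_year
print(f"Chance of at least one 'synchronicity' in a year: {p_at_least_one:.0%}")

# With these made-up numbers the result is roughly two in three, so an occasional
# striking coincidence is what chance alone would predict.
```

Of course, as noted above, a calculation like this cannot disprove synchronicity either; it only shows that chance is a serious rival explanation, which is the critics’ point.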

I have been a scientist and academic for nearly thirty years now, and have developed a healthy scepticism and ‘nonsense-ometer’ for most theories and suggestions which seem outrageous and difficult to prove with rigorous scientific measurements (or the lack of them). But there is something in Carl Jung’s theories of the collective unconscious, archetypes and synchronicity that strikes a deep chord in me, and my ‘gut feel’ is that they are right, even though with our contemporary scientific measuring devices there is no way they can be either surely proved or disproved. Perhaps this is because I want to, and enjoy, ‘connecting’ with folk, and is caused by some inherent psychological need or weakness in my psyche (or because I have chosen the wrong ‘archetype’ / my current sense of self does not ‘fit’ the life I have chosen, and this creates a dissonance that makes me want to believe that Jung was right – how’s that for some real ‘psychobabble’!). But this morning my wonderful daughter, Helen (age 8), gave me a card she had made at school after all the girls in her class had been given a card template to colour in, and the general motif / image on the card (and I assume on all the printed cards) was that of a superman – it’s difficult to believe that such a chosen ‘hero’ motif does not provide evidence for an archetype when it is chosen by a school-teacher as what kids should use to describe their father (though surely I, like most dads, am not deserving of such a description). This afternoon I will take the kids and dogs for a walk around the dam near where I live, and will very likely raise my hands to the water and wind and sky around me when I do so, as much as it is likely that the folk who will be going to church at the same time will be raising their hands to their chosen God, and those going to watch their team’s football match this afternoon will raise their hands to the sky when their team scores – all doing what surely generations of our ancestors did in the time before now. While we all appear to act so differently during our routine daily life, there is always a similar response amongst most folk (excluding psychopaths, but that is for another article / another day) to real tragedy, or real crises, or real good news, when it occurs, and so often folk will admit, if pushed, that they appeal either to a ‘hero’ figure to protect or save them in times of danger, or a ‘mother’ figure to help ‘heal their pain’ after tragedy occurs, and these ‘calls for help’ / succour are surely archetype related (and indeed it has been suggested that the image of God has been created as a ‘hero’ or ‘father’ figure out of an archetype by religious folk – though equally religious folk would say that if there are archetypes, they may have been created in their God’s image).

Our chosen archetypes create a filter and a prism through which life and folks’ behaviour might appear different, and indeed may be different, but at the level of the hypothesized ‘collective unconscious’, in all of us, there is surely similarity, and perhaps, just perhaps, as Jung suggests, we are all ‘one’, or at least mystic bonds are indeed connecting us at some deep level of the psyche, or at some energy level we currently don’t understand and can’t measure. How these occurred or were generated as ‘universals’, as per the thinking of Jung and Plato, is perhaps for another day, or perhaps another generation, to explain. Unus mundus or Tabula rasa? Collective unconscious or unique individual identity? Mystic connecting bonds or splendid isolation? I’ll ponder on these issues as I push the ‘publish’ button, and send this out to all of you, in the hope that it ‘synchronises’ in some way with at least some of you who read it, though of course via Jung’s ‘mystic bonds’ you may already be aware of all I have written!


History and Historical Revisionism – Is What We Read Of The Past Ever A True Reflection Of How It Really Happened

This week I was alerted to a wonderful quote about books and reading – ‘the miracle of literature is that it can get you to understand, even a tiny bit, what it is like to be another human being’. My all-time favourite quote on reading before finding this one goes ‘I read to realize I am not alone’. I have been a ‘bookworm’ throughout just about all of my life. My mother often used to laugh in my pre-teen years, as whenever she called me, I would never hear her, as my head was always ensconced in a new book, and my mind always enthralled by tales created by great writers of the past, to a level that the fiction of what I was reading often claimed more of my attention than the reality around me. As I grew older and became an academic, my taste in reading changed completely, away from fiction to non-fiction, and I am sure whatever intelligence, viewpoints, perceptions and ways of reasoning I have developed are mostly down to what I have read and absorbed from reading (obviously social interactions, particularly with significant role models, either negative or positive, played their part in my development too). A significant discussion with a highly respected old family friend, Simon Pearce, in my early thirties, when he said to me I should read history to best understand life, had a profound impact on me, and while I have always been interested in history, perhaps because of this advice from someone whose intellect I greatly admire, in the last decade I have read a lot of history, and have indeed benefitted, I think, from doing so in many ways. But history, and the description of it, can be a ‘treacherous’ teacher, given that it is by nature reflective, dependent on the world-view and background of the historian who writes it, and a product of the contemporary zeitgeist of the period in which it was or is written, and one therefore has to be very careful of how much one ‘believes’ of what one reads of history as being truly representative of the events as they happened in the times they describe.

History is defined as the study of past events, especially of human affairs. The word history is thought to have come from the Ancient Greek word ‘historia’, meaning ‘inquiry’ or ‘judge’. Historians are folk who write about history, and it is still controversial whether a historian should be merely a chronicler or compiler of past events, or a critical analyst of them. Generally it is perceived that written documentation or transcripts of past events are necessary for historical accounts to be both assimilated and described, and events occurring prior to the presence of written records are described as ‘pre-historic’ and fall into the realm of archaeologically based academic work. We therefore have a relatively short period of historical ‘knowledge’, given that the earliest surviving texts were written only a few thousand years in the past, and the great period of human life and ‘history’ prior to these is virtually unknown, save what can be gleaned from archaeological digs and speculation from what is found in them. History is divided up into a number of fields of study, which from a generic perspective include comparative history (historical assessment of social and cultural entities that are not confined to national boundaries) and counterfactual history (the study of history as it might have happened had different circumstances arisen), and from a specific perspective include the history of particular epochs of time or the history of specific human activities (such as military or economic history).

Academic researchers studying the field of history occupy themselves with identifying and solving the philosophical conundrums related to studying history, such as what the correct ‘unit’ of study of the past is (for example, is it the individual human condition, or the prevalent culture of the time, or the activities of the nation or state and how they impacted on the individual and on other nations or states around them), and whether patterns or cycles of behaviour at either the individual or nation level can be determined from history. As described above, a ‘problem’ of history is that it is always written at a certain contemporary time, which will have a dominant social thinking and view of the past, and it is surely difficult for a historian not to be affected by this when writing their own account of whatever component of history they are involved with writing about. An even more post-modern view which has been suggested is that history as a concept is irrelevant from a generic perspective, as the study of history is always reliant on a personal interpretation of sources, and thus ‘history’ as a general concept is a redundant one. History writing itself often moves in ‘patterns’ of its own, with some epochs focussing more on ‘glorifying’ the successes of nations or ‘great’ individuals in history (and clearly many nations create ‘official’ historical publications as a way of glorifying their past, or justifying / ‘cleansing’ the more sordid components of their past), and with subsequent epochs of history writing challenging these ‘glorious’ interpretations of history in a more dispassionate and reasoned way.

A good example of the interest of history as a subject, of how it can be revised and manipulated for national or individual ‘gain’, and of how with reflection a more balanced interpretation of the true nature of history can be derived, became evident to me after ‘studying’, from a reading-of-history perspective, the role of Winston Churchill in World War Two. Churchill was, and perhaps still is, surely one of the most well-known figures in history in the Western world, and if you polled folk for their knowledge and opinion of him, they would say he was the person who saved Britain during the war, and / or led the country to ultimate triumph during the war in a heroic and masterful way (though even the knowledge of Churchill is becoming ‘dimmed’ with the passing of time, as it does with all people). My own interest in him, and the World War Two period, stemmed from growing up in the 60s and 70s with a father who had an interest in military history and was for a short period of time in the civilian force military, and with the knowledge that a grandfather had fought in World War Two and was interned for a long period of it. On the bookshelf in the home of my youth were all the volumes Churchill himself wrote on the history of World War Two in the decade after it ended, and I remember with fondness many discussions with my Dad, or between him and his friends, that I listened to way back then, describing or arguing about Churchill’s leadership during the war, and the merits of his place in the pantheon of successful military and political war leaders in general historical terms.

I had a mostly positive viewpoint regarding Winston Churchill and his part in ‘winning the war’ for most of my life because of these early experiences of ‘history according to Dad’, until I started reading more carefully other accounts of the events during the war and Churchill’s part in them. The most startling of these accounts, which very much changed my perspective on Churchill, were the war diaries of General (later Field Marshal) Alan Brooke, who was the Chief of the Imperial General Staff (CIGS) and military leader of all Britain’s ground forces, and worked in tandem with Churchill, who was the political leader. Diaries are fascinating, given that as long as they are not altered at a later point in time, they tell things ‘how they are’ on a daily basis, albeit with the particular viewpoint of the person writing them, and I have read and re-read Brooke’s diaries between 10 and 20 times to date (and they are about 1000 pages in length, so each reading was surely a ‘labour of love’), given how astonishing the information described in them is. For what became clear to me when reading them is that Churchill, in writing his own version of the ‘history of World War Two’ after it was complete, essentially wrote an autobiography giving his own interpretation of his own role during the war, and as such (like so many autobiographies) glorified his own role, attenuated or ignored his own responsibility for the more sordid or disastrous events Britain suffered or was part of during the war, and perhaps most shamefully, was not generous in acknowledging the role of the people around him in ‘winning the war’ (and I am talking person-wise, rather than country-wise – surely Russia can take almost 90 percent of the credit for ‘winning’ that war). Some of the most disastrous campaigns of the war – Norway and Greece for example – were shaped and driven by Churchill himself, yet from reading his books one would assume that the British and Allied Force Generals were almost solely to blame for these disasters, and that he was almost completely uninvolved in the strategic or tactical decisions that led to them. Throughout the war he constantly tried to push forward strategically appalling choices for campaigns – one example being his constant ‘push’ for an expedition against the ‘northern tip’ of Sumatra – which his military staff had to work daily to resist him initiating, and which would have dispersed the forces available in a disastrous ‘minor campaign’, similar to the Gallipoli and Antwerp campaigns in World War One, of which Churchill was similarly the architect. It is astonishing to read Brooke’s diary (and the diaries or personal war accounts of a number of other military and political staff of that time, most of which validate Brooke’s diary account of the war) and to see how many times his advisors and folk like Brooke had to spend most of their day ‘heading off’ Churchill or convincing him not to continue with his wild schemes, rather than what appeared to be the case when reading Churchill’s own written accounts of World War Two, in which it appeared as if Churchill was the architect of all successes, and his military staff merely carried out his great ideas. And this is to say nothing about Churchill’s role in the area bombing of Germany, or his astonishing ‘imperial’ (a nice word for racially biased) views on India, or his personal habits, or the injudicious views on most subjects freely imparted to all and sundry on an almost continuous basis.
If he had been a politician in modern times, with the daily media scrutiny politicians now face, he would surely not have lasted more than a few days before having to resign in disgrace and shame as a result of the utterances and behaviour he displayed as Prime Minister back then.

All of the fascinating and enjoyable time I have spent reading about this topic, apart from being a relaxation ‘tool’ in itself, did indeed, as our great old family friend Simon Pearce said it would, teach me a whole lot of lessons about not just history, but life itself. Firstly, it taught me that the character of any ‘great’ person, or indeed any person, is surely complex, and while someone like Winston Churchill surely had a number of attractive and positive traits, he also had a lot of negative and extremely selfish traits that, unless carefully ‘looked for’, would not ‘reach the light of day’ when reading most historical accounts either of his life or of World War Two. Secondly, it taught me that one needs to be cautious in believing only one account of anything, least of all that of the person who is telling the story / giving the account of how things happened. Thirdly, it taught me that history is often created by those involved in it who write about it afterwards in a way that benefits themselves in an unduly positive way (as they say, history is mostly written by the ‘winners’ of any event being written about). Fourthly, it taught me not to put anyone on a pedestal from reading about past events that they were involved in – as my great current work mentor, Professor Nicky Morgan, often reminds me, even the greatest leaders have ‘feet of clay’. Fifthly, it taught me never to have a fixed paradigm about anything from the past – my own interpretation of and ‘feeling for’ this period of history was very different at the time of hearing about the events as told by my father, or of reading Winston Churchill’s own books about World War Two as a teenager, compared to the more complex, less positive perspective I have of Winston Churchill and the events occurring during World War Two today, thanks to a reasonably extensive reading, over the last few years, of different sources of information about events occurring at that time. Finally, it made me think about the importance of diaries – a long-lost ‘art’ that perhaps needs to be revived – there is much to be gained from keeping a daily diary about events. If Alan Brooke had not spent a few minutes before bed each night writing up a description of his daily life working in close proximity to Winston Churchill in his diary, we would be the poorer for not having it, and our understanding of events way back then would remain simplistic and perhaps unbalanced.

There are surely, therefore, a lot of lessons one can learn not just about history and historical revisionism (as Churchill’s own post-war writings of events surely were), but also about understanding contemporary life, and how, in describing it, some folk who want to personally gain from the telling of it may be able to do so through how they subjectively describe events of which they were part. There is surely a positive gain from keeping a daily or weekly diary, so that one can be more sure of one’s own history, or at least of the events happening during a particular period of one’s past if one wishes to review it, than if one did not have a recorded history of it. Equally, one surely needs to be aware when reading the ‘official’ history of any person, organization, community or nation state that it may be written with a potentially (some would say surely) either subconscious or conscious / overt or covert bias (as much as it should also be remembered that each time one personally reflects on or writes about an experience one has been part of, it will surely also carry one’s own particular bias and perspective), and it should therefore always be read with caution. Reading, and for me particularly reading about history, is both one of the most enjoyable activities that I can ever do and the activity that I learn most from, but I know that a lot of what I read, particularly biographies, and certainly autobiographies, needs to be read with a large ‘pinch of salt’. So when I am done with writing this, I will look forward to taking up again, later today, the current historical tome I am enjoying reading. But, surely, I will read it with our salt-shaker very close to me!


Control of Movement And Action – Technically Challenging Conceptual Requirements And Exquisite Control Mechanisms Underpin Even Lifting Up Your Coffee Cup

During the Christmas break we stayed in Durban with my great old friend James Adrain, and each morning I would as usual wake around 5.00 am, make a cup of coffee, and sit outside in his beautiful garden and reflect on life and its meaning before the rest of the team awoke and we set off on our daily morning bike-ride. One morning I accidentally bumped my empty coffee mug, and as it headed to the floor, my hand involuntarily reached out and grabbed it, saving it just before it hit the ground. During the holiday I also enjoyed watching a bit of sport on the TV in the afternoons to relax after the day’s festivities, and once briefly saw highlights of the World Darts Championship, which was on the go, and was struck by how the folk competing seemed, with such ease and with apparently similar arm movements for each throw, to be able to hit almost exactly what they were aiming for, usually the triple twenty. When I got back home, I picked up from Twitter a fascinating article on movement control posted by one of Sport Science’s most pre-eminent biomechanics researchers, Dr Paul Glazier, written by a group of movement control scientists including Professor Mark Latash, who I regard as one of the foremost innovative thinkers in the field of the last few decades. All of these got me thinking about movement control, and what must be exquisite control mechanisms in the brain and body which allowed me, in an instant, to plan and enact a movement strategy to grab the falling mug before it hit the ground, and which allow the Darts Championship competitors to guide their darts, using their arm muscles, with such accuracy to such a small target a fair distance away from them.

Due to the work over the last few centuries of a number of great movement control researchers, neurophysiologists, neuroscientists, biomechanists and anatomists, we know a fair bit about the anatomical structures which regulate movement in the different muscles of the body. In the brain, the motor cortex is the area where command outflow to the different muscles is directly activated, and one of the highlights of my research career was when I first used transcranial magnetic stimulation, working with my great friend and colleague Dr Bernhard Voller, where we were able to make muscles in the arms and legs twitch by ‘firing’ magnetic impulses into the motor cortex region of the brain by holding an electromagnetic device over the scalp above this brain region. The ‘commands for action’ from the motor cortex travel to the individual muscles via motor nerves, using electrical impulses in which the command ‘code’ is supplied to the muscle by trains of impulses of varying frequency and duration. At the level of the individual muscles, the electrical impulses induce a series of biochemical events in and around the individual muscle fibres which cause them to contract in an ‘all or none’ way, and with the amount of force output from the muscle fibre that has been ‘ordered’ by the motor cortex in response to behavioural requirements initiated in brain areas ‘upstream’ from the motor cortex, such as one’s eyes picking up a falling cup and ‘ordering’ reactive motor commands to catch the cup. So even though the pathway structures from the brain to the muscle fibres are more complex than I have described here – there are a whole host of ‘ancient’ motor pathways from ‘lower’ brainstem areas of the brain which also travel to the muscles or synapse with the outgoing motor pathways, whose functions appear to be redundant to the main motor pathways and may still exist as a relic from the days before our cortical ‘higher’ brain structures developed – we do know a fair bit about the individual motor control pathways, how they structurally operate, and how nerve impulses pass from the brain to the muscles of the body.
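
For those who like to tinker, the idea that ‘all or none’ fibre twitches can still add up to a smoothly graded force, simply by varying the frequency of the impulse train, can be illustrated in a few lines of code. The sketch below is a toy illustration only, not a physiological model – the twitch shape, durations and firing rates are assumptions I have made up for the example – but it shows the basic principle of twitch summation: the faster the impulses arrive, the more the individual twitches overlap and the greater the average force.

```python
import numpy as np

# Toy sketch of twitch summation (not a physiological model): identical
# 'all or none' twitches are triggered by impulse trains of different
# frequencies, and the overlapping twitches sum into a larger average force.
# All shapes and numbers below are illustrative assumptions.

DT = 0.001                       # simulation time step (s)
DURATION = 1.0                   # one second of stimulation
t = np.arange(0.0, DURATION, DT)

# An idealised single twitch: quick rise, slower decay, peak value of 1.
twitch_t = np.arange(0.0, 0.3, DT)
twitch = (twitch_t / 0.02) * np.exp(1.0 - twitch_t / 0.02)

def mean_force(rate_hz):
    """Sum identical twitches triggered at a fixed firing rate and
    return the average 'force' over the stimulation period."""
    spikes = np.zeros_like(t)
    interval = int(round(1.0 / (rate_hz * DT)))   # samples between impulses
    spikes[::interval] = 1.0
    force = np.convolve(spikes, twitch)[: len(t)]  # linear twitch summation
    return force.mean()

for rate in (5, 10, 20, 40):
    print(f"{rate:>3d} Hz firing rate -> mean 'force' {mean_force(rate):.2f}")
```

Running it shows the mean ‘force’ rising as the firing rate rises, even though every individual twitch is identical – which is the essence of how a frequency ‘code’ can produce graded output from all-or-none events.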

However, like everything in life, things are more complex than what is described above, as even a simple action like reaching for a cup, or throwing a dart, requires numerous different muscles to fire either synchronously and / or synergistically, and indeed just about every muscle in the body has to alter its firing pattern to allow the body to move, the arm to stretch out, the legs to stabilize the moving body, and the trunk to sway towards the falling cup in order to catch it. Furthermore, each muscle itself has thousands of different muscle fibres, all of which need to be controlled by an organized ‘pattern’ of firing even within a single whole muscle. This means that there needs to be a coordinated pattern of movement across a number of different muscles and the muscle fibres in each of them, and we still have no idea how the ‘plan’ or ‘map’ for each of these complex patterns of movement is generated, where it is stored in the brain (as what must be a complex algorithm of both spatial and temporal characteristics, recruiting not only the correct muscles but also the correct sequence and timing of their firing to allow co-ordinated movement), or how a specific plan is ‘chosen’ by the brain as the correct one from what must be thousands of other complex movement plans. To make things even more challenging, it has been shown that each time one performs a repetitive movement, such as throwing a dart, a different synergy of muscle and arm movement actions is used, even if to the ‘naked’ eye the movement of the arm and fingers of the individual throwing the dart seems identical each time it is thrown.

Perhaps the scientist who made the most progress in solving these hugely complex and still not well understood control processes was Nikolai Bernstein, a Russian scientist working out of Moscow between the 1920’s and 1960’s, whose work was not well known outside of Russia because of the ‘Iron Curtain’ (and perhaps Western scientific arrogance) until a few decades ago, when research folk like Mark Latash (who I regard as the modern day equivalent of Bernstein both intellectually and academically) translated his work into English and published it as books and monographs. Bernstein was instructed in the 1920’s to study movement during manual labour in order to enhance worker productivity, under the instruction of the communist leaders of Russia during that notorious epoch of state control of all aspects of life. Using cyclographic techniques (a type of cinematography) he filmed workers performing manual tasks such as hitting nails with hammers or using chisels, and came to two astonishing conclusions / developed two brilliant movement control theories (actually he developed quite a few more than the two described here), which, had he been alive and living in a Western country, would or should surely have led to him getting a Nobel prize for his work. The first thing he realized was that all motor activity is based on ‘modelling of the future’. In other words, each significant motor act is a solution (or attempt at one) to a specific problem which needs physical action, whether hitting a nail with a hammer, or throwing a dart at a specific area of a dartboard, or catching a falling coffee cup. The act which is required, which in effect is the mechanism through which an organism is trying to achieve some behavioural requirement, is something which is not yet, but is ‘due to be brought about’. Bernstein suggested that the problem of motor control and action therefore is that all movement is the reflection or model of future requirements (somehow coded in the brain), and a vitally useful or significant action cannot be either programmed or accomplished if the brain has not created pre-requisite directives in the form of ‘maps’ of the future requirements, which are ‘lodged’ somewhere in the brain. So all movement is in response to ‘intent’, and for each ‘intent’ a map of motor movements which would solve this ‘intent’ is required – a concept which is hard enough to get one’s mind around understanding, let alone working out how the brain achieves this or how these ‘maps’ are stored and chosen.

The second of Bernstein’s great observations was what is known as motor redundancy (Mark Latash has recently suggested that redundancy is the wrong word, and that it should have been known as motor abundance), or the ‘inverse dynamics problem’ of movement. When looking at the movement of the workers hitting a nail with a hammer, he noticed that despite them always hitting the nail successfully, the trajectory of the hammer through the air was different each time, even though the final outcome was always similar. He realized that each time the hammer was used, a different combination of arm motion ‘patterns’ was used to get the hammer from its initial starting place to where it hit the nail. Further work showed that each different muscle in the arm was activated differently each time the hammer was guided through the air to the nail, and each joint moved differently for each hammer movement too. This was quite a mind-boggling observation, as it meant that whenever the brain ‘instructed’ the muscles to fire in order to control the movement of the hammer, it chose a different ‘pattern’ or ‘map’ of coordinated activation of the different muscles and joints in the arm holding the hammer for each hammer strike of the nail, and that for each planned movement, therefore, thousands of different ‘patterns’ or ‘maps’ of coordinated muscle movement must be stored, or at least available to the brain, with a different one apparently chosen each time the same repetitive action is performed. Bernstein therefore realized that there is a redundancy, or abundance, of ‘choice’ of movement strategies available to the brain for even a single movement, let alone complex movement involving multiple body parts or limbs. From an intelligent control systems perspective this is difficult to get one’s head around, and how the ‘choice’ of ‘maps’ is made each time a person performs a movement is still a complete mystery to movement control researchers.
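
For those who enjoy seeing the idea in concrete terms, the redundancy itself is easy to demonstrate with a toy kinematic sketch. The code below models a planar ‘arm’ with three joints reaching a fixed fingertip target; the segment lengths and target position are made-up numbers purely for illustration, and the sketch ignores muscles, dynamics and everything Bernstein actually measured. Its only point is that because the arm has more joints than the task constrains, a whole continuum of different joint configurations solves exactly the same problem – the abundance of ‘choice’ described above.

```python
import numpy as np

# A minimal geometric sketch of motor abundance: a planar three-joint 'arm'
# with made-up segment lengths reaching a made-up fingertip target. The target
# fixes only two coordinates, but the arm has three joints, so a whole family
# of joint configurations solves the same task.

L1, L2, L3 = 0.30, 0.25, 0.20          # segment lengths (illustrative, metres)
TARGET = np.array([0.45, 0.30])        # fingertip target (illustrative)

def distal_pair_ik(wrist_target):
    """Textbook two-link inverse kinematics for the two distal segments.
    Returns (absolute angle of segment 2, relative angle of segment 3),
    or None if the target is out of reach for this shoulder choice."""
    x, y = wrist_target
    cos_q3 = (x * x + y * y - L2 ** 2 - L3 ** 2) / (2 * L2 * L3)
    if abs(cos_q3) > 1.0:
        return None
    q3 = np.arccos(cos_q3)
    a2 = np.arctan2(y, x) - np.arctan2(L3 * np.sin(q3), L2 + L3 * np.cos(q3))
    return a2, q3

def fingertip(q1, q2, q3):
    """Forward kinematics for relative joint angles q1, q2, q3."""
    a1, a2, a3 = q1, q1 + q2, q1 + q2 + q3
    return (L1 * np.array([np.cos(a1), np.sin(a1)])
            + L2 * np.array([np.cos(a2), np.sin(a2)])
            + L3 * np.array([np.cos(a3), np.sin(a3)]))

solutions = []
for q1 in np.linspace(-np.pi, np.pi, 720):      # sweep the 'free' first joint
    shoulder_end = L1 * np.array([np.cos(q1), np.sin(q1)])
    distal = distal_pair_ik(TARGET - shoulder_end)
    if distal is None:
        continue
    a2, q3 = distal
    q2 = a2 - q1                                # convert to a relative angle
    assert np.allclose(fingertip(q1, q2, q3), TARGET)
    solutions.append((q1, q2, q3))

print(f"{len(solutions)} sampled joint configurations all reach the same target")
```

Each configuration the script counts corresponds, loosely, to a different ‘map’ the brain could in principle have chosen for the same task – and a real arm, with its many muscles per joint, has vastly more such choices than this three-joint cartoon.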

Interestingly, one would think that with training one would reach a situation where there was less motor variability, and a more uniform pattern of movement when performing a specific task. But, in contrast, the opposite appears to occur, and the variability of individual muscle and joint actions in each repetitive movement appears to be maintained or even increased with training, perhaps as a fatigue-regulating mechanism to prevent the possibility of injury occurring from over-using a preferentially recruited single muscle or muscle group. Furthermore, the opposite appears to happen after injury or illness: after, for example, one suffers a stroke or a limb ligament or muscle tear, the pattern of movements ‘chosen’ by the brain, or available to be chosen, appears to be reduced, and similar movement patterns recur during repetitive muscle movement after such an injury. This is also counter-intuitive in many ways, and is perhaps related to some loss of ‘choice’ function associated with injury or brain damage, rather than to damage to the muscles per se, though more work is needed to understand this conceptually, let alone functionally.

So the simple actions which make up most of our daily life appear to be underpinned by movement control mechanisms of the most astonishing complexity, which we do not understand well (and I have not even mentioned the also complex afferent sensory components of the movement control process which adjust / correct non-ballistic movement). My reaction to the cup falling and me catching it was firstly a sense of pleasure that despite my advancing age and associated physical deterioration I still ‘had’ the capacity to respond in an instant, and that perhaps the old physical ‘vehicle’ – namely my body – through which all my drives and dreams are operationalized / effected (as Freud nicely put it) still works relatively okay, at least when a ‘crisis’ occurs such as the cup falling. Secondly, I felt the awe I have felt at many different times in my career as a systems control researcher at what a brilliant ‘instrument’ our brains and bodies in combination are, and that whatever or whoever ‘created’ us in this way made something special. The level of exquisite control pathways, the capacity for and of redundancy available to us for each movement, and the intellectual capacity our brain possesses from just a movement control perspective (before we start talking of even more complex phenomena such as memory storage, emotional qualia, and the mechanisms underpinning conscious perception) are staggering to behold and be aware of. Equally, when one sees each darts player, or any athlete, performing their task so well for our enjoyment and their success (whether darts players can be called ‘athletes’ is for another discussion perhaps), it is astonishing that all their practice has made their movement patterns potentially more rather than less variable, and that this variability, rather than creating ‘malfunction’, creates movement success and optimizes task outcome capacity and performance.

It is in moments like the one I had sitting in a beautiful garden in Durban in the early morning of a holiday period, reflecting on one’s success in catching a coffee cup, that a sense of wonder is created about the life we have and live, and what a thing of wonder our body is, with its many still mystical, complex, mostly concealed control processes and pathways regulating even our simple movements and daily tasks. In each movement we perform are concealed a prior need or desire, potentially countless maps of prospective plans for it, and millions of ways it can be actualized, from which our brain chooses one specific mechanism and process. There is surely magic in life, not just all around us but in us too, which us scientist folk battle so hard to try and understand, but which is to date still impenetrable in all its brilliance and beauty. So with a sigh, I stood up from the table, said goodbye to the beautiful garden and great friends in Durban, and the relaxing holidays, and returned to the laboratory at the start of the year to try and work it all out again, knowing that I will probably be back in the same place next year, reflecting on the same mysteries, with the same awe of what has been created in us, surely still no further along in understanding it, and still pondering how to work it all out – though next year I will be sure to be a bit more careful where I place my finished coffee cup!


The Brain, The Mind, And Me – Where Are ‘We’ In The Convoluted Mass Of Neurons We Call The Brain

This week I had some fun time getting some basic research projects on the go, a welcome break from my now almost full-time management life, as much as I enjoy it. I was asked by my good friend and world-leading physiologist and exercise scientist, Professor Andy Jones, to comment on a theoretical article that suggested that the prefrontal cortex is important in pacing and fatigue processes, and to write a review article on brain function regulating activity, and both of these got the neurons firing in a pleasant way. For most of my career I have been a researcher, and while I describe my main research interest as understanding generic regulatory control mechanisms when asked about it, my research passion has always been the brain and how it functions, and how it creates ‘us’ and what we see, feel and experience as ‘life’. I will never forget the ‘buzz’ I got when working at the NIH in Washington DC with Austrian neurologist without peer, Dr Bernhard Voller, when we put a needle electrode into one of the muscles controlling eye movement of a subject and heard the repetitive ‘clicks’ of each action potential as it fired in order to control the muscle’s movement, or when we used transcranial magnetic stimulation to stimulate the motor cortex (basically a magnet placed on the skull which ‘fires’ electromagnetic waves into the brain) and saw muscles in the finger or foot twitching when we selectively targeted different regions of the motor cortex. I will never forget the feeling of excitement when, with Dr Laurie Rauch at the University of Cape Town, we first got good quality EEG traces from folk and saw the EEG change frequency and complexity when someone put their hands on the folk being tested. I will also always remember the wonder I felt (tinged with a degree of sadness for the rats which were sacrificed) working with Professor Viv Russell and Dr Musa Mabandla, when we saw quantitative differences in neurotransmitter levels in brain areas associated with motivation and drive between rats that had run to exhaustion and more ‘lazy’ ones that simply refused to run as much as others did. Having said all this, it’s amazing that after all these years, and so much research performed on the brain by so many top quality folk all round the world, we still have almost no idea of how the brain functions, of how and where the mind is and how it relates to the physical brain structures and processes, or of where ‘we’ and our ‘soul’ are in relation to this most complex organ in our body.

As everyone knows, the brain is an odd-shaped organ situated in the skull which consists of billions of neurons that connect with each other and with nerve fibres that ‘flow’ out to the body and regulate all our body systems, processes and functions. Information is sent through neurons via electrical signals (called action potentials) which create ‘coded’ messages and commands. In a somewhat strange structural arrangement, there is a gap (called a synapse) between each neuron where they connect to each other, and chemical substances called neurotransmitters fill the gap when electrical activity comes down the neuron, allowing the ‘message’ to be transferred to the next neuron with great fidelity, though the synaptic neurotransmitter activity can also amplify, moderate or attenuate the signal passing through it, in a manner which is still not well understood, but which is related to the type of neurotransmitter secreted at the synapse. The brain also ‘secretes’ chemical substances such as hormones and regulatory factors that go via the bloodstream to various peripheral organs in the body and can control their function in a slower but longer-acting way.
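
For readers who like a concrete picture, the ‘coded’ impulses described above are often introduced to students through the leaky integrate-and-fire model, one of the simplest textbook abstractions of a neuron. The sketch below is that toy model only, with made-up constants, and is not meant to capture real neuronal biophysics; it simply shows how a cell that integrates its input, leaks charge over time, and fires a fixed all-or-none spike whenever a threshold is crossed ends up carrying information in the frequency of its impulses rather than in their size.

```python
# Toy 'leaky integrate-and-fire' neuron: input charges the membrane, the
# charge leaks away over time, and when a threshold is crossed the neuron
# fires an all-or-none spike and resets. All constants are illustrative.

DT = 0.0001            # time step (s)
TAU = 0.02             # membrane time constant (s)
THRESHOLD = 1.0        # firing threshold (arbitrary units)
RESET = 0.0            # membrane value after a spike

def count_spikes(input_current, duration=1.0):
    """Simulate the neuron for `duration` seconds at a constant input
    and return the number of spikes fired."""
    v, spikes = 0.0, 0
    for _ in range(int(duration / DT)):
        # Leaky integration: drift towards the input, leak towards rest.
        v += DT * (input_current - v) / TAU
        if v >= THRESHOLD:         # all-or-none spike
            spikes += 1
            v = RESET
    return spikes

# Stronger input -> higher firing rate: the 'code' is carried by the timing
# and frequency of the spikes, not by the size of each individual impulse.
for current in (0.8, 1.2, 2.0, 4.0):
    print(f"input {current:.1f} -> {count_spikes(current)} spikes per second")
```

A weak input below the threshold produces no spikes at all, while progressively stronger inputs produce progressively faster spike trains – a crude but useful way to think about how command ‘codes’ of varying frequency can be generated in the first place.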

Though we do have some knowledge of the basic output and input functions of the brain, such as vision and hearing processes, and sensory inputs and motor outputs from and to the body, there is currently no unifying theory of how neural activity in the brain works in its entirety to control or create the complex activities associated with life as we know it, such as thinking, memory, desire, awareness or even basic consciousness. Before the 1700’s it was assumed that the brain functioned as a type of ‘gland’, based on the theories of the Greek physician Galen. In his model, the nerves conveyed fluids from the brain to the peripheral tissues (so he was right at least about the ‘secretory’ function of the brain). In the 1800’s, using staining techniques and the (then) recently developed light microscope, Cajal and Golgi showed that neural tissue was a network of discrete cells, and that individual neurons were responsible for information processing. Around the same time Galvani showed that muscles and neurons produced electricity, and von Helmholtz and other German physiologists showed that electrical activity in one nerve cell affected activity in another neuron it was in contact with via a synapse in a predictable manner. Two conflicting views of how the brain uses these electrically based neuronal systems to send commands or information were developed in the 1800’s. The first was reductionistic, suggesting that different brain regions control specific functions. This concept was based on the work of Joseph Gall, who also suggested that continuous use of these different brain regions for specific tasks caused regional hypertrophy (increased size). Gall suggested that this regional brain hypertrophy created bulges in the skull, which could be associated with the specific function of the underlying brain tissue. While this skull theory of his has ‘fallen by the wayside’, in later years Brodmann described 52 separate anatomically and functionally distinct regions of the cortex, and Hughlings Jackson showed that in focal epilepsy, convulsions in different parts of the body were initiated in different parts of the cerebral cortex. These findings were supported by the work of Penfield, who used small electrodes to stimulate different areas of the motor cortex in awake neurological patients and induced movements in different anatomical regions of the body (similar to the work we did at the NIH, albeit we did it in a more indirect / less invasive way).

The second and opposing view was that all brain regions contribute to every different mental task and motor function in an integrative and continuous manner. This theory was based on the work of Flourens in the 1800’s, and was described as the aggregate field theory. Recent research has shown that large areas of the brain communicate with each other continuously, using electromagnetic waves of different frequencies, during any task. Further support for the aggregate field theory comes from the concept that no activity is ever simple, and even a ‘simple’ motor task such as moving the hand to get something is the final common output of multiple behavioural demands such as emotional context, prior experience, sensory perception and homeostatic requirements, and therefore cannot be attributed solely to any specific / single region, except from a final output perspective. While the aggregate field theory fell into disfavour in the late 1900’s, due to the development of MRI, CT and PET scanning of the brain and these techniques’ ascendancy to being ‘the in thing’ in neuroscience / brain research over the last few decades, the ‘snapshot’ methodology associated with brain scanning using MRI and these other image-based techniques has contributed very little real understanding of how the brain functions, apart from creating ‘pretty’ pictures that show that certain brain regions ‘light up’ whenever a task is performed. Unfortunately, the same areas of the brain are often shown to be active when using these techniques during very different testing protocols, which creates a confusing and complex ‘picture’ of what is happening in the brain during even simple tasks.

Incredibly, therefore, we still know so little about how the brain works that this basic argument between ‘regional’ versus ‘general’ brain functionality has not been resolved, despite all the technological developments of the last few decades, such as those described above. Even more mysterious is how the ‘mind’ works, and there is still active debate about what the ‘mind’ is and how it relates to physical brain structures. The mind is defined as the cognitive faculties that enable consciousness, perception, thinking, judgement and memory. As will be obvious, this definition of the mind can at best be described as a conceptually ‘hazy’ one and does not do much to clarify things, but basically the mind is what ‘we’ are – the ‘me’ that makes our life feel as if it is ‘ours’ and that we are unique and our experiences and thoughts are ‘our own’. The debate still rages about whether the mind is ‘in’ the brain tissue – the monist or materialist doctrine – or ‘out’ of it – the dualist doctrine. The monist / materialist doctrine posits that everything we think, feel and ‘are’ can be and is found in the functioning and activity of the neural cells and neurons in the brain. The dualist theory posits that ‘we’ are an immaterial ‘spirit’, as described by Rene Descartes, that is related to but exists ‘out’ of the brain and body – a more spiritual interpretation of what ‘we’ are, and a theory which allows for concepts such as the ‘soul’. The clearest evidence for a strong relationship between physical brain matter and the mind is the effect of physical agents such as psychoactive and anaesthetic drugs and alcohol on the ‘mind’, and the effect of traumatic brain injury in certain areas of the brain on mental function and the mind. But, given that we have absolutely no idea where or how memories are stored in the brain, how ‘thought’ happens, or even what consciousness ‘is’, it is difficult to completely refute the dualist approach, even as a hard-nosed scientist, although it is a career ‘death sentence’ for any neuroscientist to suggest that dualism is a scientifically possible position. Religion is perhaps in many ways a derivative of this ‘explanatory gap’ between mind and brain function, and will continue to flourish until science eventually (if possible) proves the materialist / monist theory to be true or refutes the dualist theory with more evidence than we currently have.

So what do we know of the brain, the most brilliant and puzzling organ in the body, and its function? Sadly, after 25 years studying it, I have to be honest and say almost nothing, and anyone who says differently is not being honest or is deluding themselves (which of course would be an irony of note). Us neuroscientists are in many ways beholden to, and ‘straightjacketed’ in developing our brain and mind theories by, the laboratory investigative techniques currently available to examine whatever our area of interest is, and unfortunately in the brain research area these techniques are just not subtle enough, or conversely not complex enough, to allow us any more understanding today of how the brain works than was available, in many ways, one or two hundred years ago. It’s amazing that we know so much about heart, liver, and muscle function – indeed about any organ of the body – and so little about the brain, which remains such a seemingly impenetrable mystery. Most neuroscientists like myself eventually focus on examining specific areas of brain or mind function, perhaps to protect ourselves from a sense of being abject failures in our chosen discipline – which is why I describe my main area of interest as control theory rather than ‘brain function’ research to those that ask. But it surely will be some scientist, working with some new piece of equipment we are not yet aware of, who will have the ‘eureka’ moment for neuroscience, similar to what occurred in genetics / molecular biology in the 1950’s with the breakthrough in understanding the structure of DNA, which in turn led to how quickly molecular biology developed over the subsequent 60 years to its current status, and then we will have a clear understanding of how the brain works and how the mind fits into the puzzle. Whoever does have this ‘eureka’ moment will very much deserve their Nobel Prize. Until that time, I, and probably most research folk who are interested in basic brain function, will keep on telling our new neuroscience / physiology / exercise science students each year that as scientists us neuroscientists are dismal failures / the least successful of all the research folk working in academia, given how little of brain and mind function we know and understand, despite all our valiant endeavours and countless hours in the lab trying to work it all out.

But, having said this, I will also continue to marvel at the brain and mind, and be thankful that I had, and have, the career from a research perspective that I do, trying to work out and understand something as truly amazing as the brain, which is the ‘root of all life as we know it’, whatever life and our part of it is. In the mass of neurons in our brains, which work in some mysterious way and using codes we still need to ‘crack’, ‘we’ exist, feel, live and die. Unless of course the dualists are right, and us research folk have been fooling ourselves for a long time, and ‘we’ are just spirits that exist in our bodies for the length of time we are alive, before heading off for another adventure, either in another body, or in another world. Time, hard work, and perhaps a good dose of luck, will allow us neuroscience folk to eventually have a definitive opinion either way – however at this point in time, the mind is willing, but the contemporary brain appears to be too ‘weak’ to make or find that elusive ‘eureka’ breakthrough and know what itself, and indeed ‘we’, are all about!


Courage Under Fire – Both Physical And Moral Courage Are Complex Phenomena

The week past at work was a challenging one, with a number of different issues to deal with that were and are complex, and perhaps more political and moral than medical, and all of which needed, or do need, some moral courage to resolve. I have also been reading the autobiography of American president and most famous civil war general, Ulysses S. Grant (Personal Memoirs), and watched a video called American Sniper, about the USA’s most successful sniper, who showed great physical bravery, albeit in a morally challenging environment. The advertising for the video suggested that it would be a great father’s day gift, but while it was thought-provoking, I was left feeling distinctly ‘queasy’ after watching it, for a number of moral / ethical reasons (which the film subtly attempted to address). All three ‘events’ this week got me thinking about courage, how it is defined and what it really is. The dictionary definition of courage is the ability to disregard fear. Clearly, courage is related to fear, or at least to resisting the life-preserving emotion which the sensation of fear essentially is. Courage is also defined as the choice and willingness to confront agony, pain, danger, uncertainty and / or intimidation, and not ‘back away’ from any of these challenges. There are also perhaps different types of courage, with two broad categories being physical courage, defined as courage in the face of physical pain, hardship, death or threat, and moral courage, defined as the ability to act ‘rightly’ in the face of popular opposition, or of the potential for shame, scandal or discouragement as the consequence of enacting one’s moral standpoint. But many acts of heroism, and many human actions which have been defined as courageous, may be rooted in activity which is self-serving, or may occur in individuals who do not ‘feel’ fear to the degree that most folk do. In these cases the concept of courage becomes more complex, and may be underpinned by human impulses not as noble as they would be if ‘pure’ courage were the ultimate source of the actions.

To understand courage one also has to understand and acknowledge the existence of fear. Fear is defined as an emotion induced by a threat perceived to be a risk to life, status, power, security, wealth or anything else valuable to the individual who becomes aware of the threat, which causes changes in brain and organ function, and ultimately behavioural changes, such as freezing, hiding, running away from, or confronting, the source of the fear in order to attenuate it by removing the threat. There are physical symptoms of fear, including increased breathing rate, increased heart rate, increased muscle tension, ‘goose bumps’ and raised hair follicles, sweating, increased blood glucose, sleep disturbances and dyspepsia (nausea and ‘butterflies in one’s stomach’). All of these changes serve purposive functions, and result from primitive protective mechanisms known as the ‘fight or flight response’, which make the individual ‘ready’ to either flee or fight the danger which causes the development of these symptoms, with the sensation of all of these changes as a collective becoming the ‘feeling’ of the emotion we call fear. Fear is an important life-preserving complex emotion, without which both humans and animals would not last long in either wild or modern environments. It’s important to note that not all people ‘feel’ fear – for example sociopaths and psychopaths do not – while in some folk fear is felt to extreme levels, where it is defined as a phobia. A 2005 Gallup poll of adolescents in the USA between the ages of 13 and 17 suggested that the top fears of the folk interviewed were, in order, terrorist attacks, spiders, death, being a failure, war, criminal or gang violence, being alone, the future, and nuclear war. A further analysis of the top ten online searches beginning with the phrase ‘fear of…’, by Bill Tancer in 2008, described fear of flying, heights, clowns, intimacy, death, rejection, people, snakes, failure, and driving as being the most searched for. It is clear from these that folk have fears about a wide variety of ‘things’, some personal, some social, some physical and some psychological.

As described above, the ability to ‘stand up to’ one’s own personal fears, whatever they are, is described as courage. Courage appears to be a ‘learnt’ behavioural trait, with most folk remembering with clarity the first occasion they showed physical courage / stood up to the local bully that was tormenting them, and what they understood it to mean by doing so. For example, in his excellent book on Courage, the former Prime Minister of the UK, Gordon Brown, could pinpoint the age / date / time of the situation that first required him to be courageous and which made him aware of it as a concept. In the classic book on courage written by Lord Charles Moran (also well known for being Winston Churchill’s personal physician during World War Two), titled The Anatomy of Courage, four ‘orders’ of people were described based on how they showed physical courage, or the lack of it, according to his observations of soldiers under fire in World War One. These were firstly people who did not feel fear (today these would be called sociopaths / psychopaths), secondly people who felt fear but did not show it, thirdly people who felt fear and showed it but did their job, and fourthly people who felt fear, showed fear, and shirked their responsibilities. He perceived that the level of fatigue, or length of exposure to situations which induced fear (such as constant shelling during World War One), could ‘wear out’ any person, and could lead to anyone in one of the first three categories eventually ‘falling into’ the fourth. He suggested that imaginative / intelligent (sic) folk felt fear more than unimaginative ‘bovine’ individuals (one could add sociopaths to this latter group, though he did not discuss them), and that it was therefore more challenging for ‘imaginative’ folk to show courage, and perhaps more exemplary when they did. Finally, he felt courage was all ‘in the head’, that moral courage was one step ‘higher’ than physical courage and needed even greater ‘levels’ of whatever it was that created courage in someone, and that ‘few men had the stuff of leadership (moral courage) in them, they were like rafts to which all the rest of humanity clung for support and for hope’.

Moral courage is usually understood and enacted later in life, often when one is in a leadership position for the first time. For Ulysses S. Grant, it was in 1861, when he was confronted by a rival ‘rebel’ force led by a General whom he knew. Grant felt terrified that if he ordered an attack it could potentially fail, and that he would be both blamed for and responsible for the failure. He perceived that, as had been the case in his military career to that point in time, if he had been any rank lower than the General in command, he would have had no hesitation in acting on the orders given, but that it was very different when all the responsibility for success or failure rested on him, and for the first time he felt ‘moral fear’. In a life-changing moment for him, and perhaps for the history of the USA, when he finally ordered the attack, his troops found that the rebel General and his troops had deserted their camp and retreated, and Grant realized that his opposite number was as fearful as he was and had acted on this fear before Grant had. In Grant’s words, ‘From that point on to the close of the war, I never experienced trepidation upon confronting an enemy, though I always felt more or less anxiety. I never forgot that he had as much reason to fear my forces as I had his. The lesson was valuable.’ While some historians have pointed out that Grant may have taken this lesson too much to heart, and that he should have respected his enemies’ capacities more in later engagements, a lack of respect which perhaps resulted in the high levels of blood-letting in all the future battles Grant led, ultimately it was his moral courage that led to the war being won for the Union armies of the United States.

It must be noted that to take a stance or a way of leading that requires moral courage requires a belief that there are virtues higher than ‘natural’ ones that need to be protected, as the philosopher Hobbes pointed out. In the example of Grant, he was fortunate to win the war and be famous because of it, and in his case he was on the side of ‘good’, in the sense that the American civil war, while starting out ostensibly as a conflict about states staying in or withdrawing from the Union, was really about slavery and its abolishment in the rebel states, and there are few folk who would not agree that the Union cause, and therefore Grant’s, had the moral high ground in the conflict. There are other examples, such as religious or national wars, where the issue of moral courage becomes more ‘cloudy’, as when folk take a stand, maintain a conflict or start a war against other folk due to some religious or national belief or doctrine, which could be defined as morally courageous (and indeed physically too) from that person’s or nation’s perspective, but would be defined by other folk as the act of a zealot, or as misguided courage at best. There are also innumerable folk in history who took a morally courageous standpoint and ended up on the ‘losing’ side or died for their standpoints, or whose morally courageous standpoint was made in the context of a greater morally corrupt environment, and who received no reward or respect for doing so. An example of this would be the Japanese Kamikaze pilots during World War Two, who sacrificed their own lives by crashing their planes into Allied ships in order to save the Japanese empire. These folk must have been hugely brave, and must have believed their stance was morally correct (Japanese dogma during the war was that it was the Allies, rather than Japan, who were the aggressors). But most folk would now say, and said then, that the cause they were dying for was morally bereft. So for folk like these Kamikaze pilots, doing what was for them both a physically and morally courageous thing had no ‘upside’ in the long term. While this example is an extreme one, it perhaps does help explain why it is often so difficult to be morally brave in times where those against whom one takes a morally courageous standpoint are much stronger than the individual taking the stand, or when the moral standpoint is perceived by other folk to be an immoral one, or is later declared to be so, either for genuine or political reasons (and history is always written by the winners of any conflict or debate).

So how does all this help with the decisions one has to make on a daily basis, as surely most folk do, that are complex, have many facets, and require moral courage to take a particular viewpoint or to enact a particular change that will not go down well with most folk one works with or interacts with, even if one perceives it to be the morally correct one? Firstly, one needs to think very carefully about the issue requiring a decision or action, to be sure that one is making a difficult decision with the highest level of certainty in its correctness that one can possibly have. Secondly, one has to be aware of one’s own moral ‘blind spots’, and be sure that one is not doing something for personal gain or one’s own benefit when making a tough decision involving others or big groups of people that could be affected by one’s decisions. Thirdly, one has to weigh up the viewpoints, desires or ethical beliefs of the particular group of people about whom the decision needs to be made or action taken, or who are influencing one to make a decision, to be sure they are not out of kilter with the viewpoints of the greater society in general. Lastly, one has to be clear about the consequences of each potential decision, and whether one can live with these, even if it means a change to one’s lifestyle and circumstances which may affect not just oneself, but one’s family and loved ones, who will suffer if one is fired or even killed for taking a morally courageous standpoint. There are two opposing moral courage perspectives that could occur or be needed in each decision: firstly, to be morally brave from a societal or situational perspective, or secondly, to be morally brave in protecting one’s family and loved ones by not taking the morally brave societal or situational stance. So being morally courageous can often be both complex and paradoxical. Ultimately, one has to decide each time one is faced with a challenging situation that produces a fear of consequences, whether to avoid it, or to act. To not act is often prudent. To act requires moral courage, but as above, moral courage is often complex. As Pastor Martin Niemoller’s haunting words remind us: ‘First they came for the Socialists, and I did not speak out – because I was not a Socialist. Then they came for the Trade Unionists, and I did not speak out – because I was not a Trade Unionist. Then they came for the Jews, and I did not speak out – because I was not a Jew. Then they came for me – and there was no one left to speak for me’. Unless one is a sociopath, each of us feels fear the same as everyone else. Each one of us has to learn first physical courage, and later moral courage. Each one of us on a daily basis has decisions to make which require either physical or moral courage. Each decision we make, or do not make, causes ripples that affect both our lives and those around us. In a complex world, full of complex issues, especially where there is no clear wrong and right, or paradoxically particularly when it is obvious what is wrong and right, physical courage, and perhaps even more so moral courage, is often all that stands between societal annihilation and salvation, and, perhaps more importantly, underpins us attaining our own state of grace, whatever its level of importance or influence. To be is to do. Or was that to do is to be?

