The aim of the literature review is to examine previous research and theoretical discussion on existing theories of development, with a particular emphasis on how they relate to an individual’s awareness of self in the moment. The intention is to discover what unites and separates these various theories and approaches in order to form an opinion on self-construction from the perspective of Intention, Awareness, Choice and Response.
The review is broken down into four key sections corresponding to the four theories that inform and frame this research: Stage Development; Stage Transition; Intelligence; and Meta-Programmes. See Figure L.1. The literature within each section will be discussed as it applies to the research question in this thesis.
Figure L.1: Literature Review Flow
The focus of this study is an examination of these different approaches from an adult development perspective. In order to address the theoretical gaps in the adult development literature, the current chapter will focus on exposing the limitations of the existing theories, including the human elements of Laske’s (2008) framework, the adult extensions to Piaget’s (1932) work, the removal of scales from development in general, the movement between stages as they occur, rather than by description, and the use of heuristics as unconscious shortcuts for self-construction, which ultimately impacts personality.
The starting point for the literature review is a necessary examination of the fundamental theory of Stage Development that underpins this thesis and upon which the research question depends. Thus, in the first section, Stage Development is discussed as the foundation for adult thinking, stemming from Piaget’s (1932) work on how a child constructs a mental model of the world as a process for cognitive development. Piaget maintained that cognitive development stems from independent exploration in which children construct knowledge on their own, a position that overlaps with CDT. It is important to include Vygotsky’s (1978) work in this section as he opposed Piaget by emphasising the fundamental role of social interaction in the development of cognition, focusing on Social Development Theory, which also intersects with CDT. Commons, Richards & Armon (1984) suggested that stage is a property of subject behaviour or response separable from performance. This will be discussed from a CDT perspective.
Having identified and discussed the leading theories on Stage Development, it is then necessary to answer the natural question: how does one transition from a lower stage to the next? To answer this question, the second section of the literature review addresses existing research into Stage Transition. The review identifies a gap in the theories, specifically regarding the determinant factors of the movement between stages, to which CDT can provide an answer. This leads to questions about an individual’s intention and awareness in the moment and whether movement between stages can be a choice.
The general mechanisms underlying learning and problem solving depend on developmental processes, and as cognitive complexity increases through stage progression, an intention and awareness gap emerges for CDT. This invites discussion about intellect, hence the third section of the literature review.
The third section addresses theories of intelligence and how the theories on Stage Development and Stage Transition could potentially impact intelligence. This section examines the connection between complexity stages and how ‘smart’ a person is (Kaufman, 2015). In this section, a number of intelligence theories are explored as alternatives to self-awareness, with the literature focused on reasoning and problem-solving. This is an important section as it illustrates that one’s cognitive complexity precedes one’s perceived level of being ‘smart’. The questions being asked are: how does Multiple Intelligence (Gardner, 1983) challenge ‘g’ (Spearman, 1961), and can someone with high Dynamic Intelligence do both at choice regardless of theory?
Students are required to self-regulate and self-reflect, a process described and aided by metacognition, as it allows them to understand their process of knowledge construction (Touw, Meijer, and Wubbels, 2015). As the literature review focuses initially on post-graduate students, the move from children’s metacognition to adult metacognition is addressed, widening the gap into which CDT inserts its hypothesis. The debate between domain-general and domain-specific thinking is addressed, with an alternate approach discussed. The way in which adults utilise the shortcuts mentioned in the first section is continued, which opens the door for the final section.
The final section focuses on the foundations of meta-programmes and whence they came, the obvious gaps in their construction, and the lack of literature supporting them as a human endeavour. However, as there are humans to perceive them, they must therefore exist (Feldman-Barrett, 2017), and this position is exploited in the literature review: these ideas are reframed, renamed and repurposed to better suit the purposes of CDT, within which they are more useful. They are then linked to Piaget’s (1952) schemata by virtue of their unconscious heuristic intention and placed into the adult cognition arena.
Finally, the overarching themes are evaluated, leading to the suggestion for the most suitable research methodologies in chapter 2.
The problem of defining a stage and a stage sequence continues to be an important issue in developmental theory. Psychologists such as Piaget (1972), Flavell and Wohlwill (1969), Kohlberg (1969, 1981, 1984), Flavell (1971, 1972, 1976, 1982), Bickhard (1978, 1979), and Campbell and Richie (1983) have devoted substantial academic effort to it. This definitional work was important for Kohlberg and Armon (1984), and for Commons and Richards (1984a, 1984b). Kohlberg and Armon’s objective was to differentiate between hard and soft stages. Soft stage refers to development gained in response to an individual’s experiences arising from any number of factors, such as differences in personality, age, class, education and so on. Hard stage refers to developmental sequences that are said to arise out of an underlying intellectual framework. Hendry & Kloep (2007) suggest that development is domain specific and that a person might develop faster in one domain than another. However, life stage theories assume global patterns of development, and Arnett (2007) suggests that both are possible, as they are formed at different levels of abstraction.
Groups of adult development researchers known as neo-Piagetian theorists expanded Piaget’s work in investigating cognitive development in adulthood (beyond age 25) and provided evidence for up to four stages beyond Piaget’s formal operations stage. The stages are commonly known as post formal or post conventional stages (cf., Commons et al., 1984; Cook-Greuter, 1999; Kegan, 1982, 1994). Although Piaget focused his investigations on cognitive development, he recognised the importance of emotion as a central aspect of all activity and acknowledged that emotion and cognition function as interdependent systems (Basseches and Mascolo, 2009). This line of reasoning was followed by Cook-Greuter (1999), Kegan (1982), Kohlberg and Armon (1984), Loevinger (1976), Torbert (2004), and others, who broadened Piaget’s cognitive focus to include social-emotional, affective, and moral aspects under the umbrella of constructive-development theory. Here one can also invoke the work of Vygotsky: Shotter (2000) comments that there is a connection between social interaction, emotion, and applying value to knowledge. In other words, the individual can control their thinking according to value systems. The clarity of the value system may be enhanced by emotion and connection with others on a social level, carrying over to the depth and richness of relationships as they progress over time.
To apply Vygotskian theory to the framework for aspects of adult thinking also implies the need to explore how thinking influences behaving in context, such as post-graduate study. However, if thinking is considered a complex (yet, in action, simple) biological function of every human, one will see there are different types of thinking processes, including the reactionary and the emotionally intelligent. Piaget’s and Vygotsky’s sets of theories also support the activeness of knowledge: each individual will progress over time to reach a certain capacity of intelligence if they allow themselves an active role in the process (Byrnes, 2003). Some individuals may be stalled in their thinking complexity because they are not active, and may also not realise that they are not active. This also implies a connection between the biological action of thinking and applying this action to learning about the environment.
Another group, led by Commons and colleagues (Commons, Trudeau, Stein, Richards, and Krause, 1998; Commons and Richards, 1984, 2003; Commons and Pekker, 2008), developed a general theory of behavioral development focused on the content-free structure of task performances (i.e., irrespective of content such as emotional, cognitive, moral, or motor skills) and produced substantial research on post formal development.
The following section contains an overview of various theories following Piaget. At the risk of oversimplification, they are grouped into two categories: post conventional (or constructive-developmental) theories and post formal cognitive development theories. Cook-Greuter (1999) explained the two groups as follows:
Post conventional theories emphasize contextual and process-oriented forms of knowing, and increasingly turn attention to people’s inner life. They explore meaning making not only in terms of its mechanics, but also in terms of its human valence and experience. Some theories distinguish between understanding what is merely rationally defensible and logically consistent from what is perceived as meaningful or wise in mature living.
. . . To underline this distinction, I prefer to restrict the term “post formal” to theories of cognition that describe more complex, higher-order forms of logical analysis and reasoning (Commons and Richards, 1984; Fischer, 1980; Kohlberg, 1984), and to favor the term “post conventional” for theories that also deal with issues of meaning, value and experiential salience. (p. 30)
This is not to suggest that all the theories mentioned in this section fall neatly into one category or the other, but such delineation helps position this thesis within the dialectical tradition as an extension of the post formal cognitive line of research.
The work of Piaget (1954, 1976) had two major thrusts: constructivism and stage theory. He based his work on children’s development as they construct a view of the world. He used propositional logic as a model of formal operations, building on the work of Frege (1950) and Peano (1894), who attempted to generalise propositional logic in order to represent, most notably, arithmetic reasoning. Basseches (1984) and Kegan (1982) see the model as complex constructions of the world, where the child has the capacity to build upon this world. At the core of his work, Piaget conceptualises cognitive development as an extension of the biological process of adaptation (Dodonov & Dodonova, 2011). The child approaches the environment with an organisational capacity to process information that depends upon his or her ability to reconcile the details. Piaget’s model rests on four concepts: schemata, the organised plans or structures in which knowledge is stored and which control adaptation; assimilation and accommodation, the complementary processes by which new information is fitted into, or reshapes, existing schemata, further deepening understanding of the environment; and equilibrium, the balance towards which these processes work. Shayer (1997) notes that “Piaget’s own model of adaption, being the result of the dialectic of assimilation and accommodation, does seem to contain the notion that it is only the child’s own efforts that are the process of accommodation.” (p.35). It should be noted that Piaget did not specify whether the child developed from a position of awareness, just that their knowledge grows accordingly. His model, when referring to his concepts, does not allow for the differentiation between ‘spontaneous’ and ‘lack of conscious awareness’ (Ashford & LeCroy, 2009).
Piaget proposed a complex theory of assimilation, accommodation, and autoregulation that itself presupposes the existence of a postformal operational level. If Piaget’s own system operates above formal operations, using higher-level logic, then it is higher-level logic that is of use to psychologists trying to determine adult stages of development. In the framework proposed by Piaget (1971), any transformation of the existing cognitive structures had to be regarded as accommodation, which is how he defined the way one alters existing schemata as a result of new information.
The ability to adapt and grow within the environment also points to how such a model can apply to exploring thinking and development in adults which includes post-graduate students. Piaget seeing intelligence as non-fixed suggests we have the ability to adapt and develop concepts where new schemata are formed, and contribute to shifting the adaptation, where for a moment of progress, the individual may acknowledge feeling imbalanced. The question arises of how aware an adult might be of this development of new schemata and how in control they are of their intention in the process.
At the adult level, further review of how the concepts work also supports thinking in terms of a dynamic process. Allowing the information as symbols and objects to break down and associate with fixed concepts within the schema also promotes actively modifying on a constant basis (in the moment) where the individual can increase balance of knowledge as they have flexibility to accept new objects while retaining the foundation they started as a child (Basseches and Mascolo, 2009; Piaget, 1952, 1964, 1972).
Whilst intelligence can be increased, many of the tools for assimilation and adaptation of the environment are formed early in childhood, and therefore will impact the adult mind later (Carey, Zaitchik & Bascandziev, 2015). Piaget’s four major stages (sensorimotor, preoperational, concrete-operational, and formal operational) take shape between infancy and adolescence (Piaget & Inhelder, 1969). He states that these stages are hierarchical and irreversible. However, more recent research has revealed that cognitive development is not necessarily a set of discrete stages, does not occur uniformly across all domains of thinking, and is highly dependent upon individual experience as a contributor (Sinnott, 2010). Burton (2003) states, in his book State of Equilibrium, that the original seven cognitive-perceptual styles identified by Piaget play a crucial role in the shaping of our early personality, and that the basic operations formed during the first seven years of life are the essential ingredients of most [of our] problem states. He goes on to say that these states are primary, and that they function at the effect of these early cognitive intentions.
Piaget postulated that each stage creates a hierarchical sequence, with each subsequent stage integrating the previous stage’s structure into a higher and more differentiated form, without which the process cannot continue (Lourenço, 2016). However, he is remiss in defining how the child or learner takes from the environment the information they deem relevant and acts upon it (Halpenny and Pattersen, 2013). Yet in the action, the child has control over shaping the experience as per his or her needs and is able to gain knowledge through the understanding of such constructs. There is thus a level of social interaction and engagement in the process, where there is a time component to developing the knowledge. This is an important distinction of a child’s capacity to perform dialectical thinking (Basseches, 1984), yet it is not clearly defined in Piaget’s explanation.
In his earlier theories, Piaget saw an issue with the level of progress that can be made after a certain age of maturity. He later revised this perspective but did not offer reasoning on how cognitive change impacts the adult’s ability to think (Cartwright, Galupo, Tyree, and Jennings, 2009; Piaget, 1972), and this is therefore a limitation in his theory. Piaget (1987) said that development never ends, even after the attainment of formal operations; however, he missed the opportunity to investigate the developmental processes of adults, as distinct from children and adolescents, by excluding more mature thinkers. This mantle was picked up by Kegan (1982), who demonstrated with his Levels of Adult Development theory that humans are capable of continued cognitive growth well beyond 80 years of age. Laske (2009) went one step further and demonstrated the difference between cognitive and social-emotional growth in mature adults, stating that, according to his Constructive Development Framework, maturation is possible beyond the age of 25, but the highest levels of cognitive complexity are not available to those under 40 years of age, as they lack the experience necessary to make the requisite connections (Laske, 2007).
Piaget suggested a duality of transitional steps. To describe transition, this model elaborated on and systemised the dialectical strategies described in the Piagetian probabilistic transition model (Flavell, 1963, 1971). The systemisation of the sub-steps is based on choice theory and signal detection (Richards & Commons, 1990). Although each task can be broken down into a myriad of subtasks (Overton, 1990), the following simple example has three subtasks: A, B, and C. Piaget conceived stage transition as follows:
A is an action from the initial stage. B is a complementary action or the negation of A (Not-A). For example, in a mathematics class, A might be an addition task and B might be multiplication. So, when presented with the problem 2 × (3 + 4), the steps would be to assert A, assert B, alternate A and B depending on the situation, and finally, coordinate A with B.
However, Piaget missed the potential for the task to be ‘thinking’. If one considers thinking to be the task, what then are the components ‘A’ and ‘B’? One might assume at this stage that they are shortcuts in a child’s thinking. Piaget also ignores the intention of the action ‘B’. What is the purpose of doing B for the individual? If it is an instruction from a teacher, then the intention is not necessarily to grow beyond the stage, but to do as one is told. The transition is, in itself, an intention, albeit an unconscious one, and it could be argued that directed growth is extrinsic, not intrinsic, and as such, not growth but tuition. This raises the question of the awareness of the child to make the changes necessary, and of their conscious direction of intention to grow in context.
It has been argued that what really underpins development is simply that, as children age, they increase their processing capacity, such as memory or attention (see Keating, 1990). It is not so much changes in formal logical skills as it is ongoing neural development (Byrnes, 2003). However, it is not clear whether this physical growth helps to create intention and awareness within the child’s thinking, or whether it remains relatively out of awareness.
In a study on vertical décalage (a child using the same cognitive function in different stages across development), Redpath and Rogers (1982) determined that older children are more capable than younger children due to the greater experience in their years, which seems an obvious outcome. This is especially true if the child has direct experience of the area being tested. Principally, the development of the prefrontal cortex was key to the capacity and capability of the children to develop a more dynamic awareness of the subject. It seems unsurprising that Piagetian development theory might be closely aligned with changes in the physical brain (Bolton & Hattie, 2017), in particular the prefrontal cortex and its associated connections. This raises the question of whether children’s developmental needs should serve as a guide rather than a steadfast method of increasing capacity, if one’s brain is going to grow with age regardless of any academic intervention. The natural question arising from this is whether cognitive intentions are the result of experience rather than of knowledge acquisition or downloading, and whether they are a specific entity in their own right.
It is evident that Piaget’s theory enjoys widespread support (Brainerd, 1978; Lourenço & Machado, 1996), however, it also has a number of weaknesses: his inability to separate memory from logic (Bryant & Trabasso, 1971); the assumption that children exist at only one stage at a time (Case, 1992; Flavell, 1993); and the lack of acknowledgment of the impact of cultural context (Dasen, 1975; Dasen & Heron, 1981; Mishra, 1997; Price-Williams, 1981; Price-Williams, Gordon, & Ramirez, 1969).
Piaget was primarily an epistemologist and was concerned with the emergence of new ways of knowing (1950) and how they became necessary once they were constructed (1978). Over the decades, Piaget’s theory has come under criticism due to limiting how the highest cognitive stages of development are defined. Kohlberg (1973) comments:
the issue with Piaget’s work as a model for cognitive development essentially leaves adult capacity for thinking out of the equation if ‘formal operations’ were the highest stage attainable. (PAGE NUMBER)
The implication that thinking stops at a certain age and a certain stage was refuted throughout the 1950s and 60s. The argument remains, according to Kohlberg (1973), Loevinger (1976) and Perry (1970), that Piaget’s theory fails to support the higher ranges of capacity for adult development, and for higher thinking and learning practices, because it fails to define higher-order adult cognitive development. Piaget’s lack of attention to the full capacity and capability of adults limits his theory, according to Commons, Richards & Armon (1984), and leaves the door open for questions based around a child’s and an adult’s intention in the moment, their awareness of this intention, and how they might choose to respond in context.
Today, there is general consensus that mental growth continues in adulthood (Horn, 1982; Schaie, 1996) and that cognition is not the only facet of mental development (Irwin & Sheese, 1989; Laske, 2009). Piaget’s own critique of his work centres on ‘novelty’, where he wrote: “For me, the real problem is novelties, how they are possible and how they are formed” (Piaget, 1971). This raises more questions: what is the person looking for? Are they sorting for sameness or difference? It would appear that Piaget is suggesting that a child with a need to sort for difference will notice novelty rather easily, which could be interpreted as an intention in the moment, whether conscious or unconscious.
Basseches (1984) sought to explain how the formal operations stage narrows thinking to only a specific range of problems and thus cannot be equated with epistemological maturity (p. 45). However, in this way Piaget’s model remains subordinate to other models of cognitive processing, as there is evidence that reasoning capacity intensifies in adulthood (Kegan, 1994). Simply put, adults think in more highly complex ways than children, and this cannot be left out of (hard) stage development. In addition, the complexity of children’s thinking and adults’ thinking might differ from an experiential perspective, and it can be argued that a ten-year-old can think complexly for a ten-year-old, provided we do not compare his thinking to a forty-year-old’s. Both Karmiloff-Smith (1992) and Bruner (1960) agreed with this idea, believing that a child of any age is capable of understanding complex information and that using a child’s supposed stage of development as a guide to their academic capacity is misguided. This is potentially a limiting factor, as it is dependent upon the teacher’s perception of the child’s ability to grow and progress to a higher state of cognitive maturity without understanding the level of awareness the child has in the process.
In Brainerd’s (1978) words: “… whereas Piaget’s [developmental] stages are perfectly accepted as descriptions of behaviour, they have no status as explanatory constructs.” (p.173). For Piaget “…we never attain a measure of comprehension in a pure state, but always a measure of comprehension relative to a given problem and a given material.” (Piaget & Szeminska, 1980, p. 193). However, the logical progression from these ideas could be to understand the intention that drives the behaviour in order to explain the constructs in context.
Contemporary research validates Piaget’s position in that cognitive development progresses during the adolescent years, where adolescents show improved capabilities in inductive and deductive inference, objective and mathematical thinking, and decision making (Byrnes, 2003); however, researchers disagree with him about the ‘how’ (Byrnes, 2003; Klaczynski, 2000), as the research does not support the Piagetian assumption of domain-general transitions (Csapó, 1997). The point being, if Piagetian development were taking place, children would improve across all domains, whereas teenagers tend to function better in cognitive tasks where they have existing knowledge (Byrnes, 2003) or have received contextual training [rather than development] designed to improve performance (Iqbal & Shayer, 2000). These unsurprising findings led many researchers to favour domain-specific models of cognitive development. If a child is exposed to a variety of cognitive opportunities, that child will of course perform better on a test of the same material than if they had never seen it before (Byrnes, 2003). It is possible that what is being tested is actually memory, rather than the process of how to perform the thinking within the test; as such, the construction of thinking in the moment is domain-general, which would transcend the above ideas. Testing a child on what they can remember of what they have been taught is a different test from whether they are cognitively capable of the underlying foundational thinking that allows an answer to emerge, regardless of whether they answer 10 or 50 questions, and the latter is arguably a better measure of the child’s capacity than a static test answer.
Siegler and Crowley (1991) theorised that there is extreme variability at all times at all levels [of thinking]. They state that a child will have several strategies available at any age that she can use to figure things out. If we think about this concept as a CDT process, it becomes apparent that whether a child has one strategy or ten, their ability to pull in multiple factors in order to decide which is the most appropriate in the moment is a facet of their capacity to think, not necessarily a metacognitive strategy. Siegler and Jenkins (1989) also noted that a child will fall back on older strategies (e.g. from formal back to concrete) should the cognitive load become too demanding. According to the neo-Piagetian theorists Pascual-Leone and Johnson (2017), endogenous quantifiable changes in one’s mental capacity, not associative learning mechanisms, are the main cause of developmental change.
Those adults who transcend formal operations and use it within a ‘higher’ system of operation are evidence for the use of reasoning at a more complex level than formal operations (Commons & Pekker, 2008), and thus beyond the scope of Piaget’s theory.
In summary, from an adult development perspective, there is a need to go beyond Piaget and to consider thinking as the task, especially from the perspective of how development specifically occurs and if people are aware of the process of growth. Kohlberg (1984) said:
“The strict Piagetian stage construction may need to be abandoned in the study of adult development, but the idea of soft stages of development in adulthood should not be. … Soft stage models present a new way of doing research in the subject area of adult development, a way that has emerged from the Piagetian paradigm.”
In contrast to Piaget’s understanding of child development, in which development essentially leads to learning, Vygotsky felt that social learning preceded development.
Vygotsky’s (1978) stage development considers how the adult will seek processes for problem solving, as well as collaboration with others as peers.
The complexity and arbitrary nature of social interaction, along with trait, personality, self-identity, the ego and attitude toward thinking and learning, will also influence one’s adult capacity for intellect. Without precise comprehension of the theories of development, and without the general and broad view, applying the theories becomes difficult (Van der Veer, 1997). From a domain-specific perspective, in Mind in Society, Vygotsky argued that:
The mind is not a complex network of general capabilities such as observation, attention, memory, judgment, and so forth, but a set of specific capabilities, each of which is, to some extent, independent of others and is developed independently. Learning is more than the acquisition of the ability to think; it is the acquisition of many specialised abilities for thinking about a variety of things. Learning does not alter our ability to focus attention but rather develops various abilities to focus attention on a variety of things. (1978: p83)
Vygotsky (1978) subordinated development to learning, thus advocating a weak conception of development. For Vygotsky (1987), seeking ways to provide evidence of environmental interaction having a direct impact upon learning and intellect also suggests a connection between speech and the ability to form coherent thoughts. The dynamics between speaking and language, where one thought flows to verbal output, also point to how thoughts take shape. There is movement from thought to speech, and from speech to thought (Vygotsky, 1987, pp. 249-250).
The notion of multiple contexts and complexity translates not to dualism as Piaget believed, but to the sense of space that requires a degree of social support and collaboration between peers. The concept of multiple contexts and complexity expands upon how one can approach scaffolding in terms of adapting and acting upon the thought process (Copple and Bredekamp, 2009). For many classroom environments, post-graduate study included, this is task-based by the teacher to form the parameters of the activity. Siler (2011) sees scaffolding as an assessment of the learner’s current knowledge and experience, which can be related to the course content to determine what they understand and can do already. Siler also uses scaffolding in the form of verbal cues and prompts to assist students and the breakdown of tasks into smaller tasks with the opportunity to feed back in process.
In the home environment, it has been noted that parents who talk about and explain their use of emotions to their children facilitate the child’s development of emotional abilities (Cassidy et al., 1992; Denham et al., 1997). Focusing on the child’s academic development, Vygotsky introduced the concept of the ‘Zone of Proximal Development’ (ZPD), which he defined as the ability of the child to learn only when interacting with people in their environment and in cooperation with their peers. Once these processes are internalised, they become part of the child’s independent developmental realisation (Vygotsky, 1978, p. 90).
The Zone of Proximal Development is the difference between what a learner can do without help and what they can accomplish with directed guidance. The term ‘scaffolding’ was coined by Wood, Bruner and Ross (1976) to describe how the learner actively constructs new knowledge based on existing knowledge, as well as their interaction with the environment. Scaffolding is seen as a learning framework within which a thinker seeks ways to apply knowledge, suggesting development within an environment that values such relationships. The concept of scaffolding builds on the idea of teacher-led development in the ZPD (Wood, Bruner, & Ross, 1976), stretching students just beyond their independent ability (Hannafin, Land and Oliver, 1999). Here, it cannot be ignored how the social aspect of learning also supports a person’s capacity to acquire more ‘knowledge’. It is thus only through guided learning and subsequent introspection that an adult can develop the tools to expand their way of thinking (McLeod, 2012). However, what is actually taking place in the moment during this movement is not clearly defined.
Piaget said the child is a scientist. Vygotsky said the child is an apprentice. They also offered opposing views on how they saw private speech developing and the environmental reasons for when and how often it occurs (Berk and Garvin, 1984). Vygotsky (1987) proposed that inner speech is a product of an individual’s [interaction with their] social environment.
From a cognitive perspective, there have been many studies on what makes a good learner, with the study by Modrek et al. (2019) looking at the differences between cognitive and behavioural regulation as potential predictors of individual differences in early teenage children. The results demonstrated that cognitive regulation, not behavioural regulation, was associated with more successful inquiry learning.
Through a modern lens this principle appears unsurprising, as thought precedes behaviour. Whether it transfers to the adult learner later in life is less clear; postformal stages are important here, as they might account for academic performance, including the effects of culture on social, political, and educational development (Commons et al., 2007; Commons & Rodriguez, 1993). Because education is a good predictor of developmental stage (Commons & Ball, 1999), it is arguable that discovering the reasons why would benefit adult post-graduate students.
Researchers have considered the connections between vertical development and effective learning outcomes for postgraduate students (Bartone, Snook, Forsythe, Lewis, & Bullis, 2007; Hart & Mentkowski, 1994; Lasser & Snarey, 1989; Manners, Durkin, & Nesdale, 2004; McCauley, Drath, Palus, O’Connor, & Baker, 2006; Tanner, 2006). Lasser and Snarey (1989) found that female students who were more vertically developed were far more adaptable when moving to university than less vertically developed female students. Further to this, in a longitudinal study of military cadets, Bartone et al. (2007) found that 47% of cadets studied over a 4-year period at college grew in their vertical development. What this represents is the difference between developmental levels in a specific context, which could be considered a soft-stage approach.
Loevinger (1976) was one of the first researchers to focus on the problems and experiences of women in post-war America. She drew on Sullivan’s (1968) description of levels of interpersonal maturity to create her own system of ego development, based on eight rather than four sequential stages. Each stage represents a level of complexity in perceiving one’s relationship to the world. Loevinger used psychometrics to validate her work, which separated her from her predecessors.
Soft stages are often characterised as self-reflective stages, involving an ego that makes an existential meaning of and for itself (Reams, 2014). Loevinger’s model describes personality in terms of cognitive, affective and behavioural components, and assumes that “all humans evolve toward greater complexity, coherence and integration” (Cook-Greuter, 1999). Hard-stage developmental psychologists take issue with this statement; Laske (2015), for example, states that his Stage 2 people are actually incapable of this kind of growth, as they lack the requisite complexity.
Loevinger considers the ego to be a process as opposed to an organising function. She created a tool to measure the differences in how people responded to a set of sentence stems, which she called the Washington University Sentence Completion Test (WUSCT), and which she stated empirically measured the participant’s ego development (Hy & Loevinger, 1996; Loevinger & Wessler, 1970).
The WUSCT assesses written responses to incomplete sentence stems, which the participant finishes in their own words. Cook-Greuter (1999) noted that the WUSCT measures performance, whereas Kohlberg’s model measures competence. Loevinger also recognised the importance of language in any assessment, stating: “the centrality of language or verbal behaviour as a medium through which we manifest our conception of reality is the basis of any verbal projective test”. Further to this, both Kegan and Laske investigate the ways in which we make meaning, and thus how this meaning manifests in our thinking, to determine a level of social-emotional and cognitive complexity respectively.
Cook-Greuter (1999) illustrates a concern with language as a method of assessing cognitive capacity: when articulation is the only means of measuring the individual, extreme data may not be taken into account and novel data may be ignored. A second issue with Loevinger’s approach is that the written response allows no examination and explanation of meaning. A further criticism is that Loevinger confuses content with structure and, as such, fails to answer the question of why one stage is higher or more mature than another (Reams, 2014).
From a pedagogical perspective, Loevinger’s model helps us to understand the way emotion motivates learning. Unresolved emotional issues can affect learning, which creates the need for safe environments in academia where students are supported and scaffolded whilst dealing with such issues (McClure, 2005).
Hy & Loevinger (1996: p6) used the term ‘transmuted’ to describe the change which takes place in a person’s meaning-making capability as they move from the conventional tier of ego development to the qualitatively different level of postconventional ego maturity. Transmutation means the action of changing into another form completely, indicating that growth from conventional to postconventional levels of development entails a departure from one form of meaning-making structure in order to arrive at a completely different form: a qualitatively higher form. This is a common theme throughout stage development.
The general consensus amongst stage theorists is that there are inherent difficulties in longitudinally tracking meaning-making transmutation over the course of a person’s development (student or leader), so studies instead rely on comparing and contrasting the lived experiences of conventional and postconventional level leaders, and mapping the difference (Harney, 2018). However, what is missing is a measure of the potential underlying intention to change or grow in this model.
Another criticism of Loevinger’s Sentence Completion Task comes from Commons et al. (1989) who mapped their Multisystems Task instrument against a number of other systems that determined complexity. Loevinger’s WUSCT was the only one not to correlate with the postformal stages after factor analysis. King and Kitchener (1994) achieved similar results. Essentially, the argument is that although soft stages do appear to have qualitative separation, they are derived somewhat less directly (Reams, 2014).
Loevinger’s ego development model states that personality structures can evolve (Dweck, 2011) and that this is affected by social dimensions, such as environment, in line with Vygotsky. Environment is key to understanding how a person constructs themselves in the moment, determines their unconscious intention and responds accordingly, as individuals are capable of constructing themselves differently depending on the context (Cook-Greuter, 2010). In one study by Adams and Fitch (1982) on change in identity status and ego development over a one-year period, 61% of the students remained stable, whereas 22% progressed and 17% regressed. However, a recurring criticism is that this study, and subsequent studies by Redmore (1983), all had student participants, thus limiting the generalisability of the findings. This is key to understanding the limitations of psychological studies that use only students as their participants, as students are not typical of the population as a whole (Peterson & Merunka, 2014).
Finally, it could be argued that Loevinger’s stages omit the structures of thinking awareness that inform the process of growth. From an intention and awareness perspective, because the relationship between underlying structures and behaviours is complex, it is difficult to predict behaviours in terms of construct validity (Broughton, 1978). It is not expected that a person’s ‘self-esteem’ increases with increased ego development, as the two are not correlated (Pazy, 1985). However, it could be argued that ‘self-esteem’ is a nominalisation, which means that it does not exist in its own right: it has to be constructed, and constructed from a position of external validation. This was explored further by Cook-Greuter.
Another Neo-Piagetian developmental psychologist of note is the independent scholar Cook-Greuter (1999, 2010), who noted that the last two stages of Loevinger’s system were not adequately differentiated, and who could illustrate distinctions within the final stage of Loevinger’s model; Kohlberg and Armon (1984) made the same observation. These observations led to enhancements to the sentence completion test (Cook-Greuter, 1999), and collaboration with Torbert (2004) enabled them to develop the Leadership Development Framework and the Leadership Maturity Profile, which examine stages of ego development within organisations.
Constructive developmental theories differentiate between content and structure. Structure looks at the way a person frames their awareness of their responses to life’s tribulations. Content refers to the ‘what’ of an individual’s conversation: what they are saying, rather than how they are saying it. However, Cook-Greuter does not demonstrate how her participants knew they were aware, or if indeed they had awareness of their awareness. She took her research motivation from Loevinger’s stage theory of ego development and expanded upon it. Loevinger (1979) provided a framework, from her empirical research using a sentence completion test, for conceptualising the growth in an individual’s way of constructing meaning through their lifetime. From a constructivist perspective, what changes is the relationship to one’s individual life challenges, based on our awareness of this relationship. Some have maintained that Loevinger’s model suffers from a lack of clinical grounding and that, like Kohlberg’s theory, it confuses content and structure, as it is a more pseudo-structural than a true structural stage theory (Blasi, et al., 1998; Loevinger, 1991, 1993).
Cook-Greuter’s stages of most interest are the post-conventional stages, namely the Achiever, Individualistic, Autonomous and Integrated stages. Cook-Greuter (2013) describes in detail the capabilities of each stage and how each progresses and differs from the previous one. She states that although development proceeds in a logical sequence from birth, the majority of adults seldom grow beyond the Achiever stage. This is below the maximum potential for adult development, leaving them unable to cope with the complexity of the adaptive challenges of modernity (Cohn, 1998; Cook-Greuter, 2004). This is consistent with Laske’s (2007) theory, in which the majority of the population (55%) is at the socialised-mind stage of development, his Stage 3. However, the process and the ‘what’ of the change are omitted.
Cook-Greuter likens the movement to a spiral, hence the connection in her literature to Spiral Dynamics by Wilber (2013). Furthermore, she goes on to say that world views evolve from simple to complex, from static to dynamic and from ego-centric to socio-centric, and finally to world-centric (2013). This aligns her theory with all previous adult development specialists such as Wilber, Kegan and Laske. In a similar fashion to Laske, Cook-Greuter states that each stage progression incorporates the previous stage and all facets of that stage are available to the person.
Cook-Greuter’s (2013) model states that its highest-level thinkers can differentiate self from culture. They recognise what separates and unites humanity, and this non-dual perspective is a paradigm shift from the thinking of her lower levels. She states that it is a difficult and painful emotional transition to realise one is disconnected from the general population. Thus, at an emotional level, the realisation that nothing is separate paradoxically separates and isolates these high-level thinkers from the mainstream of humanity. However, Laske (2008) argues that if one is allowing emotion to control one’s thinking and feeling outcomes, then one has not yet moved through emotion into cognition to become a high-level thinker. Emotion is limiting, as stated above, and Cook-Greuter conflates emotion with high-level, abstract conceptualisations of what it is to be separate from, and inseparable from, humanity. Because she focuses on the construction of meaning via ego development, she is not actually focusing on the construction of self in the moment. Cook-Greuter’s Construct-Aware stage thus represents people who are aware that all meaning is constructed; as such, she neglects to demonstrate whether there is a transition from intention via awareness through choice and into response.
Cook-Greuter’s (1994) idea that ego stage transition is possible if the appropriate life experiences are structurally dis-equilibrating ties in with Piaget’s ideas in the same area. In essence, it can be thought of as ‘disruptive thinking’ that moves a person from one thinking stage to the next; without the ability to disrupt current patterns, a person does not move at all. Although there have been many studies that have attempted to promote adult ego development (Alexander et al., 1990; MacPhail, 1980; White, 1985), only the study by White aligned with Cook-Greuter’s advanced ego stages for some participants. However, it was unclear what triggered the stage transitions, as no control group was employed, and the intervention consisted of a nurse-training programme conducted over a two-year period (Manners, Durkin, and Nesdale, 2004). Therefore, it would be valuable to learn what is actually taking place as people move between stages, as this is currently missing from Loevinger’s and Cook-Greuter’s models.
Dynamic Skills are not Developmental
Fischer (1980) examined how the learning environment impacts the optimum level of skill in cognitive development. A skill can be taught, and thus passed on to a student by a more knowledgeable other. Development, on the other hand, is not taught but guided. A disruptive facilitator is more appropriate for guiding a student’s thinking vertically, whereas the language used by Fischer suggests that development can be gained through skill acquisition.
Because of this limitation in language, Fischer mirrored Piaget and Case when he theorised that there were four stages with a recurring pattern of advancement through each, and that a child’s experience across domains would account for their growth.
For Fischer and Granott (1995), adult cognitive development moves in a multitude of directions. It forms a dynamic web, with each strand also being dynamic (and thus fractal), rather than linear. For Fischer, developmental change is defined “in terms of structural transformation in patterns of thinking, feeling and action within particular domains and context” (Mascolo & Fischer, 2010). The nature of this transformation is seen as a set of rules, which provide micro-developmental processes, or within-level descriptors, which he states “specify how a skill is transformed into a new, more advanced skill” (Fischer, 1980). Fischer lists these micro-level transformations as: inter-coordination; compounding; focusing; substitution; and differentiation. Inter-coordination is the macro-developmental transformation to which the others lead and contribute. However, Fischer does not note the child’s (unconscious) intention in the moment to transform or not, which will be directly impacted by their capacity to change. Complexity is not a skill to be taught, but a vertical movement that can only be guided (Laske, 2009). It would be interesting to learn how much of a student’s development is within awareness.
Fischer (1980), Fischer, Hand, and Russell (1984), and Sternberg (1984) proposed a number of instruments for cognitive development that supposedly result in postformal thinking. Fischer’s (1980) approach to describing the new level of complexity was the analogy of unfolding dimensionality, which uses dimensions in space to illustrate the complexity of postformal cognition. Although size might be considered quantitative, a dimensional increase in size generates complexities that should be considered qualitative. However, this seems like a complex way of explaining a simpler heuristic.
It is interesting how Fischer differentiates how a single set separates into distinct sub-sets. When a person encounters a new task at a level of complexity with which they are already familiar, they can break the task down into sub-sets from an earlier (or lower) skill level in order to ensure better performance. Moreover, any arbitrary skill can be automated if it is practised often enough under expected conditions (Posner & Snyder, 1975; Shiffrin & Schneider, 1977), which renders the skill an unconscious process once begun (automaticity).
Fischer’s research on compounding, where two or more skills at the same level of complexity are combined, is interesting from an experimental perspective. At an even finer level of detail, Fischer’s model focuses on moment-to-moment behaviour, which in the wider psychological literature is understood as attention (Reams, 2014).
According to Bolton and Hattie (2017), time is a more precise predictor of academic achievement than intelligence and IQ. Despite a large variety of definitions of what constitutes Executive Function (EF) (Jurado & Rosselli, 2007), and the inherent difficulty in its measurement (Miyake et al., 2000), changes in EF contribute to academic achievement rather than the reverse (Best, Miller, & Naglieri, 2011; Bull, Espy, & Wiebe, 2008; George & Greenfield, 2005; Towse, Hitch, & Hutton, 2001; Miller & Hinshaw, 2010). Although executive function increases throughout the school years, the rate of improvement gradually declines between the ages of 16 and 30 (Best et al., 2011; Blair & Diamond, 2008; Blair & Razza, 2007; Davidson, Amso, Anderson, & Diamond, 2006; Huizinga et al., 2006; Somsen, 2007; van der Sluis, de Jong, & van der Leij, 2007). It would be interesting to note whether this gradual improvement occurs in combination with the physical development of the brain during childhood, and how this affects thinking into adulthood.
A recognised problem with the domain-specific approach is how one determines what defines a domain. As mentioned, domain-specificity is a function of human cognition, but it is not clear precisely how. The word ‘domain’ has many uses in language: such as in biology, mathematics, politics, and more. Domain implies a systematic relationship of member parts.
All complex concepts are relatively undefinable and ambiguous; there will always be synonymous connections with other concepts. This idea plays an important role in Karmiloff-Smith’s (1992) developmental theory. She states that behavioural competence is acquired through exploration and experiment, regardless of whether or not the child receives help. Then, through a process she calls ‘Representational Redescription’, the child is able to think more flexibly and in more sophisticated ways as they recode the information.
Piaget neglected to differentiate how the child knows what must be learned and when (Fodor, 1980). For example, it could be asked: to which factors within the environment should the child pay attention, and which can be ignored? One of Karmiloff-Smith’s (1990) key contributions to developmental science was her support for a ‘middle ground’ between nativism and Piagetian constructivism. Nativists argue that genes coordinate the development of cognitive modules (such as language). Karmiloff-Smith (1992) argued that our development produces domain-specific modules based on our direct experience, shaping our neural connectivity. This is contrary to Piaget’s (1971) assimilation vs. accommodation concept mentioned above.
Karmiloff-Smith’s ‘middle ground’ is accepted by the majority of developmental scientists (e.g. Mareschal et al., 2007) as it is consistent with developmental systems theory which states that: “the structure of the adult brain is not predetermined but is gradually constructed from complex cascades of gene–environment interactions” (‘probabilistic epigenesis’; Gottlieb, 1991).
It is essential to understand how internal representations change over developmental time if we are to better understand the internal structure of the adult brain (Karmiloff-Smith, 1986, 1992). It is therefore essential that developmental processes are identified and investigated (D’Souza & Karmiloff-Smith, 2011, 2016). Further to this, if we consider the meaning-making aspects of complex thinking as per Kegan and Laske, any internal representations that change over time will have an element of meaning associated with them that also influences what they mean in context. As such, a developmental process applied to a 25-year-old male will be inappropriate for the same male at 45, because his meaning-making will have changed dramatically.
However, there are contradictions to this perspective. For example, developmental profiles are similar across individuals and across cultures; if development were driven by general learning abilities, this similarity seems implausible given the wide variation in the quantity and nature of stimuli individuals experience (D’Souza & Karmiloff-Smith, 2011). Furthermore, the wide range of general intelligence would be expected to have an impact on a person’s development over time, which is not accounted for in the theory of gradual modularisation. Prinz’s (2006) critique of modularity argues that “perceptual and linguistic systems rarely exhibit the features characteristic of modularity”, meaning that systems are not ‘informationally encapsulated’. Prinz uses perception as an example of cross-modal activity, which detracts from the encapsulation argument at the level of input. If these modular systems are domain-specific, it would be expected that not every module is linked with every other module, and thus limitations on cross-modular content would be experienced. Our ability to combine creative concepts could be a simple addition to the language module, or it might be that the function of pretend play in childhood is to construct and develop this capacity (Carruthers, 2002). It could also be asked whether there is an unconscious intention behind the behaviour. Another issue is that after the formation of cross-modular thought, we can use the new thoughts as premises for reasoning, derive new meaning from them, and more (Carruthers, 2006). Perhaps this can be explained in terms of the use of a number of existing modular processes with minor additions and adjustments. What is interesting is that what drives those minor additions and adjustments could be the same unconscious intention mentioned above. Further to this, how would the thinking change should the individual become aware of their modular-intentioned thinking?
It could also be asked whether the modules are simply the name given to the variety of intentions underlying one’s development over time. From a post-graduate student perspective, one has a variety of academic goals ‘chosen’ by the student as important for that stage of horizontal growth. Thus, it is prudent to ask whether modularity might be the result of experience-based learning rather than its cause, i.e., “Modules are made, not born” (Bates, Bretherton, & Snyder, 1988, p. 284).
In summary, there is a distinct lack of adult development research in the social sciences (Fein & Jordan, 2016), and the various ways existing developmental psychologists look at cognitive growth can be expanded by including adult development as a perspective. Karmiloff-Smith (2009 PAGE NUMBER) epitomises the problem with approaches to child development and why it is more appropriate to focus on adult development: “numerous studies of development (typical or atypical) are not developmental at all, because studying children by no means guarantees a developmental approach” (Karmiloff-Smith, 1992, 1998), whereas “one can study adults developmentally” (Cornish, 2008). The truly developmental, neuro-constructivist perspective embraces a developmental way of thinking, regardless of age.
Siegler’s (1996) overlapping waves theory is based on three assumptions:
(a) at any one time, children think in a variety of ways about most phenomena;
(b) these varied ways of thinking compete with each other, not just during brief transition periods but rather over prolonged periods of time; and
(c) cognitive development involves gradual changes in the frequency of these ways of thinking, as well as the introduction of more advanced ways of thinking.
Siegler suggests that the staircase metaphor used by other stage theorists (such as Piaget) misses the variability between stages of development (Slavin, 2012). Schwartz and Fischer (2005) also found that the trajectory of learning is constructed of waves. Siegler was thus interested in the number of strategies a child might use at any age, rather than whether a stage matched a particular strategy. Figure 1.2 illustrates the concept. In essence, Siegler is concerned with thinking as a skill. But is it a construct? Finally, Siegler is focused on learning, rather than development.
Figure 1.2: Schematic depiction of Overlapping Waves Theory
Overlapping waves theory, according to Chen and Siegler (2000), distinguishes among five dimensions of learning.
The acquisition of a new strategy for the child, according to Siegler, has to begin somewhere, but the initiator is vague. Mapping existing strategies onto new problems requires the child to differentiate between relevant and irrelevant components of the information compared to where the strategy was originally applied. In Siegler’s model, this can be problematic if a strategy is wrongly applied, or not applied at all. However, Siegler does not offer a deconstruction of his wave formation. Both adults and children often fail to utilise newly acquired strategies, even when they are significantly more effective than existing strategies (Acredolo, et al., 1989; Goldin-Meadow, Alibali, & Church, 1986; Siegler, 1996). Siegler offers an explanation of this problem as an issue with retrieval of the new strategy, and a problem with the suppression of the old strategy.
Experience also plays an important role in a child’s performance. Daily activities for young children are improved by repetition, such as laying out clothing for the morning (Kreitler & Kreitler, 1987). However, the ability to plan further ahead appears to be a skill learned much later, especially when the task is unfamiliar. Rule-following provides a good explanation of a child’s performance in most Piagetian tasks; however, on more complex tasks, their performance is inconsistent. If an adult were to follow rules without thinking around the problem, this would be described by Laske (2015) as Stage 2 thinking, and not very cognitively complex. Rule-following is not, however, typical of all problem solving (Siegler & Chen, 2002). It has been shown that experience greatly affects the adult’s capacity to answer such simple questions, and their strategies, as described by Siegler’s waves, are subsumed by their cognitive capacity as adults. Thus, their full capacity is not required to answer strategic questions.
If, instead, one asks whether adults use overlapping-wave strategies in the same way as children, then it could also be asked whether one can determine an adult’s capacity by their level of awareness of their wave-based strategy.
Kegan (1982, 1994) built on the foundations of Piaget (1954) and developed his ideas of hierarchical stages in his Subject/Object Theory, where he postulated that each hierarchical stage subsumes the previous (Piaget & Inhelder, 1969). His research combined three major intellectual constituents: the existential-humanistic work of researchers such as Rogers, Maslow, Buber, and May; the neo-psychoanalytical practices of Freud, Erikson, Winnicott and Bowlby; and the constructivist developmental approach described here.
Development occurs in the pressure between challenge and support. Challenge comes from one’s meaning-making being insufficient for one’s environment. Support comes in the form of a secure environment for risk-taking, including making new meaning and taking new actions. Lahey et al. (1988) developed the Subject/Object Interview in order to assess these orders of consciousness. This method engages people in dialogue in which the structure of their meaning-making is investigated; the semi-structured interview format is similar to those of Piaget and Kohlberg. Kegan (2003) argues that people move from one order to the next by building a bridge to the next order, constructing meaning in two ways simultaneously, and once the transition to the next stage is complete, the previous order is incorporated into their new meaning-making method.
However, a criticism of this approach is levelled at Kegan’s lack of feedback after the interviews. He stated it was none of the participants’ business what the outcome was, but Cook-Greuter (1999) repudiates this, stating that it is the duty of the interviewer to feed back the interview findings; otherwise they do the participant a disservice, as participants become interested in the truth about themselves through feedback (Cook-Greuter, 2004). Further to this, it is arguable that there is little point in determining a person’s level of thinking subjectivity if they do not benefit from the results. Other objections levelled at Kegan’s (1982) work suggest that his stages of development, along with works such as King and Kitchener’s (1994) stages of reflective judgement or Perry’s (1970) scheme of intellectual and ethical development, do not sufficiently differentiate a person’s profile to discover the foundational thinking of that person’s unique mode (Vurdelja, 2011). Laske (2009) separated the social-emotional from the cognitive, as he suggested this is where Kegan’s work was inadequate.
An interesting perspective from Kegan (1994) was his assertion that his stage four (self-authored stage) is a ‘siren song’ (Eigel & Kuhnert, 2016) for most people and where their development is arrested. A person attaining stage four will effectively become a victim of their own success, and eventually stuck in their own value set. In order to progress to the next stage, the primary motivator for development must be an internal desire to create an enduring legacy (Eigel & Kuhnert, 2016; Bauer et al., 2015; Cloninger, 2013; Bauer, 2011; Davis, 2010). This internal desire or intention must derive from an awareness of either growth or the need to grow one’s thinking.
In terms of development, a flaw and limitation in Piaget's theory is that it does not allow for a child to skip a step; however, this may explain why some learners remain stalled. Kegan (1982, 1994) sought to expand upon this theory to examine adult learning after 25 years, where there are also four stages of development that move beyond the scope of Piaget's formal operations stage. Kegan (1982) argued that as humans develop, they cannot be considered independent of the social environment, and suggested there is not much difference between being a person and being a meaning-maker (Laske, 2015). While the focus here is not on constructivist views, the concept of meaning-making also suggests internal dialogue, or projecting the thought, building on the work of Piaget and Vygotsky. Kegan's point is that every individual makes meaning whether they are aware of it or not; otherwise, the thought process halts. It could be asked at this point, if everyone undertakes meaning-making, whether this meaning-making can be interrogated specifically. The common denominator missing in stage development is this area of meaning-making, and how it is decided that a person has awareness of how they are constructing it in order to move from one aspect of meaning-making to the next, once a new meaning has been made.
Kegan (1982) contributes to stage development with his constructive-developmental theory, in which he integrates thought complexity with our meaning-making processing. The principles of social constructionism support how change may alter meaning, positing that: “individuals make meaning of their experience of periods of change and stability in their lives and the cognitive development of their meaning-making process proceeds in a systematic, sequential, predictable, and increasingly complex way from childhood and into adulthood” (Wall, 2003, p. 71). Kegan (1982) wrote further:
The heart of the constructive-developmental framework—and the source of its own potential for growth—does not lie so much in its account of stages or sequences of meaning organisations, but in its capacity to illuminate a universal on-going process (call it “meaning-making,” “adaptation,” “equilibration,” or “evolution”) which may very well be the fundamental context of personality development. (p. 264)
Kegan’s (1982) model consists of five stages of consciousness that represent a growing ability in meaning-making, or an ability to step back and gain an increasingly complex perspective on one’s surroundings as well as one’s relatedness to them. The process evolves in the following sequence: in the first order, one perceives and responds by emotion; in the second order, one is motivated solely by one’s desires; the third order signals self-definition as determined by the group; in the fourth order, one becomes self-authoring and self-directed; the fifth and final order symbolises the interpenetration of self-systems (Cook-Greuter, 1990). The model involves the individual developing an increasing capacity for relatedness to, and perspective on, the self. Constructive-developmental theorists (Cook-Greuter, 1999; Kegan, 1994; Rooke and Torbert, 1998) suggest that only a minority of the adult population (between 10% and 40%) currently operates at level 4, the minimum level of functioning required by modern society. The implication of this, according to Kegan (ibid., p. 326), is that ‘differentiation precedes integration’, by which he means that a reconstructive postmodernism provides a more flexible means of galvanising the resources of culture. This allows higher-level individuals to better meet the mental demands of a modern (and postmodern) life by constructing modernity rather than transcending it (p. 337). The path to self-awareness begins after leaving level 3, characterised by “our world hypothesis and internalising others’ perspectives” (Laske, 2006, p. 117), and it represents a milestone in development. “The acquisition of the third-person perspective enables people to see themselves as separate objects. They become conscious of themselves” (Cook-Greuter, 1990, p. 96). Reaching this threshold appears to be a gradual evolutionary process, yet it is the developmental shift that many individuals find the most difficult to make:
The internal experience of developmental change can be distressing. Because it involves the loss of how I am composed, it can also be accompanied by a loss of composure. This is so because in surrendering the balance between self and other through which I have “known” the world, I may experience this as a loss of myself, my fundamental relatedness to the world, and meaning itself. (Kegan, 1982, p. 374)
Ashforth et al. (2008) used the term ‘identicide’ to describe this dis-equilibrating phenomenon, underscoring the need for safe, holding environments which can facilitate development. On this, Kegan (1982, p. 232) wrote:
Every transition involves to some extent, the killing off of the old self.
However, from a logical perspective, the above ‘distress’ seems an over-reaction, the main issue being the use of the word ‘composed’. Alternatively, it might be that the individual is adding to their intention, regardless of whether this is in light of a balance of self and other, or of their knowing the world. They are increasing their relatedness to the world and, as such, are adding to their experience. That is to say, it could be argued that none of this could be considered a ‘loss’ if we are adding to our awareness of self.
Just two years after Kegan’s (1982) work on his five-stage model, Kohlberg and Armon (1984) made a significant contribution to developmental studies by extending Piaget’s stages beyond adolescence and into the realm of moral reasoning. They conceptualised the evolution of moral judgement as a progression of six stages distributed evenly across the pre-conventional, conventional, and post-conventional tiers (Kohlberg and Armon, 1984). However, Kohlberg and Armon’s work has been criticised as male-oriented, culturally biased, and overly focused on individualism. Gilligan (1982) later developed her own line of research on the moral development of women, placing more emphasis on interpersonal relationships, compassion, and care. It would seem a logical step that a combination of both approaches would cover all aspects of thinking for both sexes, and offer a better definition of reasoning and meaning-making based on shared fundamentals, such as a person’s unconscious intention to be or not be moral, and an awareness to be caring or compassionate.
Adult development is characterised by the shift in the subject-object relationship, where what we are subject to becomes object, and in so doing reframes and transforms the way we make meaning. Suffice it to say that meaning-making never stops, even as there is active transfer from subject to object and back again, much as Vygotsky treated language as a tool for transferring knowledge, and comprehension of it, as applied to the real world.
Finally, the question remains as to what Kegan has missed out and, if one aspect of complex thinking has been omitted, whether there could be another omission with as-yet-unseen but far-reaching consequences.
Developing the Teachers
There exists a large body of research on professional development for teachers, which focuses primarily on what they should know about the content and nature of learning, and how children learn, from a pedagogical perspective (Ball & Cohen, 1999; West & Staub, 2003). However, as this review unfolds, how teachers know what they know (their epistemology) may also have a strong impact on a teacher’s development, which in turn impacts students’ development as teachers untangle complex mathematical ideas (for example) so that students can navigate them simply, and thus generate new teaching practices (Fennema et al., 1996; Carpenter, Fennema, & Franke, 1996; Peterson, Fennema, Carpenter, & Loef, 1989; Schifter, 1995). Mathematics is used as an example because operations on numbers must be mastered: concrete operational thought is the mental background for understanding them in late primary education, and formal operational thought in the later teens allows for the conception and manipulation of abstract and multidimensional concepts (Furth & Wachs, 1975).
Originally considered a generic personality characteristic (Bieri, 1955), cognitive complexity was redefined by Schröder, Driver and Streufert (1967) as a domain-specific, information-processing variable, strongly associated with expertise. According to Streufert and Swezey (1986), individuals with greater cognitive complexity perform better at tasks that have a high level of task complexity, as per Jaques (1989) and Commons (1984). They went further, arguing that it is imperative for an organisation to align a person’s cognitive complexity with the task demands of their role, as there is a positive correlation with efficient decision-making outcomes when it does so (Ceci & Liker, 1986). To balance competing demands, post-graduate students need to possess the capability and capacity to process opposing perspectives without experiencing cognitive dissonance (Robertson, 2005). Robertson stated that they need to transform the potential conflict into a generative paradox (p. 182). Cognitive complexity ‘reflects the ability to recognise and accept the interrelated relationship of underlying tensions. It enables actors to host paradoxical cognitions’ (Smith & Lewis, 2011).
In his book, Measuring Hidden Dimensions of Human Systems: Foundations of Requisite Organisation, Volume 2, Laske (2015) continued and refined Basseches’ (1984) cognitive schemata framework and developed his own cognitive developmental framework. He then contextualised it by applying it to Jaques’ (1989) theory of requisite organisation, separating it into 28 forms of thinking. These became Laske’s question-set, which he uses to elicit a person’s unconscious thought forms, and then to access the structure of a person’s thinking through his Cognitive Development Framework (CDF). According to Laske (2008), behaviour is a symptom of one’s developmental level, and can only be explained and examined via this definition. This was based on the premise that the structure of a person’s thinking generates the content (Laske, 2009). Laske’s theory does not take into account an individual’s capacity to respond in the moment based on their level of self-awareness and the resultant choice this awareness offers. Further, it could be argued that he also omitted the intention behind the thinking and behaving, as our intention drives our attention, which, according to his theory, could be limited by the individual’s level. Implicit in Laske’s theory is the person’s future capacity and capability, but not their immediate response-ability. Laske discusses a person’s level of development without discussing the construction of that development. When one considers the construction and awareness of a person’s intention, their level of cognitive complexity will, according to Laske, drive their cognitive development. It is arguable that this is a stage before their development, or the actual process of transition. Laske states:
I thought that the main issue in teaching CDF-interviewing as a dialogue method would lie in making clear the separation between the focus on “how am I doing” (psychologically) and either “what should I do and for whom?” (social-emotionally) or “what can I know about my options in the world?” (cognitively). This triad of questions for me encapsulates the mental space from within which people deliver work, without ever quite knowing how to separate them in order to reach a synthesis of self-insight. [PAGE NUMBER]
In this way, Laske articulates alternatives to what Cook-Greuter’s theory surmises. It is useful here to see how Laske (2011) described the distinctions between stages and their thinking. See Table 1.2. The difference between development and learning is that learning is a change in time, whereas development is a change across time (Cook-Greuter, 1999). Some learning leads to developmental shifts, but in reality most simply reinforces the person’s current stage (Laske, 2006). See Appendix 1 for a full description of Laske’s stages. Thus, from Laske’s perspective, it could be argued that as we construct ourselves and our intention in the moment, the experience we pull into our construction takes time to unfold and be absorbed.
As we have seen so far, Laske is describing the behaviours of a Stage 3 person, which supports his earlier assertion. However, he is not stating how they think about others and how their thinking leads to these particular outcomes. It would thus be interesting to learn how the Stage 3 person constructs their meaning-making that leads to their stage of development.
Table 1.1: Changing orientation across adult stages (% of population)
Laske was specific in his thinking about higher-level thinkers, proposing that post-graduates have not reached a level of control, capacity, or proficiency in their thinking, and that for them to embark upon graduate-level work is therefore potentially pointless. Laske (2015) and Cook-Greuter (2010) stipulate that students lack the cognitive capacity to meet the complexity demands found in these environments. Basseches (1984) demonstrated that, within a university setting, faculty had a broader dialectical schemata range than third-year students, who in turn had a broader range than first-year students, thus supporting the principles of adult growth in an academic context. Basseches (2005) takes the principle deeper, describing a metaphorical perspective of students who articulated that lecturers/teachers are far too subjective when grading an exam or paper, which suggests a lack of dialectical thought on their part.
Laske (2008) proposed a process for analysing the content of interviews to ascertain the most likely stage of adult development, based on the structural form of the interview content (how the participant said it) rather than the content itself (what they said). Laske (2015) states that only what can be measured can be managed (p. 348). However, the nature of his measurement of capacity and capability potentially ignores the underlying intention of the interviewee, since even Laske uses a Likert scale (Likert, 1932) for his clients to self-report their thinking capacity via his ‘Needs/Press’ questionnaire. This questionnaire generates data based on the subjective needs of the client in relation to the internal and environmental pressure the client feels in their organisation (p. 297). Further to the point above, the Needs/Press questionnaire might miss the underlying intention of the client, and how this intention might change their environmental perceptions.
Laske (2008) states that meaning-making (emotional) and sense-making (cognitive) lead to performance. However, it could be argued that there is no separation of emotional and intellectual development: there is only growth.
Cognitive complexity provides post-graduate students with the prospect of acquiring a clearer understanding of contextual (academic) variables. Denison, Hooijberg, and Quinn (1995) call this behavioural complexity, and go on to say that, should paradox exist in the academic environment, it must be echoed in the behaviour [of PG students]. However, this is not necessarily the case if we consider the higher-level thinking behind the capacity to choose appropriate behaviour in the moment. Where contrary or opposing behaviours are necessary, a low-complexity thinker, or a less self-aware thinker, would not have the capacity to choose a different behaviour, and it is this misunderstanding of developmental levels that Denison, Hooijberg and Quinn’s (1995) theory potentially misses. They do, however, state that a high level of cognitive complexity allows for a more appropriate response to a wide range of situations, without actually demonstrating how one gains a high level of cognitive or behavioural complexity. To provide academic leadership, supervisors must also demonstrate behavioural complexity: the capacity to move between apparently contradictory positions with ease (Denison et al., 1995; Hooijberg & Quinn, 1992; Middlehurst, 2007). Supervisors must thus nurture and motivate students and deal with their individual issues, whilst simultaneously demanding that they complete assignments to the appropriate post-graduate standard on time; this requires detailed monitoring of student performance. Not only must supervisors be capable of thinking paradoxically, they must also be able to behave accordingly with no discomfort (Eggen and Kauchak, 2013).
Behavioural complexity makes available a range of behaviours necessary for effective leadership within an academic context, which translates directly to the behavioural and developmental requirements of the post-graduate student. It could be argued that, when discussing behavioural complexity, the authors were describing the intention, awareness, choice and response of individuals’ thinking and behaving in the moment, and in context.
Broughton (1984) argues that Piaget’s ideas, and growth beyond the postformal stages, would be better abandoned than revised, for failing to explain why development occurs and why some develop more quickly than others. This is evident in a quote from Larivée, Normandeau, and Parent (2000, p. 828): “…a particular situation may facilitate one subject’s ability to solve a problem, whereas it may hinder another’s.” It would thus be interesting to discover the self-awareness (or lack of it) of each individual, to find the difference that makes the difference.
Commons and Richards (1999) suggest that postformal research does not actually describe different stage development sequences, but instead many different manifestations of the same stage sequence. This also aligns with Erikson’s and Arnett’s ideas on regression to lower stages, but it must be noted here that a visible flaw in almost all university-based research is that emerging adulthood can only be achieved by affluent middle-class adults who can afford to spend longer at university, participating in the introspection (Peterson & Merunka, 2014). This ethnocentric specificity is a drawback of stage development theory (Hendry & Kloep, 2007). The same can be said of Levinson’s stage theory, as it involved predominantly American middle-class individuals. According to Robinson (2013), almost all of the research undertaken on stage development took place in affluent, middle-class Western countries.
Hendry and Kloep (2007) also pointed out that categorising general developmental stages and transitions by age is potentially problematic. Laske (2015) states that his stage 4 cannot be achieved by children because of their lack of life experience; however, it might be possible to have a (Laske-esque) stage-4-thinking 20-year-old by virtue of their high self-awareness and subsequent choices of response. It is also possible that, as the environment changes through technological improvements or cultural influences, existing stages and stage transitions will change accordingly. For example, the Third Age has been a recognised life stage for fewer than 50 years, and the Fourth Age is a product of increased life expectancy (Robinson, 2013, p. 129).
From a theoretical perspective, a more individualistic approach to development may be necessary, as the idea of life-stage development is over-ridden by individual stages and choices (Côté, 2006). The greater the individuation, the more stage theory must adapt, taking a more flexible approach to adult development. How new cognitive structures emerge, whether by assimilation or accommodation (see Bringuier, 1980), is an issue not only for Piaget but also for other specific theories, such as theories of action (e.g., Adolph & Berger, 2006), theories of learning (e.g., Jacobs & Michaels, 2007; Lattal & Bernardi, 2007), theories of language acquisition (e.g., Goodman, 1997; MacWhinney, 1999; Ramscar & Yarlett, 2007), and theories of organisation (e.g., Hummel & Holyoak, 2003; Kalish, Lewandowski, & Davies, 2005).
Gilligan (2005b) examined the failure of developmental psychology to take into account the perspectives of women. She noted that Freud’s “difficulty in fitting the logic of his theory to women’s experience leads him in the end to set women apart, marking their relationships, like their sexual life, as ‘a dark continent’ for psychology” (p. 693). Kohlberg’s samples were exclusively male, and came under fire from Gilligan for the same reason; from such samples it would be almost impossible to differentiate between male and female judgement, as an example.
King and Kitchener (1994, 2004) built on Piaget and Kohlberg to examine how reflective judgment evolves. From a post-graduate student perspective, this is important as critical thinking is aligned with reflective judgment. Reflective judgment occurs when one encounters a badly-structured problem (Churchman, 1971), or adaptive challenge (Heifetz, 1994), which “cannot be defined with a high degree of completeness, and that cannot be solved with a high degree of certainty” (King & Kitchener, 2004, p. 5). Reflective judgment moves, first, from assuming that knowledge is certain; second, to recognising that knowledge creation involves uncertainty; and third, to using evidence and an understanding of context to support one’s cognitive outcomes. This progression, however, assumes a certain level of complexity in the actor’s thinking. Their work criticised two fundamental assumptions of stage models: firstly, that individuals operate at one stage at any given time, against which, using Fischer’s (1980) skill theory model, they confirmed developmental variability; and secondly, that these stages were cross-culturally valid. They also regarded performance as domain specific, in contrast to ego stage models, where development is applied across all domains.
In summary, this section has demonstrated that the qualitative nature of stage development suggests that new objects of awareness at the higher stages are not within the limits of awareness for the lower stages (Richards & Commons, 1984). Also, according to the Model of Hierarchical Complexity (Commons, 1984) stage is a property of (or function of) subject behaviour or response (Commons, Trudeau, Stein, Richards, and Krause, 1998). Further to this, there are potentially a number of intentions that are not available to those at the lower stages of development but are available at the higher stages. It is not only the meaning-making that changes, but the intention behind the meaning being created.
As researchers focused on task complexity, they understandably concluded that ‘stages are an epistemological competence internal to an individual that is separable from performance’ (Commons et al., 1984). However, Commons and Richards (1984) claimed that one can only assess the stage of response required by a given task, and thus what is measured is simply one’s performance in that task, which comes with its own level of complexity. An important question to ask about Commons’ work is: what if the task is not the task, and ‘thinking’ is the task? Commons did not consider this, and it warrants further discussion, as he also stated that there is no such thing as competence. If one were to consider thinking on a dynamic scale, with complexity demands increasing as one ascends, then the act of thinking becomes the task, and competence might exist by a different name: dynamic intelligence. Commons et al. did not consider this perspective, as far as is discernible in the literature, and it is open for discussion later.
It would also seem there is more to the transition between stages, or sub-stages, than has been demonstrated in this review so far, and what is not being offered is the intention of the thinking and behaving in context. The intention behind a particular way of thinking, rather than a level of cognitive complexity, might help to answer this puzzle.
It is apparent from the literature that the transition between stages is less than coherent in the developmental theories reviewed. It is thus necessary to separate stage theory from stage transition in order to discover the actual transition through the stages of development. Stage transition is discussed next.
“… in psychology there are experimental methods and conceptual confusion” (Wittgenstein, 1953, p. 232e)
As discussed in the previous section, the concept of stages is somewhat contentious. It raises the question of whether it is actually possible to transition from one stage to the next, or even to regress under duress. Research on the movement between stages focuses on two things: transition and transformation. ‘Transformation’ describes the qualitative difference between orders of consciousness in terms of meaning-making and how we move between them (Kegan, 1982), or, in Loevinger and Blasi’s (1976) terms, from one level of ego development to the next. ‘Transition’ is the incremental movement between stages and is explained in the Model of Hierarchical Complexity (Ross, 2008).
There is no reason to use stages as the de facto measure; it would be equally valid to speak of developmental phases, levels, cycles, layers, seasons, and so on (Levinson, 1986). A number of psychologists use levels as their preferred measure: Fischer’s (1980) theory of thirteen hierarchical skills, Turiel’s (1983) social domains approach with its seven major levels, and Karmiloff-Smith’s (1992) model of representational re-descriptions are three such examples. However, it is questionable whether another name for what is essentially a heuristic for mapping, rather than explaining, developmental change is any more useful or clear. The development of children is primarily focused on strategies for skill acquisition, which is not the main focus of this study. With this in mind, it is more appropriate to focus on the adult development stage transitions mentioned above.
If we consider the many adult developmental psychologists and researchers who extol the use of stages to demonstrate their theories, a pattern arises: each theoretical position apes another, without deviating from an accepted norm. Table 1.3 shows this alignment, in which there is an element of isomorphism, and from it one can see the convention for naming and categorising stages of adult development.
Table 1.2: Developmental Stage Theories
However, this does not immediately reveal the process of growth, the actions of development, and the constituent parts of that process. What this review is interested in is the transition between these stages, an individual’s awareness of this growth, and the process of growth in one’s thinking that propels cognition vertically.
A secondary aim of this review is to understand the transition between these stages in a way Peacocke (2007) did not cover. A potential issue with Kegan’s (1996) and Laske’s (2008) interpretations of their level-specific behaviours is that both assign higher levels on their respective scales than the behaviours warrant, as the example behaviours given do not demonstrate a high level of self-awareness in the moment. See Appendix 1 for a full description of Laske’s developmental stages.
Peacocke (2007) claims in his ‘Principal Hypothesis’ that mental events of judging, deciding, reasoning, and so on are a class of action, and that they produce an efferent copy which offers immediate, non-sensory information about them. Although this is motorsensory, Peacocke is nevertheless offering an explanation of how one’s thinking could be aware in the moment. The premise of his theory is that one can try (and fail) to believe something, and one can try (and fail) to forget something.
However, it is erroneous to claim that believing and forgetting are mental actions in themselves; rather, they are events (or states) that direct action will bring about. Peacocke is frequently rebutted for being too shallow in his argument (Zimmerman, 2006; Fernandez, 2006), and it could further be argued that he omitted the facet of ‘intention’ in one’s thinking that drives the mental behaviour in the first place. It could also be argued that trying to forget is itself an intention, which might be more important than the action, and Peacocke does not attempt to address this. The argument put forward by Carruthers (2009) to rebut Peacocke’s claims is not complex enough, and thus does not sufficiently repudiate Peacocke’s argument, when he states that:
So while I am happy to accept the criterion that an action is an event that constitutively involves a trying, I want to emphasize that the fact that we describe ourselves as trying to decide, or trying to imagine, does not yet settle the active status of the attempted events. (Carruthers, 2009, p. 142)
When asked how one can have first-person knowledge of mental events, Peacocke’s response was: in the same way we have knowledge of our physical actions. For this reason, Peacocke’s Action Awareness theory is not developmental, as it is focused on the motorsensory aspects of awareness; this might today be labelled ‘somatic intelligence’ (Carruthers, 2009). Whilst fascinating, bodily knowledge has little to do with awareness of oneself as a cognitive being (Carruthers, 2009, p. 134). However, how we know what we know about our mental state (our epistemic knowledge) is arguably more fundamental than either knowledge of our traits, as in personality, or knowledge of our self as an ongoing mental state. The way we move from one state of knowing to the next, as described in the last section, is the focus of this section.
Developmental stages form the foundation of Piaget’s (1983a) theory of cognitive development, Erikson’s (1968) theory of psychosocial development, Kohlberg’s (1984) theory of moral development, Commons’ (2008) model of hierarchical complexity, and so on. Stages of development are also accepted by several neo-Piagetian theorists (e.g. Case, 1985), and have been applied to various domains such as belief/faith (Fowler, 1981), progress in art (Gablik, 1977), aesthetic experience (Parsons, 1987), education (Egan, 1997), and more. Awareness is also constrained by the slowness of stage transition: Armon and Dawson (1997) showed that people generally transition [a stage] roughly every two years. The question arises here: is there a shortcut one can take that would expedite transitioning?
The fact that many developmental theories in differing fields depend on stages to define progress lends support to the concept of stages existing, despite the variety of meanings made. Where executive control functions are at the heart of Case’s (1985, 1992) theory, they play no part in Piaget’s (1983) or Kohlberg’s (1984) theories. It could be argued, then, that the position of meaning-making could be a potential problem for children and adults alike when referring to a stage. By extension, it could also be argued that the idea of stages is dominant in development research because stages are what is researched: the argument is tautological, and testing for stages becomes a self-fulfilling prophecy, as we inevitably find what we seek.
Without the concept of stages, though, we would not have these useful heuristics for mapping the developmental path of children and adults on a continuum (Lourenço, 2016). To paraphrase Voltaire (1768), if stages did not exist, it would be necessary to invent them. A variety of researchers have tackled the question of what creates development in a number of ways: social context has been a major contributor, with its signs, tools, and practices influencing development (Vygotsky, 1978; Wertsch, 1985; Cole, 1988; Rogoff, Mistry, Goncu & Mosier, 1993). The nativist approach attributes development to the growth of our innate abilities (Gesell et al., 1940), seen specifically in language development (Chomsky, 1980; Fodor, 1975). Earlier thinking aligned a person’s genetic constitution and actual experiences as the drivers of development (Gottlieb, 1991; Scarr, 1993). However, genetic constitution is an inadequate factor when it is not clearly defined. A more recent attempt by Plomin (2018) might shed new light on this; however, it is beyond the scope of the current study.
The scientific field of consciousness development can trace its origins back to Piaget’s (1948, 1954) work on the cognitive development of children and adolescents. He later showed how cognitive development manifests in a meaning-making process (Piaget, 1972) that becomes increasingly complex, which ultimately influences how we construct ourselves in the world. The process of refining one’s perspective of self in the world is made possible by feedback loops (Nelson, 1996). Further, Piaget (1970) observed that the tautological implications of one’s mapping of the world reinforce our future mapping of the world, as we assume our maps must be correct, which is philosophically not the case. This tacit false understanding of one’s world limits our potential to be truly aware of our construction of ourselves in the moment, as we assume we are correct in our construction. Nelson, Kruglanski and Jost (1998) demonstrate in their review of metacognition that the various sources of information available when one tries to assess one’s self-knowledge and knowledge of others only really provide the ‘raw materials’, which then require interpretation in light of other implicit theories. This strongly contradicts the notion that we are in control of our thinking and behaving. It also emphasises that what we think we know is the result of a complex construction process (Lories, Dardenne & Yzerbyt, 1998).
The best demonstration of the need for growth, according to Piaget (1971), occurs when information arrives that contradicts our existing map of the world and we are forced to remap our meaning-making system. Moving this research beyond adolescence, Wilber (2000) demonstrates a similar stage sequence for adults, where stages of consciousness are comparable to world views: meaning-making systems that are simultaneously cognitive, affective and functional (Cook-Greuter, 1999; Wilber, 2000). Certain psychologists make many claims about what it is to be at a particular stage of development, from a person’s ability to recognise and remedy emotional experience, to determining a person’s profound life purpose, and how this affects behaviour in context (Cook-Greuter, 1999). These claims are much less profound when one reads the literature and realises that the ‘how’ of this understanding is not explained in detail by the psychologists in question.
The study of the mechanisms of change has been undertaken since the early 1980s by such psychologists as Kuhn and Ho (1980) and Kuhn and Phelps (1982), who used microgenetic methods to observe the evolution of finely-tuned behaviours over time. The fact that their processes involved observation suggests a level of experimenter input that is susceptible to the observer effect. Once merged with Vygotsky’s dynamic assessment method, this did provide a more robust measure of development over time than the snapshot method employed previously (Kuhn, 2009); however, it still did not break down the specific facets of the developmental process.
There is a long history of difficulty in demonstrating empirically the existence of developmental stages (Commons, Trudeau, Stein, Richards & Krause, 1998). Traditional stage theory has been criticised for failing to demonstrate that stages exist as more than random descriptions of observations of sequential changes in human behaviour (Kohlberg & Armon, 1984; Gibbs, 1977, 1979; Broughton, 1984). Fischer, Hand and Russell (1984), along with Case (1985), have demonstrated the problems of mistaking developmental sequences of behaviour for traditional concepts of stage development in the search for empirical evidence. Sequential acquisition of behaviour can obviously be demonstrated empirically, even though it is still effectively only a snapshot. However, Campbell and Richie (1983) have suggested that an empirical demonstration of a stage would involve a qualitative difference between one stage and the next. This has proven more elusive (Commons et al., 1998).
The notion of ‘stage’ in Commons et al. (1998) is based on the hierarchical complexity of tasks and, subsequently, on the actor’s performance on those tasks. Commons’ General Model of Hierarchical Complexity uses the hierarchical complexity of tasks as the basis for his definition of stage (Commons & Richards, 1984), and is described as:
Roughly, hierarchical complexity refers to the number of nonrepeating recursions that the coordinating actions must perform on a set of primary elements. Actions at a higher order of hierarchical complexity: (a) are defined in terms of the actions at the next lower order of hierarchical complexity; (b) organize and transform the lower order actions; (c) produce organizations of lower order actions that are new and not arbitrary and cannot be accomplished by those lower order actions alone.
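The recursive character of this definition can be rendered as a toy sketch, offered purely as an illustration and not as part of Commons’ formalism: primary elements sit at order 0, and a coordinating action sits one order above the actions it organises. The class name `Action` and the coordination rule are assumptions made for the sake of the example.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Action:
    """A toy action whose order of hierarchical complexity is derived
    recursively: primary elements (no subactions) are order 0, and a
    coordinating action is one order above its highest subaction."""
    name: str
    subactions: List["Action"] = field(default_factory=list)

    @property
    def order(self) -> int:
        if not self.subactions:
            return 0
        return 1 + max(a.order for a in self.subactions)


# Example loosely modelled on Commons & Richards (1984): addition
# coordinates acts of counting; multiplication coordinates addition.
count = Action("count")
add = Action("add", [Action("count"), Action("count")])
multiply = Action("multiply", [add, Action("count")])

print(count.order)     # 0
print(add.order)       # 1
print(multiply.order)  # 2
```

On this toy reading, an actor’s ‘stage’ would correspond to the highest order of action they can successfully perform, which is the sense in which task hierarchy, rather than age or description, anchors Commons’ definition.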
However, the decision-making process in the moment is missed by Commons et al., who admit that: “If one measures less often, one sees jumps in performance or gaps between subject performance measured at one time. If one measures more often, one may see what appears more like continuous acquisition.” It could thus be argued that the issue is not how often one measures, but the actual process of stage transition taking place, which appears to have an element of task familiarity: the more one does a task, the better one gets at it. In other words, there is an argument for no stage transition, or hierarchy of stages, but instead a continual holarchy of growth. With that in mind, the hierarchy of stages is defined by a variety of psychologists, from Piaget to Vygotsky, Torbert to Cook-Greuter, Kegan to Laske, and as such there is no one unified explanation, despite them appearing to measure very similar theoretical outputs. The commonly accepted four stages of development are: pre-conventional (childhood), conventional, post-conventional and transcendent (Miller & Cook-Greuter, 1994). However, it could be argued that the act of labelling development a ‘stage’ movement has influenced the field to such an extent that developmental psychologists have been caught in a philosophical, ontological and tautological trap: we get what we look for. Further, it could be argued that what is missing is a unified theory that underscores the intention in the moment that leads to awareness and choice in a context-specific response. The true subject of developmental research is change. What has been omitted so far is the unconscious intention behind the change. Cross-sectional ‘snapshots’ of a person’s encounter with an experience can be developmentally limited in that they portray a different second approach to a problem based on the fact that it has been attempted previously (Kuhn, 2009, p. 109).
According to Kuhn (2009, p. 109), the ‘dynamic assessment’ that goes back to Vygotsky provides an informative picture of how an individual functions. In their microgenetic studies, Kuhn and Phelps (1982) found that individuals have a range of strategies that they employ over time, and that the most common developmental change is one in which new, more effective strategies gain ground and older strategies are used less often. What seems to be important for growth is strategy selection, rather than simply measuring performance in any experience. This potentially translates to a change in intention, but it is not defined as such by those mentioned. They go on to say that the more difficult challenge in the process of development is not the acquisition of new strategies but the letting go of old ones, contrary to how development was previously conceived (Kuhn & Phelps, 1982). What is not discussed here is the intention behind the transition and how it drives the new strategies. A new strategy might not arise out of old behaviours without a fundamental shift in an individual’s awareness of the need to change and their intention in the moment.
If we consider the cognitive development of humans not as stages but as a continuous and contiguous developmental ‘onion’, with each ‘layer’ interacting with its neighbour in the fashion of a holarchy rather than a hierarchy, then growth becomes much easier to understand and we remove entirely the need for stage transition as an explanation for human development. It also becomes much simpler to explain from an intention perspective. This opens the door to the idea of backward transition, whereby an individual moves from a higher-level skill (not necessarily a cognitive capacity) to a lower-level skill in order to construct the higher-level skill more effectively (Fischer & Granott, 1995; Granott, 2002). This allows the individual to be more flexible in devising solutions to more complex tasks. Livesay (2015) described this phenomenon as ‘fallback’, which he attributed to leaders as part of their growth towards postconventional development. Like Fischer and Granott, he saw this as a positive step, as it often indicates an opportunity for accelerated growth once one has fallen backwards.
The hierarchical and integrative organisation of the stages of complexity development is almost always illustrated as a spiral diagram, in which it is implied that each successive stage is higher than the previous one, incorporating and subsuming the strengths and weaknesses of the previous stage, as already mentioned (Piaget & Inhelder, 1969). Kegan (1982) states (cited in Baron & Cayer, 2011):
“individuals who have reached the second conventional stage of consciousness, the so-called Expert stage, are able to easily communicate their opinion of the technical quality of a colleague’s work (an object of awareness), because they are less subject to the need for group approval and the conformist strategies that hold sway in the previous stage…”
He does not, however, say how he knows that the person can ‘easily communicate’, nor how they gauge the ‘technical quality’ of the work. It would be advisable to question the scale used: what lets them know what is ‘good’, and what does ‘quality’ mean? These are two very different ways of being within Kegan’s Expert stage, about which Kegan is either not explicit or not aware. Nor is this obvious elsewhere in Kegan’s writing: he uses the word ‘transition’ and aligns it with tension or dissonance in a person’s thinking that leads to a struggle in the process of growth, yet he stops short of describing the actual act of this transition, instead referring to it as a ‘continuum until fully embedded within the new (higher) order’ (Kegan, 1994). Piaget’s own position on the use of stages was:
Why does everyone speak of stages? …One tries to construct stages because this is an indispensable instrument for the analysis of formative processes. Genetic psychology attempts to envisage the construction of mental functions, and stages are a necessary instrument for the analysis of these formative processes. But I must vigorously insist on the fact that stages do not constitute an aim in their own right. I would compare them to zoological or botanical classification in biology, which is an instrument that must precede analysis (Piaget, 1977, p. 817).
When we consider that the Formal Operational stage represents a higher level of equilibrium than the Concrete Operational stage because formal thinking includes both negation and reciprocity, whereas concrete thinking has only one of the two (negation or reciprocity, but not both; see Inhelder & Piaget, 1958; Piaget, 1960), we can also question whether there is a way to further deconstruct these stages. This aligns with the position of Dawson-Tunik, Fischer and Stein (2004), who state that stages should be a vehicle for analysis, not a core process at the heart of the theory of development. They go on to say:
“we do not think that developmental stages should be the centrepiece of a developmental theory. At the centre of such a theory, we seek fundamental principles that can explain and predict developmental phenomena, not simply describe them. Stages are descriptions of phenomena. Even when stage definitions are highly abstract, they must point to observables. That is their value. They allow researchers to make structured observations of behaviour, and in doing so, provide the possibility of deeper insights into the functioning of the mind” (Dawson-Tunik, Fischer & Stein, 2004). [PAGE NUMBER]
Within Commons’ (2008) model is a repeating pattern of transitional steps that lead to each stage, whether a simplistic movement or a complex transition (Ross, 2008). This requires a level of persistence from the individual in order to achieve their goal, potentially in the face of complexity and ambiguity. One could ask whether it is their intention that needs to change in order to facilitate growth.
Commons (2011) and Ross (2008) are essentially saying that abstraction is the result of reflexivity and the ability to transcend. The symbolism becomes increasingly generic and ambiguous, which means it is open to interpretation and miscommunication by people at the lower levels of adult development. Two people at Laske’s stage 5 have no such miscommunication, as they arrived there on purpose and by choice; the symbolism is accepted and thus not misinterpreted by either, as each knows the interpretation is their own, not the other’s. Again, the intention behind the act of doing this is not explicitly deconstructed in Commons’ or Ross’ explanations.
Commons et al. (1999) also point to a concept called decentration, as opposed to concentration, which means one is willing and able to step out of one’s own perspective and adopt a new perspective on a given problem: in other words, the ability to imagine things that do not exist and to ‘bring them to life’ in the imagination, which Commons calls ‘thinking in n-space’ (p. 297). Hindsight offers another perspective on development: Commons and Bresette (2006) note that the achievements of previous innovators appear easier than they were because the observer is embedded in the ‘now’, after everything that has taken place since the innovation. Those subsequent events were not available to the innovator at the time of conception or discovery, and any sense-making surrounding them is constructed wholly from the position of ‘now’. This creates a different lens through which to view the innovation or discovery (Commons & Bresette, 2006).
Another way of looking at stage transition is called ‘bridging’. Bridging is the process of leaping into the unknown (Granott, Fischer & Parziale, 2002) and is found within ‘microdevelopment’. Researchers in this field focus on the ‘how’ of development (and learning) and attempt explanations of transitional processes (Granott & Parziale, 2002). It is called ‘micro’ because it focuses on the process of change over the very shortest of timespans: from months to just minutes. Early studies by Saada-Robert and Brun (1996) and Inhelder and Cellérier (1992) confirmed and added to Piaget’s ideas on constructivism and interactionism. With constructivism, they emphasised that existing knowledge is not simply applied to a situation: it is reconstructed in accordance with the situation (Saada-Robert & Brun, 1996). With interactionism, they emphasised the role of the situation/environment and ascribed a greater significance to ‘accommodation’ in the relationship an individual creates with their environment. The environment can be seen as micro and macro, where boundaries on development are set at the macro level, which affects the microdevelopment sequences within a given context (Granott, 2002). Granott (1998) also suggests that when social and physical environments are unrestrictive, allowing for agency and initiative, they can encourage microdevelopmental progress, which in turn promotes macrodevelopmental progress. Research shows that bridging is a developmental transition mechanism that people use unconsciously across all developmental levels. The question one might ask here is: if bridging were undertaken from a position of intention and awareness, with a view to choosing to bridge, would the bridge change? This is not obvious in the literature.
As mentioned previously, a snapshot of a person’s state can be developmentally limiting (Kuhn, 2009), although the comparison of snapshots over time can provide an understanding of an individual’s capacity at specific stages or ages (Granott & Parziale, 2002). However, there remains a lack of understanding of how change actually occurs (Siegler & Crowley, 1991). By looking for microdevelopment processes instead of focusing on static snapshots, researchers have been able to highlight the main attributes of change by observing it as a process. Microdevelopment was viewed as a paradigm shift in cognitive growth theory (Lee & Karmiloff-Smith, 2002; Granott, 1998). Bridging allegedly provides one answer to the central question of how more complex structures can be achieved on the basis of less complex ones. It might be suggested that the argument is not for more or less complex structures, but for the actor’s awareness of the intention behind the structure in question. Granott (1993a, 1993b) first observed the mechanism of bridging in a study of adults solving problems: “Bridging appears to be prevalent” (Granott et al., 2002, p. 151). The process can be defined in three ways. Firstly, bridging is a partial, transitional step that does not, in itself, constitute a developmental level. Instead, it represents a search for new knowledge that can result in ‘a glimpse of new development’ (ibid., p. 134). Secondly, bridging operates with ‘not-yet-constructed’ knowledge: it sketches an unknown target level, sets an ‘anchor’ in the next level up and pulls the developmental process toward constructing this level (p. 134). Finally, during bridging, individuals function on two different levels of knowledge simultaneously. In an unfamiliar task, they work on it directly at a low level (their comfort zone), while also working at a higher level where they construct a bridging ‘shell’, albeit one still empty of content knowledge.
They use this higher-level shell to guide their knowledge construction by gradually filling in the shell’s unknown component. Laske (2008) would ask the developmental question here: what are you not seeing that is equally important?
However, it is arguable that ‘bridging’ using a ‘shell’ is simply a synonym for ‘personal scaffolding’ and, as such, offers nothing more to the investigation of how a person transitions from one stage to the next. None of the proffered explanations explicitly states the transition itself; they offer only a description of the bridging process, which is likened to a physical bridge over a motorway. The same concern arises when reading other stage development psychologists in their areas of expertise, such as Commons (1984) and Jaques and Clement (1991). An example is the following, given by Granott, Fischer and Parziale (2002, p. 141), on the transition between stages:
‘As Marvin was putting his hand around the robot, Kevin commented: “looks like we got a reaction there.”’
Kevin implied but did not specify cause and effect by using the word ‘reaction’. He meant this as a bridging term alluding to the unknown cause. It is the word ‘reaction’ in this case that is the target-level shell, as it implies more advanced knowledge currently out of reach, because Kevin and Marvin have not specified a previous causal relationship. According to Granott (1993a), the empty shell indicated progress and assisted the construction of the missing knowledge. However, the process of construction, and the object being constructed within the shell, are missing.
The description goes further and includes ‘pillars for support’, but if we were to break down the above paragraph and discuss what is actually happening when Kevin comments on the outcome of the robot movement (apart from an assumption of agency), the entire paragraph can be negated on the basis of a total lack of meaning-making by the authors. Kevin’s meaning is not explicit in his spoken sentence. Kegan (1982) might question the construction of the meaning by Kevin before allowing it to pass to the listener. Marvin’s meaning-making might not be the same as Kevin’s and, according to Laske (2009), might not be as capable as Kevin’s, thus creating an entirely different interpretation of the language used; a different meaning is therefore implied or inferred by Marvin. How the authors define ‘reaction’ and impose a meaning-making structure around it, which they then label a ‘shell’, is also problematic, unless there is a mutually agreed meaning frame.
This raises the question: what is the ‘bridge’ if it could be different for each person? The authors go on to say that the content (cause and effect) of Kevin’s statement was still missing, and that by sketching a target level, the shell guides toward further development; except that this is not obvious from the literature. It could be argued that what is occurring in Kevin or Marvin is a change in intention, which would lead to development, but this would have to be qualified by a change in their individual awareness. According to Granott et al. (2002, p. 142): “The vacant structure traces a goal for future development and, like an attractor, pulls the process toward it… The target level serves as a magnet, attracting the process of knowledge construction toward a more complex skill.”
The authors explain (ibid., p. 142):
The marker shells serve as place-holders that people use to direct their own learning and development toward achieving these targets… Bridging operates as an attractor in dynamic systems and pulls development toward more advanced, relatively stable levels. The shells serve as scaffolds that guide the construction of new knowledge by providing a perspective for processing new experiences.
The main criticism of this perspective is that the empty ‘shell’ serves as both the bridge and the scaffold, and yet the shell is derived from an intention by the actor to ‘achieve these targets’, which makes the whole theory circular and self-fulfilling. Setting the target, however, defines the ultimate goal that then guides further growth: “Bridging is a self-scaffolding mechanism that bootstraps one’s own knowledge” (ibid., p. 145). The targets are achieved by the shell bridging the gap [to the next level] by being its own scaffolding to the target [level], which attracts knowledge construction towards the new target [level]. The implication is that the missing cause is enough to direct the participants’ observations and actions (ibid., p. 142). The authors go on to say that after the bridging statement, Kevin and Marvin focused on discovering the missing causality. It is arguable that this is a shift in their intention due to a raised awareness, which created a new choice in their thinking and behaving. Granott et al. (2002) even suggest that: “The phenomenon is paradoxical, since this goal-oriented process is guided by an unknown goal.” When such goals are set, the way to reach them (bridging) is unknown; however, the intention is known: to reach a new goal. One is compelled to ask whether something has changed in the individuals’ thinking in order to progress their intention towards the goal. This seems to be missing from the explanations given by Granott et al., who treat the bridging shell as a perspective-guiding entity rather than as the seeking of missing knowledge. According to Harney (2018), bridging clarifies how people create goals for their learning and development and then directs them to construct new knowledge. However, the ‘how’ of this construction is missing.
It might also be considerably influenced by a person’s level of adult development (Kegan, 1994), in the sense that an individual at Stage 2 would have a very different mechanism for constructing their goal state and knowledge-acquisition process from that of an individual at Stage 4. This again seems very circular. There is no suggestion that a person can self-develop when limited by their level of development, as discussed by Laske (2008), who suggests that those at the socialised-mind stage of development (his stage 3) will not be capable of seeing the stretch necessary to grow their thinking vertically unless it is pointed out by a more complex other.
What Granott, Fischer and Parziale (2002, p. 151) call a growth motivation could also be considered an intention: “…we have demonstrated, a need can guide learning and development… A need expressed in a vague statement may be a sufficient trigger for developmental change.” Kuhn (2002) examined the systematic construction of knowledge and concluded likewise that: “we cannot fully understand the kinds of knowing and knowledge acquisition that people engage in without understanding their beliefs about knowledge.” In other words, their epistemic stance, or how they know what they know.
Other theories on stage movement are considered by McGuire and Rhodes (2009) who describe vertical stage development as a three-step process:
(i) a person awakens to new possibilities of sensing and doing things
(ii) the person then challenges and unlearns assumptions, and tests new assumptions
(iii) new ideas get stronger and begin to overtake the previous ones
Harney (2018) makes the assumption that this is how individuals proactively shift to a later stage of development. However, this is an assumption, because the explicit process in step (ii) is ambiguous. As has been noted, an individual’s capacity to ‘unlearn’ something is predicated on their awareness of their need to give up the information in order to move on to the next level of thinking. This is not possible in Laske’s (2015) CDF, as his stage 2 person does not know what they do not know, and has no capacity to gain the information.
Palus and Drath (1995) also list criteria for an individual’s capacity and willingness for growth. Some considerations include: openness to new ideas, complexity of job challenges, stability of current life circumstances, and environmental conditions. Marko (2011) investigated the triggers that allow developmental change to occur, defining a trigger as a construct that (p. 87):
provides the impetus for ego development to occur or signals the occurrence of ego development.
He asked questionnaire participants to recall a critical incident they felt had influenced or changed their perspective or way of being, and inferred from this a three-phase process: a beginning (the trigger); a middle, consisting of incremental changes, though, as previously, with no specific method for determining how or from what these steps are constructed; and a final step that allowed the individual to ‘let go’ (p. 91) of the old ideas and embrace the new understandings of the next level. This is akin to both Kegan’s (1994) method and Laske’s (2008) process.
Petrie (2014) mirrored Marko’s findings and presented three alternative elements that constitute growth between levels of adult development, which Petrie (2015) later refined and termed the three primary conditions for development.
This multilateral model aligns with the findings of Manners & Durkin (2001) and Manners, Durkin & Nesdale (2004).
Every transition involves, to some extent, the killing of the old self… (Kegan, 1982)
Kegan (1982, 1994) suggests that consciousness development is brought about by experiencing the gap between one’s meaning-making and the challenge one faces. He goes on to say that it is rarely deliberate and can be painful. Kegan (1982) echoes others, such as Vygotsky (1978), when he argues that human development cannot be considered independently of the social environment. Further, development, it could be argued, could be facilitated by the availability of what Winnicott (1965) referred to as ‘holding environments’. Vincent, Ward and Denson (2015, p. 242) highlighted a similar finding, which they also called ‘holding environments’ due to the naturally ‘disequilibrating’ nature of the experience. Ashforth et al. (2008) used the term ‘identicide’ to describe the same ‘disequilibrating’ phenomenon.
Deci, Ryan and Guay (2013) state that the social environment plays a critical role in an individual’s cognitive development by supporting their psychological needs. It could be argued that the authors of the disequilibrium perspective, such as Kegan, have written about stage transition from the perspective of Stage 3, the socialised mind, which would indeed feel emotional pain at the apparent ‘loss’ of self if told that this is what it should feel. From a Stage 4 perspective, however, the transition would simply be accepted as part and parcel of growth as one’s awareness changes. In this sense, the context, or holding environment, becomes a contributing factor, but not necessarily a negative one.
Interestingly, Vincent, Ward and Denson (2015) explored the impact of leadership programmes in Australia on vertical development, aligned with Manners and Durkin’s (2000) conceptual framework. A framework was created that provided forms of self-assessment which enhanced participants’ self- and other-awareness, such as peer assessments, coaching and case studies that stretched their thinking beyond their own organisational level. Vincent et al. (2015) found, not surprisingly, that more challenging psychological development activities were necessary in order to trigger development. It follows, then, that in the context of academia, if there is to be a fundamental shift in how post-graduate students construct their thinking in such a way as to facilitate vertical growth, they are going to need to be exposed to more adaptive problems, as experienced in industry. If the context remains unchanged, no amount of development is going to influence the post-graduate student’s thinking without some disruption from a more complex guide acting as the ‘trigger’, as internal scripts and cognitive links are often resistant to change (Young & Wasserman, 2005).
Understanding a post-graduate student’s stage of adult development, and ensuring a supervisor is at least at the same stage, will improve the chances of a successful supervisory relationship (Garvey-Berger, 2012; Bennet, 2010; Kegan, 1994; Laske, 2006). Writing on the topic of vertical development, Cook-Greuter (2004, p. 277) states:
Only specific long-term practices, self-reflection, action inquiry, and dialogue as well as living in the company of others further along on the developmental path has been shown to be effective.
In line with Vincent, Ward and Denson’s (2015) perspective above, Garvey-Berger (2012) gives the same advice: a problem should be one manager removed. In other words, participants on any development programme should be solving the problems faced by their manager’s manager in order to stretch their thinking. From an academic perspective, this might not be possible. However, a post-graduate student can have their thinking challenged by exposure to their level of awareness, which will result in different behaviours in context.
The concept of stages has been rejected by a number of behavioural psychologists, most notably Skinner (Skinner & Vaughan, 1983). Behaviourists acknowledge that a chronological acquisition of various behaviours occurs; however, they have not yet provided a conclusive process or method. Gewirtz (1991) stated that Piaget’s stage process is particularly objectionable, as it assumes automatic growth rather than development being produced by the interaction between actor and environment.
From a post-graduate student’s perspective, this could be a significant difference that might influence their final grade. Those who focus on IQ studies and maturation (Binet & Simon, 1916; Gesell & Amatruda, 1947; Terman & Merrill, 1937; Wexler, 1982) consider development a sequence rather than a set of stages. There are also those who reject stages and characterise development in terms of periods in one’s life: Erikson (1978, 1982) and Levinson (1986) regarded these periods as specialised, whereas Flavell (1963) and Alexander, Druker and Langer (1990) held that there are only three or four broader periods in life, which are sequential but not hierarchical, and not strictly organised. Development from a period perspective might be characterised more as socialisation, whereas stage development, as mentioned, might be considered transformation. Finally, as mentioned in the previous section, a number of stage theorists, such as Campbell and Bickhard (1986, 1992), have provided an analysis of the limitations of stages. These theorists undoubtedly accept the notion of adult stages and provide a psychological analysis of them. The issue, according to Commons (1984) and Jaques (1991), is that this neglects the task analysis that would support their developmental claims. It is appropriate that Campbell and Bickhard reject task analysis as an insufficient explanation of the problem of stage transition, but their logical analysis does not offer an explanation of what the difference is that makes the difference in task complexity. A levels account such as Campbell and Bickhard’s, and those of other developmental psychologists, does not offer a sufficiently detailed account of inter-developmental steps. It is here that one might enquire about an individual’s intention to grow, or awareness of the growth potential, as a possible catalyst, as well as any habituated patterns of thinking that unconsciously propel an actor to act.
The main principle discussed in this section is that the transition from one level of thinking to the next higher level is not made explicit as a process by the respective theorists. It is hinted at, suggested and described; however, it is not explained in a way that would cement the theory as one of stage transition.
As has been discussed, as a result of the work of researchers such as Loevinger, Kegan, Cook-Greuter, Torbert and many other scholars in this domain (e.g. Kohlberg, Erikson), there is a wealth of literature regarding what the various stages of adult development look like in terms of degrees of meaning-making complexity. There is also a corresponding number of measurement instruments which measure degrees of cognitive complexity, such as Laske’s (2007) Cognitive Development Framework or Loevinger’s (1979) Sentence Completion Test. The general principle throughout the sections on stage development and stage transition is that an individual’s growth influences how he or she copes with uncertainty and ambiguity. For example, Kegan’s Stage 3 individual (socialised mind) will seek guidance from experts (External) to find the best way to achieve their goals. Kegan’s Stage 4 thinker (self-authored mind) follows endogenous principles, whilst determining multiple ways of moving forward to achieve their goals. Kegan and Lahey (2001) describe the process as one of: “outgrowing one system of meaning-making by integrating it (as a subsystem) into a new system of meaning-making”. [PAGE NUMBER?] Kegan (1982, 1984, 1994), along with Laske (2008), states that a person’s stage growth involves a tension between stagnation and movement within the cognitive, moral and psychological arenas, and can thus be a painful one that triggers a loss of self when moving, for example, from Stage 3 to 4. Were a post-graduate student to go through this transition whilst in academia, they might require scaffolding by a more complex other (MCO), as mentioned above. Developmental growth refers specifically to the transition by the individual to a more complex (cognitively-capable) mindset.
This vertical growth is predicated upon an individual’s unconscious intentions, which are embedded in the subjective self and thus cannot be reflected upon until they are brought to awareness (made object) and then made available for reflection (Yorks & Nicolaides, 2012). How this reflection then results in a transition to a higher level of cognitive complexity is an enigma which numerous scholars continue to question (Spence & McDonald, 2015; Harmer, 2015; Hawkins, 2015; McLaughlin, 2014). One potential factor for growth is a person’s intellectual capacity. The general mechanisms underlying learning and problem solving, as discussed, depend on developmental processes that include an individual being more mentally efficient, capable, foresighted and flexible. An individual, whether child, post-graduate student or adult, who demonstrates these qualities could be considered more intelligent, traditionally speaking.
Even though Descartes (1637) summed up the essence of being with his cogito ergo sum, he did not presume that one acquires knowledge through one’s thinking. However, it is clear that both thinking and knowledge acquisition take place internally (Brown, 2016; Rumelhart, Hinton, and Williams, 1985). It would be easy to fall into the trap of assuming that thinking is learning; however, research shows it is not that simple (Bannister and Fransella, 2013). The fields of cognitive psychology and developmental psychology can be used to form a guide for determining a student’s preferences for thinking (Armezzani and Chiari, 2014; Flavell, 1976).
It has been argued that an individual’s identity and thinking patterns contribute to their preferred cognitive styles (Borgatta, 1964; Buss, 2009). What is evident from the literature is that one’s personality, biology, and specific demographic variables only serve as a knowledge base or foundation, and not the actual thinking intention (Bluck, Alea, and Ali, 2014; Chen and Lee, 2013; Figueredo et al., 2010; Hopwood et al., 2011). These variables act as a foundation from which an individual builds up their personality. Students do not think, or apply thinking to their learning through a process of steps, because of what type of personality they might have (Evans, 2010). Every person thinks, and every baby is born on a relatively even playing field (Blonigen et al., 2005), without prior knowledge. However, they do construct knowledge through developmental stages as a result of interactions with the environment (Piaget, 1954; Blonigen et al., 2005). How one takes these interactions and approaches the thinking process links to one’s natural ability for thinking, and in turn establishes the measure of one’s preferences for thinking (Buss, 2009; Goldberg et al., 2006).
There is a difference between thinking and the application of what is thought, reflected, or inquired (Bannister and Fransella, 2013). Thinking is the action, and possibly learning is the by-product (Baumann et al., 2014; Bock and Kim, 2002). A person’s potential to form an awareness of their intention remains conditional on their level of adult development (Kegan, 2009), which the literature interlinks with theories about how thinking takes place within social interaction in cognitive constructivism (Burr, Giliberto, and Butt, 2014). The modern world has created such a strong ideology of individualism, and it is this individualism that has led to a sense of being ‘special’ (Burr, Giliberto, and Butt, 2014; Buss, 2009). How this influences the cognitive constructivist perspective is intriguing, because individualism defines one’s goals and dreams (Borkenau et al., 2001; Cattell, Eber, and Tatsuoka, 1970), which in turn are the accumulation and combination of one’s experience in context. Each person’s thinking is both limited by and liberated by their sense of uniqueness, with goals and intention being discoverable via the deconstruction of their thinking styles.
It would be beneficial for educators to understand how students think, allowing strategies and curricula to be adjusted so that learning fits the thinker’s approach (Touw, Meijer, and Wubbels, 2015). This allows for a teacher/student interaction that takes advantage of a person’s style of thinking (Costa et al., 1984; Brown, 2002). Acknowledging the fact that different thinking styles and approaches exist is paramount (Merlevede, 2005). Taking into consideration how the individual thinker will apply cognitive constructivism from the social range of filtering experiences remains of interest (Ashton, 2004; Burr, Giliberto, and Butt, 2014; De Raad and Perugini, 2002). As we read in the previous section, knowledge of one’s stage of development allows educators to examine the relationship between their teaching style and the student’s preferred learning style (Pashler, 2009). In a continuously stimulating and challenging academic environment, it would be more beneficial to know how post-graduate students think rather than what they think or know (Touw, Meijer, and Wubbels, 2015). In order to have a more profound idea of how, we must look at intelligence in general and how it might apply to post-graduate students.
In 1994, a group of over fifty experts in the scientific study of intelligence and associated fields provided the following collective definition of intelligence in the Wall Street Journal:
Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings — “catching on,” “making sense” of things, or “figuring out” what to do.
This could be considered a judicious definition of intelligence. It includes a description of behaviours relating to attention, perception, and learning that are key aspects of intellectual functioning. It also allows for the common definition of how ‘smart’ a person is by covering the obvious notions of reasoning and problem solving (Kaufman, 2015).
According to Gardner (1983) and Kornhaber, Fierros & Veenema (2004), there are certain criteria for the identification of an intelligence.
Within academic psychology, Spearman’s theory of general intelligence (g) is the prominent concept of intelligence taught (Brody, 2004; Deary et al., 2007; Jensen, 2008) and the basis for more than 70 IQ tests on the market (e.g., Stanford-Binet Intelligence Scales Fifth Edition, Roid, 2003). Some contemporary researchers have maintained that intelligence is influenced by environmental factors (Diamond & Hopson, 1998; Lucas, Morley, & Cole, 1998; Neisser et al., 1996; Nisbett, 2009). Others suggest it is innate and one can do little to change it (Eysenck, 1992; Herrnstein & Murray, 1994; Jensen, 1987, 1998; Plomin, 2018). Other psychologists, such as Thorndike (1920; Thorndike, Bregman, Cobb, & Woodyard, 1927), conceived of intelligence as the sum of three parts: abstract intelligence, mechanical intelligence, and social intelligence. Thorndike’s ‘Social Intelligence’: ‘an ability to understand men and women, boys and girls; to act wisely in human relations’ (as cited by Salovey and Mayer, 1990) can be thought of as:
This list aligns with Kegan’s (1994) Socialised-Mind Stage 3 thinker. This is limited in its approach to (higher level) cognitive thinking as it is hamstrung by the emotional needs of the thinker. However, as Cronbach (1960) stated, “fifty years of intermittent investigation … social intelligence remains undefined and unmeasured.” Thorndike himself acknowledged the fact: “whether there is any unitary trait corresponding to social intelligence remains to be demonstrated.” (as cited by Salovey and Mayer, 1990). However, there might be an argument for a social need or drive as per Kegan’s and Laske’s developmental perspectives.
There is a long list of psychologists who have considered deconstructing ‘intelligence’ into a variety of categories. Thurstone (1938; Thurstone & Thurstone, 1941) argued that intelligence could better be understood as consisting of seven primary abilities. Guilford (1967; Guilford & Hoepfner, 1971) conceptualised intelligence as consisting of four content categories, five operational categories, and six product categories; he eventually suggested there are 150 different intellectual faculties. Sternberg (1985) offered a triarchic theory of intelligence that identified analytic, creative, and practical intelligences. Finally, Ceci (1990, 1996) described multiple cognitive capacities that allow for knowledge to be acquired and relationships between concepts and ideas to be considered. Decades earlier, in 1940, Wechsler had stated: [PAGE NUMBER]
“The main question is whether non-intellective, that is affective and conative abilities, are admissible as factors of general intelligence. [My contention] has been that such factors are not only admissible but necessary. I have tried to show that in addition to intellective there are also definite non-intellective factors that determine intelligent behaviour. If the foregoing observations are correct, it follows that we cannot expect to measure total intelligence until our tests also include some measures of the non-intellective factors.” (Wechsler, 1943).
This led to Gardner’s Multiple Intelligence theory, discussed in the next section.
However, neither Willingham (2004) nor other ‘g-centric’ theorists have as yet provided an acceptable definition of General Intelligence. One might argue that g is simply the common factor that underlies the set of tasks devised by psychologists in their attempt to predict academic success. There is an element of isomorphism here. It is also possible that g measures speed or flexibility of response; capacity to follow instructions; or motivation to succeed at an artificial, non-context-specific task. When one calls something an ‘intelligence’, the ‘thing’ becomes reified in logic, and an agreed definition of what it is and what it might demonstrate is wrapped up in the meaning-making for that word. If we were to consider ‘intelligence’ a misnomer, how would this open up the debate on what an intelligence is, and how would we then define it not as a skill acquisition, but as a dialectic approach to thinking in the moment that defines a thinking capacity or capability? Although Spearman’s ‘general’ factor of intelligence has been discussed here (Jensen, 1998; Spearman, 1927; see essays in Sternberg, 2000; Sternberg & Grigorenko, 2002b), in reality no one is good at everything or bad at everything. Part of successful intelligence is deciding what to change, and then how to change it (Sternberg, 2003a). To go one step further and question the intention and awareness of the individual making the change would offer an awareness of what it is to have ‘successful intelligence’ in this context.
Although Multiple Intelligence theory challenges Spearman’s (1927) concept of general intelligence, it is not the only one to suggest intelligence is pluralistic. The theory of multiple intelligences was developed by Gardner in the late 1970s and early 1980s, and it hypothesises that individuals possess eight (or more) relatively autonomous intelligences. Individuals utilise these intelligences individually or collectively to solve problems (Gardner, 1983, 1993, 1999, 2006). They are:
Musical-rhythmic: These individuals are sensitive to sounds, rhythms, tones, and music. They have sensitivity to rhythm, pitch, meter, tone, melody or timbre, and are quite discerning listeners.
Visual-spatial: This area deals with spatial judgment and the ability to visualize with the mind’s eye. Spatial ability is one of the three factors beneath g in the hierarchical model of intelligence.
Verbal-linguistic: People with high verbal-linguistic intelligence display a facility with words and languages. They are usually good at reading, writing, telling stories and memorising words. Verbal ability is one of the most g-loaded abilities.
Logical-mathematical: This area has to do with logic, abstractions, reasoning, numbers and critical thinking. This also has to do with having the capacity to understand the underlying principles of some kind of causal system. Logical reasoning is closely linked to fluid intelligence and to general intelligence (g).
Bodily-kinaesthetic: The bodily-kinaesthetic intelligence entails control of one’s motor skills and the capacity to handle actions and objects skilfully. Gardner goes on to say that this also includes a sense of timing, along with the ability to train physical responses.
Interpersonal: In theory, individuals who have high interpersonal intelligence are characterised by their sensitivity to others’ moods, feelings, temperaments, motivations, and their ability to cooperate in order to work as part of a group. Those with high interpersonal intelligence communicate effectively and empathise well with others, and may be either leaders or followers, though this characterisation is vague. They often enjoy discussion and debate. Gardner has equated this with Goleman’s emotional intelligence. This seems like a lot of responsibility for one ‘intelligence’.
Intrapersonal: This ‘intelligence’ is about one’s capacity to be introspective and self-reflective. It implies a deep understanding of the self, recognising one’s strengths and weaknesses, how we are unique, and being able to predict one’s own reactions or emotions.
However, there seems to be an assumption in the intelligence theory that people in general are capable of knowing themselves to such an extent as to have an effect on their over-all world view, and we have seen in the stage development section from the likes of Kegan (1994) and Laske (2015) that this is not the case. The ‘deep understanding’ mentioned in this particular intelligence is thus limited by a person’s capacity to know themselves: Stages 2 to 3 on Kegan’s scale. Someone at Stage 5 could have this intelligence in abundance, but whether this could be measured in this context is questioned by Laske (2007) in his CDF system.
Naturalistic: Although not originally part of Gardner’s seven, naturalistic intelligence was proposed in 1995. Gardner stated:
“If I were to rewrite Frames of Mind today, I would probably add an eighth intelligence – the intelligence of the naturalist. It seems to me that the individual who is readily able to recognize flora and fauna, to make other consequential distinctions in the natural world, and to use this ability productively (in hunting, in farming, in biological science) is exercising an important intelligence and one that is not adequately encompassed in the current list.” [PAGE NUMBER]
This ‘intelligence’ is about recognising one’s place in the natural surroundings. Examples include classifying natural forms such as flora and fauna. This ability would have been of value in our past as hunters, gatherers, and farmers. However, this seems a very large generalisation of what constitutes an intelligence (the word ‘ability’ is used), in that semantic memory, procedural memory and declarative knowledge (Cohen & Squire, 1980) seem to be better descriptors of what is actually happening when someone recalls the name (and more) of specific flora.
The difference between an intelligence and a skill (or ability) is a common source of confusion. Skills are the cognitive executions that result from the operation of one or more intelligences (Gardner & Moran, 2006). Sternberg’s Successful Intelligence distinguishes between adapting to, shaping and selecting one’s environment, which goes beyond the conventional broad definition of intelligence as ‘adapting to the environment’. In order to be successful, one must formulate a meaningful and coherent set of goals, and then have the skills and dispositions to achieve those goals. However, one could ask here how one would create the meaning behind the goal, and how this differs, from an adult development perspective, across the stages of awareness. When Sternberg discussed the creation of meaningful goals, he did not consider a person’s capacity to create and think about the long-term ramifications of their decisions. Instead, he focused on three sub-items: identifying meaningful goals, coordinating them in a meaningful and coherent way, and the individual’s movement towards those goals. However, a person’s capacity as a thinker, and how they construct their thinking, will greatly influence what Sternberg called ‘Successful Intelligence’. Also, what appears to be missing from the description is the intention behind the goal-creation, not just the goal itself.
Gardner’s ninth inclusion was ‘existential intelligence’, which concerns the ‘big questions’ such as our place in the cosmos, the significance of life and death, the experience of personal love and of artistic experience (pp. 53-65). He considers religious and philosophical thinking part of our intellectual world, insufficiently represented in the 1983 schema. An argument against this is the concept of religion as a superstition. Anything believed without evidence is a superstition, unlike philosophy, which is a debate. The thinking required to believe something without evidence is not the same thinking, in complexity terms, as one’s ability to discern Hegel’s dialectic, for example. There is an element of philosophical debate within theology; however, it would need to be separated out as a specific way of thinking (about itself), which would then place it within the philosophical arena. The intelligences discussed here have an element of immediate awareness that is missing from Gardner’s theories. For example, by understanding ‘spatial’ intelligence, it is possible to enquire into how much awareness (and thus choice of response) an individual has in their ‘spatial intelligence’. For a deconstruction of Gardner’s intelligences by Meta-Programmes, see Appendix 11.
Kornhaber, Fierros, and Veenema (2004) compiled data on educators’ perceptions of the impact of MI-based methods within education, using interviews and questionnaires. Others have extended this research (e.g., Barrington, 2004), including how MI can be applied to the curriculum (e.g., Dias-Ward & Dias, 2004; Nolen, 2003; Ozdemir, Guneysu, & Tekkaya, 2006; Wallach & Callahan, 1994) and how MI functions within and across different schools (e.g., Campbell & Campbell, 1999; Greenhawk, 1997; Hickey, 2004; Hoerr, 1992, 1994, 2004; Wagmeister & Shifrin, 2000). MI approaches have been credited with greater performance and better retention of knowledge when compared to a more traditional approach (Ozdemir et al., 2006). They have also influenced how children understand content in more complex ways (Emig, 1997). Similarly, teachers have benefited from an MI framework when making instructional decisions (Ozdemir et al., 2006). Teele, who devised one of the primary MI self-administered instruments, suggests that:
“…intrinsic motivation, positive self-image, and a sense of responsibility develop when students become stake-holders in the educational process and accept responsibility for their own actions” (1996, p. 72).
Because academia has tended to favour children with good memories and some analytical abilities, researchers working from an MI perspective explored the question of whether standard education in schools discriminated against children with creative and practical strengths (Sternberg & Clinkenbeard, 1995; Sternberg, Ferrari, Clinkenbeard, & Grigorenko, 1996; Sternberg, Grigorenko, Ferrari, & Clinkenbeard, 1999).
An important question for understanding intelligence is: how does one use the criteria to determine what an intelligence is? Gardner (1983) makes it clear in “Frames of Mind” that there is no ‘algorithm for the selection of an intelligence, such that any trained researcher could determine whether a candidate’s intelligence met the appropriate criteria’ (p. 63). He goes on to say that: ‘the selection or rejection of a candidate’s intelligence is reminiscent more of an artistic judgment than of a scientific assessment.’
Gardner’s and Sternberg’s theories are popular with textbook authors, even though their foundations have limited empirical support. Researchers studying cognitive abilities often produce ‘g’ in their data (e.g., Pyryt, 2000). It has been suggested that textbook authors should present both sides of the argument when representing these theories, as there are many arguments for the weaknesses of both (e.g., Deary, Penke, & Johnson, 2010; Gottfredson, 2003a, 2003b; Lubinski & Benbow, 1995; Waterhouse, 2006), as well as rebuttals (e.g., Gardner, 1995; Sternberg, 2003; Sternberg & Hedlund, 2002). For example, a balanced approach to Gardner’s theory would demonstrate how his denial of ‘g’ runs counter to the evidence of the last century, and that his logical-mathematical intelligence (for example) aligns well with certain factors in modern intelligence testing. Also, he neglected to systematically research his theories by gathering data (Hunt, 2001; Lubinski & Benbow, 1995). In Sternberg’s case, the emphasis on creativity highlights a valued trait (e.g., Guilford, 1950; Subotnik, Olszewski-Kubilius, & Worrell, 2011). However, his practical intelligence is seen as less important than g, which weakens the theory (Gottfredson, 2003a).
White (2006) is one of the few scholars to question the efforts of Gardner. He suggests that Gardner’s selection and application of the criteria is subjective, and thus flawed. A different specialist would have had different criteria and, consequently, would have arrived at a different set of intelligences. White (2006) went on to say that practical and creative endeavours are no less valuable than being good at more abstract subjects, which suggests that MI theory is less important to children or post-graduate students than is assumed. Warne, Astle & Hill (2018) found that three-quarters of psychology textbooks give disproportionate coverage to theories such as Gardner’s Multiple Intelligences, with 80% containing logical fallacies in their discussion of the topic.
Finally, an issue that is prominent in the field of psychology is that even eminent psychologists present a misleading view of the science of intelligence in academic textbooks, voraciously devoured by the average psychology student (Jarrett, 2018). This serves to mislead the student on what is and what is not good psychology, especially if the media exaggerates the message of the study in question (Ritchie, 2017). Therefore, a more discerning post-graduate student is necessary to counter the potential for misinformation in all academic endeavours, not just psychology.
Constructivists Think It’s About Them
Cognitive psychology as a discipline would have us believe that individuals are born with some cognitive and epistemological equipment. However, there are disputes between the different constructivist theorists as to what these are (Phillips, 1995; Dweck, 2000; Loewen, 2011). Blonigen et al. (2005) differentiate knowledge from epistemological equipment when they say that we construct and acquire our knowledge after birth as a result of our interactions with our environment. In an academic context, cognitive psychology states that learning is an active process (Gagne & Goldsmith, 2011). Human knowledge, whether personal or associated with a field or discipline of study, is constructed internally, within and by each human (Phillips, 1995). Following on from this, in order to understand how human knowledge forms within the cognitive psychology discipline, we need to look at the relationships between thinking patterns and strategies. For example, Elliott & McGregor (2001) examined the thinking processes that students engaged in while writing research papers. They found that “students do not instinctively operate in a metacognitive manner”. Other researchers found that a lack of metacognitive ability negatively impacted student success (Hill & Levenhagen, 1995; Land & Hannafin, 1997). This lack of metacognitive skill supports the need for students to “plan, implement, and evaluate” strategies for learning (Palincsar, 1986). This is in line with the previous point that students could match their learning style to their lecturer’s teaching style if their awareness of their own thinking style were raised. However, Cavanaugh, Grady & Perlmutter (1983) have demonstrated in their research that students use a strategy when required but fail to use it when the requirement is removed. This might suggest there is an alternative unconscious cognitive intention at play instead.
If there were a way to bring to objective awareness the pattern used by each individual student, this could be transferable across teaching subjects.
Metacognition is a construct that many scholars describe as ‘fuzzy’, and it carries a wide range of meanings (Akturk & Sahin, 2011). Cognitive psychology lies at the foundations of metacognition studies (Hart, 1965; Peters, 2007), as does cognitive development psychology (Piaget, 1950; Steinbach, 2008), and social development psychology (Tsai, 2001; Vygotsky, 1962). Hart (1965) was more concerned with how memory was perceived as a predictor of behaviour (Peters, 2007). Conversely, Piaget (1950) was talking about personal information epistemology when he mentioned “knowing the knowing and thinking the thinking” (Steinbach, 2008). Vygotsky (1962) focused on the early years and maintained that consciousness and conscious control were basic contributors (Tsai, 2001). More recently, ‘metacognition’ has been used as an umbrella term incorporating the concepts that relate to an individual’s thinking processes (Leader, 2008).
The self-regulation facet within metacognition has been linked to intelligence (Borkowski et al., 1986; Brown, 1987; Sternberg, 1984, 1986). In his triarchic theory of intelligence, Sternberg refers to these executive processes as ‘metacomponents’ (Sternberg, 1984, 1986), as they control other cognitive components as well as receive feedback from them.
According to Sternberg, metacomponents are responsible for “figuring out how to do a particular task or set of tasks, and then making sure that the task or set of tasks are done correctly” (Sternberg, 1986b, p. 24). This involves planning, assessing and monitoring problem-solving activities. Central to Sternberg’s interpretation of intelligence is one’s capacity to appropriately allocate cognitive resources to tasks, such as deciding how and when a given task should be accomplished. These are important to post-graduate students, and students at all stages of education (Sternberg, 2000). However, what is not clear is how a student knows how to evaluate their task in relation to the context and themselves.
Pursuing knowledge, creating knowledge, and promoting the flow of knowledge, from knowledge to knowledge-set also opens the post-graduate student’s eyes about self-perception and how they view their own knowledge base (Hall, 2008; Touw, Meijer, and Wubbels, 2015). For the post-graduate student, metacognition remains important for the persistence to stay on course with their studies (Touw, Meijer, and Wubbels, 2015). Over the course of study, students are expected to develop a cogent argument for a research topic. There is immense pressure in research to not only select the right topic but to write a study that is both unique and brings value to the academic community (Lynch, 2014). One would assume that study of metacognition not only allows a student to gain knowledge about learning, but also about their thinking and their ability to adapt to the environment (Merlevede, 2005). Kostons and van der Werf, (2015) assert that metacognition is a subject to be taught, and a process to be applied in order to allow pattern recognition in relation to thinking and sense of self.
Metacognitive knowledge, according to Kostons and van der Werf (2015) and Thurstone (1938), is when the learner acknowledges an awareness of learning or perceives his or her style of learning. What a student understands learning to be, and how learning relates to their performance, influences how they learn (Kostons and van der Werf, 2015). Forming coherent thinking strategies requires self-reflection; analysing the strengths and weaknesses of a student’s learning style encourages and reinforces independent study (Loehlin, Lewis & Goldberg, 2014).
The ‘meta’ implies a higher-order thinking about one’s thinking and has two dimensions: metacognitive knowledge and metacognitive regulation. Metacognitive knowledge includes the learner’s knowledge of their own cognitive abilities, (e.g., I have trouble remembering people’s names), the learner’s knowledge of particular tasks (e.g., the ideas in this article are complex), and the learner’s knowledge of different strategies including when to use these strategies (e.g., if I break telephone numbers into chunks I will remember them) (Brown, 1987; Flavell, 1979).
Metacognitive regulation describes how individuals monitor and control their cognitive processes. For example, realising that the strategy you are using to solve a maths problem is not working, and trying a different approach (Nelson and Narens, 1990). Knowing when not to apply the same strategy to a different task, such as writing an essay, will be discussed next.
A theory of metacognitive regulation that is commonly cited in the research literature is Nelson and Narens’ (1994) Model of Metacognition, and it makes two specific assumptions: the first is that the meta and object levels are related in a hierarchical, though asymmetric, manner via metacognitive control and monitoring. A second assumption is that parallel processing occurs between levels.
Our thinking is said to occur at the object level. An example would be a post-graduate student trying to understand the meaning of some text. This goal would be a cognitive strategy at the object level (Nelson and Narens, 1990). The meta level, on the other hand, is where ‘thinking about thinking’ takes place (Nelson and Narens, 1990). From a reading perspective, a student would gauge how well they have understood the text, and re-read it if not very well. This is an example of ‘monitoring’. If happy with their comprehension, they will continue. In essence, based on their monitoring feedback, the student chooses one of those behaviours, which is considered a control process.
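The monitoring/control relationship between the two levels can be sketched as a simple feedback loop. The following is a minimal illustrative sketch only, not part of Nelson and Narens’ formal model; all function and variable names (e.g. `read_passage`, `metacognitive_loop`, the comprehension scores and threshold) are hypothetical choices made for the example.

```python
def read_passage(comprehension_scores):
    """Object level: performs the cognitive task (reading) and returns a
    self-judged comprehension score for this attempt (hypothetical values)."""
    return comprehension_scores.pop(0)

def metacognitive_loop(comprehension_scores, threshold=0.7, max_attempts=5):
    """Meta level: monitors the object level's output and issues a control
    action -- either 'continue' (comprehension is adequate) or re-read
    (loop again), up to max_attempts."""
    attempts = 0
    while attempts < max_attempts:
        score = read_passage(comprehension_scores)  # monitoring: information flows up
        attempts += 1
        if score >= threshold:            # monitoring judgement at the meta level
            return ("continue", attempts)  # control: information flows down, move on
        # control decision: re-read the passage (next loop iteration)
    return ("give up", attempts)

# A student whose comprehension improves with each re-reading:
result = metacognitive_loop([0.4, 0.6, 0.8])
print(result)  # ('continue', 3) -- two re-reads before moving on
```

The asymmetry the model assumes is visible in the sketch: information flows upward as monitoring (the score) and downward as control (the decision to re-read or continue), with the meta level dominant.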
Perkins (1992) defined four categories of metacognitive learners: tacit, aware, strategic and reflective:
‘Tacit’ learners are unaware of their metacognitive knowledge. They do not think about any particular strategies for learning and merely accept if they know something or not.
‘Aware’ learners know about some of the kinds of thinking that they do – generating ideas, finding evidence, etc. – but their thinking is not necessarily deliberate or planned.
‘Strategic’ learners organise their thinking by using problem solving, grouping and classifying, evidence seeking, decision making, etc. They know and apply the strategies that help them learn.
‘Reflective’ learners are not only strategic about their thinking but also reflect upon their learning whilst it is happening, considering the success of any strategies they are using and revising them as appropriate. For an example of this categorisation in action, see the work of Harvey (2000), which demonstrates how educators can teach thinking strategies so that students become engaged.
Metacognition promotes the monitoring of our cognitive processes, which is important from a post-graduate student perspective in an academic context (Flavell, 1976). Most theoretical accounts of metacognition distinguish between two main components: knowledge of cognition and regulation of cognition (Baker, 1989; Schraw & Moshman, 1995). Knowledge of cognition refers to what we know about our own cognition, and typically includes three components (Brown, 1987; Jacobs & Paris, 1987). The first, declarative knowledge, includes knowledge about ourselves as learners and about the factors that influence our performance. For example, adult learners are aware of the limitations of their memory and can plan an appropriate action based on this understanding. However, a person’s capacity to know their intention in the moment, and their awareness of this intention, will affect their response.
Procedural knowledge refers to strategies and other procedures, and conditional knowledge tells us how and when to use them. However, the act of using a strategy is itself a strategy, and not using a strategy is also a strategy, so the act of choosing a strategy could be considered a meta-strategy, with a different set of intentions and an awareness of the outcome sitting one level higher than the (outcome of the) strategy itself. This point is somewhat validated by the regulation of cognition, with its planning, monitoring and evaluation aspects (Jacobs & Paris, 1987; Kluwe, 1987). Previous research suggests, rather unsurprisingly, that experts are far superior to novices, largely due to more effective planning, and in particular the global planning that occurs before a task begins (Bereiter & Scardamalia, 1993). An expert has an experiential thinking style that allows them to automate the initial steps of the process, making planning a more unconscious act based on prior experience. When the same person monitors their ability to learn (knowing whether they have understood or will understand something), their capacity to perform this monitoring will depend upon their level of awareness, which could suggest why some adults are skilled at learning but not good at monitoring (Koriat, 1994; Pressley & Ghatala, 1990). Finally, evaluation refers to the appraisal of the regulatory processes. How one re-evaluates a personal goal, and the ramifications of the short- and long-term decisions around this goal, will also depend upon an individual’s self-awareness. It has already been discussed that people in general are not good at recognising their own incompetence (Kruger & Dunning, 2000); conversely, competence is a prerequisite for judging one’s relative performance.
This leads naturally to the ‘double curse’: those less competent in a specific domain will also lack the capacity to recognise what constitutes a competent performance in that domain (Kruger & Dunning, 1999). The knowledge of cognition and regulation of cognition appear to be related in both children and adults, which makes this a useful comparison for post-graduate students in an academic context (Schraw & Dennison, 1994). What emerges from the literature is that metacognition is not necessarily related to cognitive abilities. Neither Pressley and Ghatala (1988) nor Yan (1994) found significant relationships between monitoring proficiency and measures of verbal ability, and Schraw, Horn et al. (1995) did not report a significant relationship between knowledge and regulation of cognition scores. This pattern of findings suggests that traditional measures of ability are related to performance on cognitive tasks such as reading comprehension, but are unrelated to the regulation of performance. However, there are links between deliberation and superior cognitive performance. Deliberation is related to, and can even cause, differences in domain-general cognitive abilities such as intelligence and attentional control (Stanovich, 2012). Deliberation is also thought to be an essential component of rational thinking (e.g., reflectiveness and active open-minded thinking; Baron, 2008).
However, metacognitive abilities are not necessarily naturally endowed in graduate students (Wilson & Conyers, 2016). Successful learners employ a variety of metacognitive skills to improve their learning (Garner & Alexander, 1989; Pressley, Borkowski, & Schneider, 1987), yet metacognition does not appear to be strongly related to measures of intellectual ability (Schraw, Horn et al., 1995). Equally, a lack of metacognition is not indicative of lower cognitive performance, although in the middle-aged subgroup there is a consistent tendency towards lower scores (Tavares, 2018). Flavell (1979) suggested there was “too little” cognitive monitoring by adults, and that adults would benefit from being taught how to monitor their thinking in order to “make wise and thoughtful life decisions” (1979, p. 910).
What emerges from a metacognitive perspective is a person’s ability to know, in the moment, how their thinking is influenced by the relationship between their intention and awareness. Where knowledge can be conceptualised as either concrete (i.e. facts and procedures) or abstract (i.e. concepts and principles), skill development is often domain-specific (Fischer, 1980). Although it can be argued that abstract thinking is better than concrete thinking, abstract thinking can itself become a limiting intention, thus contradicting that point in context. Where a child has been taught a metacognitive strategy for learning, he has been taught a process. From a post-graduate student’s perspective, if the lecturer knew how the student’s thinking were constructed, they could create a strategy specific to the student’s thinking style rather than a generic one, which brings into question whether the strategy would be domain-specific or domain-general, depending on the level of awareness the student has of their thinking in context. The role of the lecturer for post-graduate students is thus an important one (Wilson & Conyers, 2016). Schraw (2009) highlights the difficulty of measuring metacognition and suggests that no means exists to connect to, and measure, metacognitive processes simultaneously. Tobias and Everson (2002) reinforce this point when they explain that observation and self-report tools are insufficient for measuring metacognition. In short, there is no single tool that can measure the many facets of metacognition (Akturk & Sahin, 2011). In support of this perspective, a study by Ndethi (2017) found that engineering students were not able to adequately describe their strengths, weaknesses and strategies regarding their metacognitive awareness, which indicates a need for greater focus on metacognition as a process of learning.
From an adult perspective, it would be interesting to understand how thinking is habituated over time (Piaget’s schemata), whether certain patterns of thinking develop as a consequence of these habituated patterns, and thus how aware an individual is of these patterns, with a view to changing them should awareness increase. Research on metacognition for adults focuses on mnemonic cues and memory, utilising heuristics that allegedly operate below consciousness in order to yield a subjective feeling (Koriat, 2008). This parallels the memory-effort heuristic in children’s judgments of learning, which leads to the question: at what age do children recognise this heuristic and its inverse result, that the more time they spend learning an item, the less likely they are to recall it later? In other words, easily learned items are better remembered than items the child found difficult to learn (Koriat, 2008). Ndethi (2017) found that when a post-graduate student attempted to describe their strategy for learning, it seldom reflected their perceived weakness in the subject. If one cannot adequately describe one’s weakness, how can one be expected to describe the relevant metacognitive strategic remedy? For this, the emphasis on learning must be set aside and an emphasis placed instead on observing an individual’s potentially habituated thinking style. This could revolutionise metacognitive development (rather than teaching) in universities.
Specifically about General Domains
There is an ongoing debate regarding the extent to which metacognition is a general or a domain-specific phenomenon (Veenman and Spaans, 2005; Veenman and Verheij, 2003; Schraw and Nietfeld, 1998; Schraw, 1997; Schraw et al., 1995). It would seem that there are obviously specific domains and obviously general domains, and that some will not cross over: writing an essay, for example, does not depend on knowing Pythagoras’ theorem. Otero (1998) discusses ways in which metacognition works within science, while Carr and Biddlecomb (1998) discuss the role of metacognition in mathematics. The domain-general hypothesis (Schraw, Dunkle, Bendixen, & Roedel, 1995) assumes that metacognitive knowledge is qualitatively different from other kinds of knowledge within a domain and might span multiple domains in a way that domain-specific knowledge does not. To continue the question of awareness and intention, a person’s self-awareness in the moment is meta to the tacit knowledge within any domain. One can be aware or unaware of one’s intention in any domain, and as such the awareness of this awareness (meta-awareness) neither affects nor is impacted by the domain. Finally, Schraw (1998) describes metacognition as a multi-dimensional set of general skills, rather than being domain-specific.
Critically Thinking about Metacognition
Flavell (1979) and Martinez (2006) suggest that critical thinking should be subsumed under metacognition. Flavell argues that the definition of metacognition should include critical thinking: “critical appraisal of message source, quality of appeal, and probable consequences needed to cope with these inputs sensibly” (p. 910). Martinez defines critical thinking as “evaluating ideas for their quality, especially judging whether or not they make sense.” He sees this as one of three types of metacognition: metamemory and problem solving being the other two (p. 697). Kuhn (1999) also likens critical thinking to metacognition.
However, Schraw et al. (1995) see both metacognition and critical thinking as being subsumed under self-regulated learning, which they define as “our ability to understand and control our learning environments” (p. 111). Self-regulated learning requires metacognition, motivation, and cognition, which includes critical thinking (2006). Thus, critical thinking is supported by metacognition to the extent that monitoring the quality of one’s thought makes it more likely that one will engage in high-quality (critical) thinking (Sharma and Hannafin, 2004). However, from the literature it should be apparent that Laske’s Stage 2 individual would not be capable of performing this thought process: not only because of the Dunning-Kruger effect, but also because their lack of complex thinking would limit their capacity to monitor their own thought, with ramifications for the interpretation of their behavioural output.
Definitions of critical thinking vary, but common elements include skills such as analysis, evaluation and inference (Facione, 1990).
Critical thinking also utilises dispositions, including open-mindedness, inquisitiveness, flexibility, perseverance, reason, well-informedness, and the capacity for multiple perspectives (Bailin et al., 1999; Ennis, 1985; Facione, 1990; Halpern, 1998; Paul, 1992). Critical thinking is often hypothesised to be domain-general, in that it can be applied to any subject in any field. However, there is some evidence to suggest the possibility of domain-specificity (Dwyer, Boswell & Elliott, 2015). Individuals who are proficient in a specific domain construct relevant knowledge and so develop their thinking, albeit within that domain (Chi, Glaser, & Farr, 1988; Kotovsky, Hayes & Simon, 1985), and are thus, unsurprisingly, better able to integrate complex information than those who are not (Pollock, Chandler & Sweller, 2002; Sweller, 2010).
These same experts use logic rather than intuition (Kahneman & Frederick, 2002) and are capable of avoiding simple errors, such as the gambler’s fallacy, which novices are prone to make. The same experts will also perform better on problem-solving, informal reasoning and critical thinking tests specific to their field (Cheung, Rudowicz, Kwan, & Yue, 2002; Chiesi, Spilich, & Voss, 1979; Graham & Donaldson, 1999; Voss, Blais, Means, Greene, & Ahwesh, 1986). It could be argued that the individual’s capacity is greater due to a certain amount of automaticity and a greater awareness of the component parts of their thinking in context, allowing them to evaluate their thinking in the moment, or ‘on the fly’.
Expanding on this theme, when teaching critical thinking in the classroom as a way of learning a particular topic, such as English literature, the teacher will offer guidance on how to analyse and evaluate plots, characters and settings so that students can infer writing styles and themes. If the same class of students had a history class in which the teacher chose to teach them facts and dates, the general consensus from a metacognition perspective is that the students will become more proficient in the English class than in the history class because of the way they were taught, yet be unable to transfer these skills to their history class (Ennis, 1989). Critical thinking skills are “learning objectives, without specific subject matter content” (Abrami et al., 2008). If academia wishes to impart critical thinking skills to students, it must make clear that critical thinking objectives are separate from the course content (Abrami et al., 2008) and allow students to develop their awareness of their thinking in context.
The domain-general nature of critical thinking (CT) might not be realised until students have been trained in CT purposely, and training in CT skills unsurprisingly yields better CT performance (e.g. Gadzella, Dean, Ginther & Bryant, 1996; Hitchcock, 2004; Reed and Kromrey, 2001; Rimiene, 2002; Solon, 2007).
A leading cognitive constructivist, Loewen (2011), states that ‘thought’ rather than action forms the basis of the situation: “We have constructed this situation for ourselves due mainly to the belief in linear history and the telescoped understanding of our mytho-poetic narratives in the West.” [PAGE NUMBER] Loewen is referring to the power we give to the meaning of the words we use in our interaction with the real world, and to how our cognitive constructs have greater influence than the interaction itself. Korzybski (1933) was advocating a constructivist perspective when he warned that the “is” of identity presents a dangerous linguistic and semantic construction that maps to false conclusions; he called it “the Is trap” (1951). Identity as “sameness in all respects” does not exist, and a statement such as “that student is lazy” falsely maps reality. Korzybski uses the word ‘unsanity’ to describe the errors of identification in this context. In Science and Sanity (1933), Korzybski explains that one constructs one’s reality based not on what is real but on one’s map of perceived reality, and that this model is executed largely unconsciously. From an academic perspective, students are given their new reality by the post-graduate process. They map their existing experience onto it and thereby create a new perception of their reality, which might be ‘wrong’ if it is based on a construction from an under-graduate experience, especially one in a different country.
Although cognitive constructivism asserts that we can predict how relationships form with the outside world, Dweck (2000) argues that it is one’s approach to thinking that allows for a level of freedom and unpredictability, which is useful in the post-graduate process. In testing predictions, the personal construct may align with social or cognitive constructs as one forms the lens for being in the world (Ajzen, 2005; Loewen, 2011). One particular social construct is academia, which involves important social interaction. If one were able to apply specific and intentional methods to the construction of one’s thinking, it might result in better outcomes for the individual and their colleagues in context (Blonigen et al., 2005). This hints at how contribution and collaboration can take place within the academic community, while recognising that such applications also exist within other contexts. Touw, Meijer and Wubbels (2015) recognise the need for a balance between logical and creative thinkers. In this thesis, it is argued that, generally, people are not aware of their thinking in the moment, and the literature reinforces this perspective to some degree (Hayes, 2015; Kallio, Virta, & Kallio, 2018). The problem in the academic context lies with the educational modelling supported by the concept of social development (Dunn, Griggs, Olson, Beasley and Gorman, 2010). Bodrova and Leong (2001) comment that there is a systemic failure to explore thinking and awareness, as students seek to download information instead. Cleary, Callan, and Zimmerman (2012) comment that with awareness comes a higher level of thinking, where the individual recognises their awareness and seeks to repeat the action in support of synthesising the thinking process. Dennett (2017) sees this as a powerful leap from previous theory, where building upon thinking as a construct is the next step.
The essence of Personal Construct Theory is that the lens and filter applied to the individual’s experience also relate to cognitive ability (Kelly, 1955). Kelly refers to individuals as ‘naïve scientists’ with their own perceptions of the world. This personal construct, created by the individual, sets the limits of their experience. Individuals form personal constructs based upon the filter, and how the filter is set can be deemed loose or tight (Kelly, 1955). How open the person remains to experience will also relate to how loosely or tightly they restrict their perceptions of their world. According to Kelly (1955), constructs also carry a literal and symbolic interpretation of how their context is described. One student will associate their construct of a lecturer with ‘arrogant’ because this is how they describe their new under-graduate experience of having a university lecturer, whilst others will have different but equally valid constructs for ‘lecturer’. A loose personal construct, in terms of thinking, limits areas of experience: one may not understand one’s relationship to culture and so miss the inner context of the meaning derived from culture, which means one does not reach that level of thought or analysis about culture (von Glasersfeld, 1991). Once the parameters by which the construct is created are brought into awareness, for example white, male, wealthy, though not necessarily consciously, the constraints on information processing are changed. Kegan and Lahey (2001) comment that language also sets confines, or the ways in which people attribute value to constructs. Those with a tight view will think with precision and seek situations where constructs are black and white. Those within tight confines also think within strict rules and concepts, attributing little value to uncertainty or creative thought.
They believe X + Y will always equal Z, and the differing perceptions or opinions of others will not modify this belief (Kegan, 1994; Laske, 2009; Berger, 2002). This pattern is generally found at the lower stages of Kegan’s or Laske’s systems, as it demonstrates that the student has limited meaning-making due to their fixed perspective. Kegan’s ‘strict rules’ link to Kelly’s (1955) personal construct theory of personality, where ‘habitual categories’ play a profound role in structuring everyday experience, suggesting that the process of meaning-making does not follow from personality, it is personality: “Cognitive processing tendencies may predict daily emotion and behaviour even in the absence of correlation or interaction with traits.” (Robinson, 2007, p. 353).
Using the REP test devised by Kelly (1955), Crockett investigated the complexity of a person’s construct system. The results showed, in simple terms, that the older the individual became, the more abstractly and complexly they were capable of thinking (Crockett, 1982). This suggested that a person’s degree of complexity could determine their capacity to apply personal constructs to others. In other words, people with high cognitive complexity are more able to see variety amongst people’s thinking and are better able to predict their behaviour (Crockett, 1982). Those with low cognitive complexity, on the other hand, are more likely to place people in one of two categories, incapable of seeing the variety.
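Grid-based measures of cognitive complexity in the REP-test tradition can be caricatured computationally. The sketch below is a simplified illustration, not Kelly’s or Crockett’s actual scoring procedure: it treats a repertory grid as rows of ratings (one row per construct, one column per person rated) and, on the assumption that identical rows indicate constructs being used interchangeably, scores differentiation as the proportion of construct pairs whose rating patterns differ.

```python
from itertools import combinations

def differentiation_score(grid: list[list[int]]) -> float:
    """Proportion of construct pairs (rows) with non-identical rating patterns.

    Higher values indicate more differentiated (complex) construing; identical
    rows suggest two constructs are functioning as one. This is a deliberately
    crude stand-in for formal repertory-grid complexity measures.
    """
    pairs = list(combinations(grid, 2))
    if not pairs:
        return 0.0
    distinct = sum(1 for a, b in pairs if a != b)
    return distinct / len(pairs)

if __name__ == "__main__":
    # Three constructs applied to three people (ratings are invented).
    undifferentiated = [[1, 2, 3], [1, 2, 3], [1, 2, 3]]
    differentiated = [[1, 2, 3], [3, 2, 1], [2, 2, 2]]
    print(differentiation_score(undifferentiated),
          differentiation_score(differentiated))
```

A low-complexity construer in Crockett’s sense would produce a grid like the first example, collapsing people into few categories; a high-complexity construer would produce something closer to the second.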
From an academic perspective, studies of under-graduate students in the United States found that those with high cognitive complexity were lower in anxiety and instability, and were also inclined to demonstrate more than the conventional five factors of personality (OCEAN). Conversely, those with lower cognitive complexity displayed fewer than the five factors, implying that they are less complex emotionally (Bowler, Bowler, & Phillips, 2009; Lester, 2009). In Kelly’s theory, cognitive complexity is the more desirable and beneficial cognitive style, as the more complex one’s thinking, the better able one is to predict behaviours in others. However, this raises the question of whether ‘less’ and ‘more’ complexity are, in fact, two ‘styles’ of thinking. Taking Kelly’s perspective on board, being able to anticipate or predict what others do gives us a guide for our own behaviour. Continuing in academia, a study of first-year under-graduates in Canada found that those with higher cognitive complexity accommodated the pressures of college life better than those who scored lower (Pancer, Hunsberger, Pratt, & Alisat, 2000). There was even evidence to suggest that a student with more than one cultural influence growing up scored higher in cognitive complexity than those with only one (Benet-Martinez, Lee, & Leu, 2006). Attributional complexity is where an individual attributes another’s behaviour to more complicated and sophisticated causes, and is defined as the extent to which people prefer complex rather than simple explanations for social behaviour. People who score highly in attributional complexity have demonstrated greater empathy and understanding toward other people (Foels & Reid, 2010; Reid & Foels, 2010). However, paradoxically, the assumption of complexity is itself a less complex perspective, as it is a naïve and simplistic attribution that leaves too much unrecognised.
As Kelly attempted to map personality, he neglected to map or measure awareness and responsiveness, or simply did not know that he could. How would he cater for awareness and choice in his REP system? For those exhibiting attributional complexity, the question of choice is also important: their attribution could be a fundamental misinterpretation of the thinking and behaviour of the other person if they themselves are lower on the social-emotional and cognitive complexity scales of Kegan and Laske, which immediately diminishes the findings of Foels and Reid (2010). These would be interesting avenues for investigation, to determine whether what Foels and Reid understand complexity to be is the same as what Kegan and Laske understand it to be, as it appears that high scores in their interpretation would be average scores for Laske. They also do not investigate an individual’s intention or awareness of their complexity, and thus do not measure a person’s capacity to choose their response in the moment.
From the personal construct, identity is formed (Bannister and Fransella, 2013). The human condition is predicated upon powerful social interaction (Ajzen, 2005). One would assume this happens because people need and have emotional connections (Huang et al., 2014). However, Huang is pointing to a specific level of development in adult thinking, where a person would see themselves in relation to others and externalise their locus of evaluation. Thinking is happening right now for all people and yet how the thinking process takes shape is directly aligned to the personal construct of one’s psychology (De Fruyt et al., 2006). Academia is a social construct and a social interaction, which suggests that for the post-graduate student, university is not a thinking context, but a ‘being’ context (Burr, Giliberto, and Butt, 2014; Loewen, 2011).
As mentioned, Laske (2009) hypothesises that as we move through the stages and increase our cognitive complexity, we ‘lose’ our previous (constructed) self and could potentially suffer from the perceived loss. However, this also allows us to pursue our humanity as we construct our new self at a higher level (Loewen, 2011). From a post-graduate perspective, thinking is not learning, but rather a present state of ‘being’ (Mischel and Shoda, 1995). Thinking in this context does not contribute to the student’s personality. It would be interesting to understand the extent to which each post-graduate student constructs themselves in context and how the context affects their construction.
Emotions Aren’t Real
“Emotions are constructions of the world, not reactions to it.” – Feldman-Barrett, 2017
There is evidence that individuals can be taught to recognise apparent emotions in photographs of people’s facial expressions (Elfenbein, 2006). However, this is directly refuted by a theory established in 2010 by Feldman-Barrett and colleagues; Feldman-Barrett states in her book How Emotions are Made that all things are constructed, including emotions. Not emotional intelligence, but actual emotions.
Feldman-Barrett gives the example of meeting a snake in the wild and running away immediately. At no point did she orchestrate the categorisation of the experience that culminated in fear and running away; it “just happened” for her (p. 118). Feldman-Barrett states that the stimulus-response brain is a myth and that brain activity is prediction and correction, which means we construct emotional experiences outside of awareness, to minimise prediction error, as this fits better with the operation of the brain’s architecture. She concludes:
“Simply put: I did not see a snake and categorize it. I did not feel the urge to run and categorize it. I did not feel my heart pounding and categorize it. I categorized sensations in order to see the snake, to feel my heart pounding, and to run. I correctly predicted these sensations, and in doing so, explained them with an instance of the concept “Fear.” This is how emotions are made.” (Feldman-Barrett, 2010, PAGE NUMBER)
The brain constructs meaning by accurately predicting and adjusting to incoming information, and to make meaning in this context is to go beyond the information given (Feldman-Barrett, 2006b, 2017). Incoming sensations are sorted so that they are contextually actionable, and thus meaningful, based on prior experience. Feldman-Barrett is saying that the emotion you expect to experience will be experienced as it is constructed using the same neuroanatomical principles for information flow within the brain. This suggests that, to some extent, we are victims of our experience.
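Feldman-Barrett’s ‘prediction and correction’ account echoes predictive-processing models, in which a prediction is repeatedly nudged toward incoming sensory information until the prediction error becomes negligible. The sketch below is a deliberately minimal caricature of that loop, not part of her theory: the function name, learning rate and tolerance are all illustrative assumptions.

```python
# Minimal caricature of prediction-error minimisation: the 'brain' holds a
# prediction, compares it with the sensory input, and corrects the prediction
# by a fraction of the error until the error falls within tolerance.

def minimise_prediction_error(prediction: float, sensory_input: float,
                              learning_rate: float = 0.5,
                              tolerance: float = 0.01,
                              max_steps: int = 100) -> tuple[float, int]:
    """Return (final prediction, number of correction steps taken)."""
    steps = 0
    while abs(sensory_input - prediction) > tolerance and steps < max_steps:
        error = sensory_input - prediction      # prediction error
        prediction += learning_rate * error     # correction
        steps += 1
    return prediction, steps

if __name__ == "__main__":
    final, steps = minimise_prediction_error(prediction=0.0, sensory_input=1.0)
    print(f"converged to {final:.4f} in {steps} steps")
```

Once the error is within tolerance, the prediction stands in for the percept, which is the computational analogue of Feldman-Barrett’s claim that a sufficiently corrected prediction becomes an experience.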
The ideas Feldman-Barrett has for emotion regulation tie in with the principles of the cognitive constructivist perspective and social constructionism as outlined in this section, and with post-graduate studentship, in that there is a cultural agreement on the meaning-making of ‘emotion’ which is perpetuated from the lecture theatre (Feldman-Barrett, 2017). Feldman-Barrett also agrees that the classical view of emotion does not take into account an individual’s intellect, in the sense of one’s capacity to articulate one’s state. Essentially, an individual at the higher levels of Laske’s scale (2008) will have a more nuanced view of their state and thus a better vocabulary to articulate it in the moment. Feldman-Barrett goes on to say:
someone with a limited education might use “anger” to describe five states, whereas a well-read and educated person might have five synonyms at their disposal and is thus able to refine their definitions (Feldman-Barrett, 2017, PAGE NUMBER).
However, the extra synonyms might only be useful if the person with whom one is talking also has five synonyms for ‘anger’. If an individual categorises the sensation as “anger”, they are effectively making meaning that says “anger is what caused these physical changes in my body”, when in reality the concept is created by the meaning, and the brain constructs instances of anger. This goes some way towards explaining the lower-level automated reactions of individuals, as espoused by Kegan and Lahey (1994) and Laske (2008). The thinking has them (Kegan’s Subject/Object): they do not have the self-awareness to create a different response, and Feldman-Barrett is saying this is borne of experience over time.
Feldman-Barrett (2015) uses the word “fingerprint” to describe the collection of bodily movements that represent such nominalisations as ‘sadness’ or ‘anger’. She calls them a unique identifier for that particular state, a neural signature, each one different to the next. This description encompasses the body and mind identifiers for said state and helps to establish the mind/body as one system in symbiotic harmony.
Feldman-Barrett’s explanation of how an emotion occurs is worth including here in full. The reason will be discussed afterwards:
An internal model runs on past experiences, implemented as concepts. A concept is a collection of embodied, whole brain representations that predict what is about to happen in the sensory environment, what the best action is to deal with impending events, and their consequences for allostasis (the latter is made available to consciousness as affect). Unpredicted information (i.e. prediction error) is encoded and consolidated whenever it is predicted to result in a physiological change in state of perceiver (i.e. whenever it impacts allostasis). Once prediction error is minimized, a prediction becomes a perception or an experience. In doing so, the prediction explains the cause of sensory events and directs action; i.e. it categorizes the sensory event. In this way, the brain uses past experience to construct a categorization [a situated conceptualization; (Barsalou, 1999; Barsalou et al., 2003; Barrett, 2006b; Barrett et al., 2015)] that best fits the situation to guide action. The brain continually constructs concepts and creates categories to identify what the sensory inputs are, infers a causal explanation for what caused them, and drives action plans for what to do about them. When the internal model creates an emotion concept, the eventual categorization results in an instance of emotion.
Where Feldman-Barrett’s prediction becomes an experience, there is an argument for the perspective that the experience is determined by the individual’s capacity to create said prediction, and thus the greater one’s ability to predict the ramifications of one’s decisions, the higher one’s self-awareness. If this can only be accomplished retrospectively, then the individual has low awareness. An interesting question arises from Feldman-Barrett’s paragraph above: when an individual predicts an eventuality, is s/he doing so based on emotion or cognition, or both? And does this change once one moves beyond Kegan’s or Laske’s Stage 4?
If we consider Feldman-Barrett’s (2017) perspective that brain activity is prediction and correction, then it follows that the greater one’s awareness of one’s thinking intention in the moment, the more capable one is of predicting and correcting in one’s immediate response, and thus the higher one’s level of self-awareness. Ultimately, Feldman-Barrett’s theory of constructed emotion allows scientists to consider, using new conceptual tools, how a human nervous system constructs a human mind. It would then be interesting to see how far this theory can extend into the construction of self in context.
Finally, the theory of constructed emotion described by Feldman-Barrett (2006) offers a rationale for an intervention: a post-graduate student whose contextual awareness is predominantly ‘External’ in their locus of evaluation could be given cognitive exercises that increase and improve their ‘Internal’ awareness, creating new synaptic pathways that leave the physical brain differently ordered and so result in future choice in their responding and behaving. It also offers support to a meta-description of data sorting and filtering for thinking and behaving, called meta-programmes, which will be discussed next.
This section has illustrated the myriad approaches to what intelligence is and means to various researchers. It has touched on meaning-making and how the semantic nature of words impacts the research, as though there is some form of psychological essentialism. The meaning constructed by one researcher becomes cemented in the lexicon of the next generation and, as such, can convince the perceiver that there is some profound reality to the meaning and word within the context of psychology (Barsalou, Wilson, & Hasenkamp, 2010; Medin & Ortony, 1989). The main consequence of this essentialising is that researchers ignore the influence of context.
The pervasiveness of essentialism has shaped the thinking of Western psychology over the last century (Feldman-Barrett, Mesquita, & Smith, 2010). Models of the mind have become fractured as researchers and psychologists have developed an assumption that emotion, memory, the self, attitudes, temperaments, personality traits and more, are different entities with distinct principles and causes (Bruner, 1990). By focusing on a mental state or behaviour in isolation, it is easy to miss its embeddedness in a larger system that contributes to its nature. This idea is reinforced by Gendron and Barrett (2009), who say that states, traits and behaviours are not entities, but events constructed out of a more basic set of processes. This review is not aiming to discover what causes thinking or feeling, but to discover the variety of ways humans are aware of their thinking in response to the environment in the moment.
A meta-program is the psycho-neurological algorithm that informs how we sort, classify, evaluate and prioritise both internally and externally generated sensory data. It allows us to create a data-diminished map of the world beyond our senses, by which we create a personal perceptual model of that world and by which we seek to navigate it. – Geoff Dowell (2018)
Bateson first explored the idea of “going meta” in Steps to an Ecology of Mind (1972) where he related the “meta move” to almost every human endeavour to uncover the structure of meta communication.
Meta-programmes can be found in the field of Neuro-Linguistic Programming (NLP), a model of being in the world that was developed from cognitive psychology and linguistics (James and Woodsmall, 1988). NLP was developed in the early 1970s by a computer scientist and a linguist, Bandler and Grinder (1975) respectively, who defined it as ‘the study of what works in thinking, language and behaving’ (Knight, 1995). NLP has come under a great deal of scrutiny in the past two decades and has been dismissed by many as pseudo-science, owing to the lack of peer-reviewed evidence for the efficacy of its techniques and the absence of a generally accepted definition, even though it has the potential to offer a comprehensive cognitive-behavioural approach (Liotta, 2012). In a systematic review, Sturt (2012) found that: ‘the very fact that there is no agreed definition of NLP indicates how little evidence we have of its benefits.’ Sturt concluded: ‘This systematic review demonstrates that there is little evidence that NLP interventions improve health-related outcomes. The study conclusion reflects the limited quantity and quality of NLP research’ (Sturt et al., 2012b, p. 762). As a result of her review, Sturt could use only 10 out of 1,459 NLP citations. The low quality of NLP publications was also observed by Witkowski (2010) in his review of the field. A Delphi Poll is favoured when the views of experts are required, when the subject matter is complex, and when a hierarchical structure of opinion is necessary. NLP was included in a Delphi Poll (Norcross, Koocher, & Garofalo, 2006) assessing psychologists’ opinions on what they considered to be discredited psychological methods, with NLP scoring 3.87, where 4 = probably discredited. Consequently, this review sets aside the wider field of NLP and focuses on what is essentially a facet within it, one separate enough to warrant individual attention.
Although many people claim to have developed meta-programmes first, priority cannot be firmly established in the literature (Merlevede, 2005). It is generally believed that meta-programmes originated with Cameron-Bandler in the early 1980s, who discovered that the NLP techniques she was demonstrating sometimes did not work; the reasons why they did not work (based on how the audience received and sorted the data) formed the original list of meta-programmes (Hall & Bodenhamer, 2006). Hall and Bodenhamer identify at least 50 meta-programmes within five broad categories in their book, Figuring Out People (1997). Maus (2011, p. 23), in his book ‘Forget About Motivation’, renames meta-programmes ‘thinking preferences’ and defines them thus:
According to the literature, there are some common traits amongst the definition of what meta-programmes are and what they do (See Appendix 2 for a full list):
People use specific language and behaviours when communicating, and when one knows what to look for, meta-programmes can be identified (Cook-Greuter, 1999). If the meta-programmes of two individuals are not matched whilst in conversation, there will be a certain amount of misunderstanding or disagreement (Lawley, 1997). When communication is impaired, social interactions are also impaired, especially within a social context such as academia (McCroskey, 1977). For example, a person who is predominantly ‘Visual’ will use ‘Visual’ language, such as: ‘I see what you are saying’, whereas a person who is predominantly ‘Auditory’ would use language such as: ‘That rings a bell’. The mismatch in this one meta-programme can cause a mismatch in language, which can be a barrier to communication (Hall, 2000). However, it is not simply a mismatch in communication, as suggested by Hall, but a mismatch in meaning-making. In this regard, this is not an NLP issue but a developmental issue, in accordance with Laske (2008) and how he differentiates between levels of adult development based on sense- and meaning-making. Within the NLP framework, meta-programmes are the basic building blocks of our thinking, the filters we unconsciously use to determine what we pay attention to (James and Woodsmall, 1988, p. 92), and, as mentioned in the previous section, from a cognitive perspective any conceptual category that requires a human perceiver is valid (Feldman-Barrett, 2017).
From within the NLP arena, meta-programmes determine the form or structure of our thinking, the ‘how’ we think rather than the ‘what’, and they exist at a level that is above, or ‘meta’ to, our thinking (Hall & Belnap, 1999). Beddoes-Jones (1999) refers to meta-programmes as ‘thinking styles’; however, this does not correctly define them. Her intention is clear in that she proposes each meta-programme is a thinking style; however, individually they are not styles per se. They are habituated patterns of sorting and prioritising sensory data in response to stimuli, and as such offer no ‘style’ until they are combined in various ways. The ways in which the 50 meta-programmes combine produce very different thinking and behaving outcomes for each person, and as such a specific combination could be considered a thinking style, as governed by the person’s unconscious intention in the moment. Meta-programmes have been linked to various elements of psychology over time and are not without psychological foundation. See Table 1.3 for examples of important meta-programmes for post-graduate students.
Table 1.3: Examples of Psychological Foundations of Meta-programmes
Other noteworthy parallels exist between the notion of ‘schemata’ and the idea behind meta-programmes. Rumelhart and Norman (1983) describe a ‘schema’ as a way of guiding our actions. They suggest that we hold schemata for activities in general, from using a PC to tying our shoelaces and interacting with our children. Piaget (1962) argued that to understand cognitive development, we must understand schemata, as they can change over time as we experience new events. These new situations alter our mental representations of, and beliefs about, the world (Korzybski, 1951).
If we consider the alignment of meta-programmes from a neuroscience perspective, investigations of cognitive processes have linked approach motivation [‘Towards’] and avoidance motivation [‘Away From’] with left and right brain activation using limited behavioural measures (e.g., Friedman and Förster 2005a). See Appendix 2 for a full list of meta-programmes.
From a post-graduate perspective, a student who is predominantly Options-oriented will have trouble writing their thesis in a structured way, and will also struggle to stay focused on one thing at a time, instead preferring to oscillate between topics or subjects within their thesis. This has implications not only for their ability to write, but also for their time-keeping and for ‘Procedures’ (being the optimal meta-programme for Ph.D. students).
A supervisor of such a student would need to rein in the student’s thinking, get them to focus on a particular task by a particular time (a what-by-when approach), and demonstrate their movement along this path. For this to happen successfully, the supervisor would need to be predominantly ‘Procedural’ in their thinking, or at least recognise the pattern within their student early enough to effect change.
Since meta-programmes are unconscious and habituated (Hall, 2005), they will be context specific. However, once they are brought into awareness, they can be re-habituated to become a choice:
‘We often move through the full range of each of the categories of the meta-programmes as we go through our day’ – James & Woodsmall (1988: p97).
Hall (2000) sees meta-programmes as state-dependent and as a multi-dimensional holarchy that can be changed and developed over time. In doing so, Hall dismisses the notion that personality is fixed, because he sees meta-programmes changing depending on the emotional state of a person in context, especially if the person is experiencing emotional duress at the time (Georges, 1996). If our emotional response is unconscious, then we can say we are subject to it (Kegan, 1992). Development involves moving subjective experience to objective analysis, which, as mentioned, is the thrust of Kegan’s (1994) work.
The above paragraph suggests that personality is not fixed. This leads to the understanding that personality, as seen through the lens of meta-programmes rather than other methods, will change over time owing to a change in one’s meta-programme preferences. If we are products of our environment (as per social constructionism), there will be a difference in our thinking and behaving from one year to the next (Merlevede, 2003). Brunswik (1955) emphasised that psychology should pay as much attention to the environment as it does to the individual, as the environment will impact the individual’s construction of self based on their emotional response to it; or, in Barrett’s terminology, based on their construction of their emotional response to it.
The unique combination of meta-programmes allows each person to create their own model of the world (Hall, 2000). This has been described as ‘the map is not the territory’ (Korzybski, 1951), meaning that each person will create a unique map of their intellectual and physical environment according to the building blocks of their thinking, which in turn will depend on how complexly they can interpret their ‘territory’. Aligned to this is Barrett’s (2011) assertion that it is metabolically efficient for the physical brain to implement an internal model of the world with constructed concepts.
Personality psychologists are increasingly looking at individuals’ ‘characteristic adaptations’ in terms of, for example, ‘values and beliefs’, ‘cognitive schemata and styles’, and ‘coping strategies’—all of which vary within individuals according to context (McAdams and Pals 2007). It could be argued that meta-programmes contribute to these schemata. What is interesting about the unpacking of people’s values, beliefs or cognitive schemata is that it is not often undertaken from the perspective of cognitive complexity.
In a study that aimed to determine whether a particular combination of meta-programmes could be associated with three distinct psychoneuroimmunology archetypes, Daniels (2010) found that a clear description emerged of each archetype’s profile, in which different meta-programmes combined to support her hypothesis that running specific combinations of meta-programmes is positively correlated with the development of identifiable pathologies.
A negative (helpless-hopeless) mind-state slows recovery, whereas a positive mind-state speeds up the rate of recovery and reduces secondary complications such as depression and chronic pain (Robles et al., 2005). This was first recognised by Frankl (1959), a psychiatrist and neurologist who, upon being interned in a concentration camp during World War II, noticed that in his fellow inmates the development of disease was preceded by a mind-state of ‘despair’. He postulated that one’s immunity might be suppressed by a helpless mindset. This was later demonstrated by Ader and Cohen (1975) in their experiments on rats using cyclophosphamide and saccharin.
Daniels (2010) determined that mind and body are one system and that it is possible to predict the connection using meta-programmes. She asked: is it the emotion that triggers the neurochemical and immune changes, or could the trigger be the thinking pattern [meta-programme] that gives rise to the emotion? Changes in intention, awareness and thus choice of thinking styles (the combination of meta-programmes) would also influence changes in behavioural patterns, potentially promoting more optimal lifestyle choices.
However, valid as Daniels’s research is, her approach to what meta-programmes mean and do makes the same mistake as that of other researchers in this field in that, for example, she places ‘Sameness’ in opposition to ‘Difference’ (p. 15) and refers to them as ‘opposite poles’, as per Hall (2002).
Daniels is demonstrating in her study that the ‘fingerprints’ mentioned by Feldman-Barrett (2010) potentially have a meta-programme representation that can map the behavioural as well as cognitive deconstruction of those fingerprints.
Finally, in her study, Daniels (2010) used the Identity Compass Profile tool to determine her subjects’ meta-programme combinations, as its concrete, closed questions provided a detailed report and standardised data as output, thus negating the need to develop a new tool. The use of the Identity Compass to raise a post-graduate student’s awareness of their meta-programmes in order to improve their metacognition would be a valid learning outcome in its own right (Gunstone, 1994).
From a motivational perspective, it is well established that cognitive control functions impact performance and can be enhanced or impeded by emotion and motivation (see Pessoa, 2009). Importantly, emotion and motivation are not directly aligned (see Harmon-Jones and Gable, 2018 for a review). The ability to voluntarily control attention has been linked to emotional valence (Derryberry and Reed, 2002), which can also enhance memory independent of stimulation (Adelman and Estes, 2013).
With awareness of our self-construct comes the understanding of how we relate to the environment and our relationship with others. With self-reflection comes an enhanced awareness which can expand our thinking capacity and allow for a deeper understanding of our meta-programmes (Linder-Pelz, 2011). However, how do we know our level of awareness, and what criteria do we use to gauge it in order to develop it from a meta-programme perspective? The ability to reframe the context of the situation and move from an emotional to a logical response sets apart the differing levels of cognitive capacity (Evans and Stanovich, 2013). This is important because, in order to be a high-level thinker, one must move through emotion into cognition (Laske, 2008), so as not to allow an emotional decision to limit one’s thinking; and thus the literature comes full circle.
Learning Styles or Education Styles?
The phrases ‘learning styles’, ‘cognitive styles’, ‘learning strategies’ and ‘learning skills’ have been used within the literature with very little consistency of meaning (Adey, Fairbrother, William, Johnson & Jones, 1999). In particular, from an educational perspective, ‘cognitive style’ and ‘learning style’ are used interchangeably in the literature, causing considerable confusion (Furnham, 1995).
‘Cognitive style’ was first used as a descriptive term by Allport in 1937, who defined it as an individual’s innate, habitual or preferred mode of information processing (Cassidy, 2004, p. 420). Further, Messick (1976) defined cognitive styles as ‘representing consistencies in the manner or form of cognition, as distinct from the content of cognition or the level of skill displayed in the cognitive performance’. By this, he meant that cognitive styles are stable and consistent across various contexts of behaviour. Messick’s definition suggests that cognitive styles stem from underlying personality structure. However, it will be argued here that personality is derived from one’s construction of one’s thinking style, which also impacts how post-graduate students learn. Cognitive styles are used to describe an individual’s habituated thinking, perceiving and remembering (Riding and Cheema, 1991).
The term ‘learning style’ was first embraced when researchers became interested in how style could be applied to academia and organisations (e.g., Dunn, Dunn & Price, 1979; Honey & Mumford, 1986). The theory has gained popularity over the years (Pashler et al., 2009), and has led to a variety of models (Coffield, Moseley, Hall & Ecclestone, 2004). There is very little differentiation between the models (psychometric isomorphism); however, they do differ in their definitions. For example, the Fleming model (Fleming, 2006; Fleming & Mills, 1992) suggests that the four types of learning styles are visual, aural, read/write, and kinaesthetic, whereas the Dunn and Dunn model (Dunn, 1990; Dunn & Dunn, 1978) suggests that the four learning styles are environmental, emotional, sociological, and physical. Salter, Evans, and Forney (2006), investigating the Learning Styles Inventory (LSI) and the MBTI (Myers–Briggs Type Indicator), found that learning styles seem to be moderately stable over a two-year period.
Despite the research listed here, there is no empirical evidence to suggest that learning styles have a factual basis (Lilienfeld et al., 2010; Pashler et al., 2009; Willingham, 2009). Pashler et al. (2009) conducted a meta-analysis which found “… at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice” (p. 105). Similarly, Reiner and Willingham (2010) state, “students may have preferences about how to learn, but no evidence suggests that catering to those preferences will lead to better learning” (p. 35). Lending support to these perspectives, a number of studies found evidence that contradicted the idea of learning styles (Massa & Mayer, 2006; Constantinidou & Baker, 2002).
Learning styles became the umbrella term used to encapsulate cognitive styles and other behavioural factors, such as instructional and environmental factors (Riding and Cheema, 1991). However, in the literature pertaining to this study, the phrase ‘learning styles’ refers to the concept of individuals ‘preferring’ to process information in three different ways: Visual, Auditory and Kinaesthetic, which are Thinking Preferences (Maus, 2011), or ‘Meta-Programmes’ (Hall, 2000), implying that individuals are better able to process information when it conforms to this preference (Pashler et al., 2009). Pashler et al. (2009) trace the origin of learning styles back to the Myers–Briggs Type Indicator. Allcock and Hulme (2010) argue that the learning styles approach has been influenced by Gardner’s multiple intelligence theory (Gardner, 1991, 1993), in that it suggests teaching instruction should align with a student’s preferred learning style. Fridley and Fridley (2010) also link the proliferation of learning styles to Gardner’s intelligences, but emphasise intrinsic weaknesses within the theory. The use of the word ‘preferred’ in this context is potentially inaccurate if that learning style is unconscious to a post-graduate student.
Kolb’s (1984, 1985) inventory categorises learners along two axes: a preferred mode of perception (concrete or abstract) and a preferred mode of processing (active experimentation or reflective observation) (Gogus and Gunes, 2011; Pashler et al., 2009; Zacharis, 2011). These axes then produce a grid of four quadrants where learners find themselves: concrete-reflective (divergers who favour feeling and watching), abstract-reflective (assimilators who favour thinking and watching), abstract-active (convergers who favour thinking and doing), and concrete-active (accommodators who favour feeling and doing).
There are a number of concerns with the validity of Kolb’s inventories (Kappe et al., 2009; Martin, 2010). Fridley and Fridley (2010) argue that they have very little predictive value: if teaching were matched to a student’s learning style, then an increase in learning should be apparent. However, this is not supported; Scott (2010) demonstrated Kolb’s learning styles inventories to be unreliable in a factor analysis, thus questioning the validity of the constructs. Honey and Mumford’s (1986) Learning Style Questionnaire (LSQ) was developed to address the validity of Kolb’s assessment (Kappe et al., 2009). The LSQ identified four types of learners: activists, theorists, pragmatists, and reflectors. However, factor analyses have also shown the LSQ to have reliability issues (Scott, 2010). Scott (2010) also suggests that those people best placed to evaluate Kolb’s ideas regard them with great scepticism, and two prominent cognitive psychologists, Reiner and Willingham (2010), went so far as to say learning styles are a myth. For undergraduate students, the psychology textbooks are more reserved in their opinions. Ormond (2012) matched students’ preferred learning styles to the way they were instructed, with no discernible impact on their academic achievement. Since learning is not merely receiving information but making sense of it (Brown, Campion & Day, 1981):
…to become expert learners, students must…learn about their own cognitive characteristics, their available learning strategies, the demands of various learning tasks and the inherent structure of the material… As instructors our task should be to devise training routines that will help the student to develop the understanding of the learning situation. (Brown, Campion & Day, 1981) PAGE NUMBER
According to the literature, learning preference is divided up by the sensory channels of Visual, Auditory and Kinaesthetic. This suggests that people prefer to receive information by looking at it, listening to it, or working with it with their hands in some way; a combination of these strategies is also available. The way learners process information has led to them being described as ‘serialists’ and ‘holists’ (Pask & Scott, 1972).
According to Pask, serialist learners are step-by-step, linear learners; for example, they tend to have a focused, rather than a wide-ranging, view of a subject. Conversely, holists are non-linear, ‘global’ learners who can perceive a body of information as a whole. They are capable of making connections between concept and application, building connections between topics. For very young children, Carbo (1996) states that phonics instruction benefits students with analytic and auditory learning styles, whilst students with a more global nature benefit from whole-language instruction.
Meta-Learning for Better Learning
In the context of a university education, Brown (2002) established the meta-programme patterns of Accountancy lecturers using his own system for measurement: the MPQ. In 2003, he went on to compare the leading meta-programmes of Accountancy lecturers to those of Accountancy students (Brown, 2003). This helped establish the importance of meta-programmes to the teacher/student relationship and how students perceived the efficacy of their lecturers based on their meta-programme influence.
Brown (2004) discovered that where students differed from their lecturers’ meta-programme preferences, they viewed the quality of the teaching less favourably, even though the lecturer might have been very knowledgeable. It could thus be argued that it is possible to improve students’ educational experience by matching meta-programme preferences (Lawley, 1997), which contradicts the findings in the section on thinking styles above.
Brown and Graff (2004) identified positive and negative correlations between the meta-programme patterns of undergraduates and their performance in summative assessments. Meta-programmes are described by O’Connor and McDermott (1995, p. 79) as ‘a description of a set of behaviours that are evoked in a certain context’, which labels them context-specific. An assessment tool such as that used by Daniels (2010), tailored to post-graduate students, would have the potential to identify those patterns particularly influencing their academic experience and provide clear evidence on which to base future research.
Brown (2005) discovered that for undergraduate students, the patterns Independent/Co-operative and Through-time/In-time were more significant to higher education than the equivalent patterns in the MPQ: Past/Present/Future and People/Places/Activities/Information/Things respectively. In addition, Seeing/Hearing/Feeling was refined by the addition of Auditory-Digital.
Entwistle and Tait (1990) found that students were more likely to describe teaching as ‘effective’ if it complemented their learning style. Hauer, Straub and Wolf (2005) included nursing students in their study and found that the nursing and speech therapy students were more inclined towards concrete experimentation, whereas the occupational therapists and physiotherapists favoured abstract conceptualisation (Titiloye & Scott, 2001).
Dweck (2000) has demonstrated that a student’s behaviour within education is affected by their beliefs about their intelligence. This influenced Brown (2002), who included questions relating to beliefs about intelligence in his MPQ, and consequently coined the term ‘metacognitive patterns’ to describe his findings.
As mentioned in previous sections, people do not always learn from experience. Having expertise does not necessarily aid in rooting out false information, and expertise can also be a limiting factor that prevents us from questioning counter-evidence for fear of cognitive dissonance (Eurich, 2018). It can also make individuals over-confident. One study found that managers with years of experience were still unable to give an effective assessment of their leadership capabilities compared to less-experienced managers (Ostroff, Atwater & Feinberg, 2007). Rigas, Carling and Brehmer (2002) went one step further and discovered that people do not improve their judgement with experience. Another study of more than 3,600 participants at high levels within industry found that they over-valued their skills compared to the perceptions of their co-workers. However, this lack of self-awareness was countered by those leaders who sought critical feedback from superiors and subordinates, and who were then perceived as more effective by both.
Introspection is assumed to be a developmental tool that promotes a meta-position to one’s thinking and feeling, and thus leads to improved self-awareness, as mentioned in the section on metacognition. It is considered a facet of metacognition essential to the process of conscious change (Carver & Scheier, 1998). However, other research demonstrates that this is not always the case. In a study using the Self-Reflection and Insight Scale, Grant, Franklin and Langford (2002) found that an individual’s skill in self-evaluation and their propensity for conscious, rather than automatic, self-reflection do not necessarily mean that they are capable of developing clarity of insight.
Another perspective on meta-programmes was offered by Pochron (2014), who suggested that meta-programmes align to Kegan’s (1994) stages by way of habituated physiological states. Once habituated, these states become installed as meta-programmes, albeit at an unconscious level. Utilising specific meta-programmes outlined in the research, Pochron mapped them individually to Kegan’s stages. However, although Pochron named these ‘developmental’ meta-programmes, he missed the Intention and Awareness of the meta-state (implied at Kegan’s higher stages) from a position of choice by the actor, and thus fell short of the full potential of combining meta-programmes to form thinking styles.
What is evident from the literature on self-awareness, in this and previous sections, is that an individual’s level of self-awareness is far lower than their own perception of it, and that external feedback is paramount if one is to grow (Eurich, 2018). Where this corresponds with stages of adult development is in the individual’s starting point for self-reflection. As mentioned, an individual at Kegan’s (1994) Stage 4 (self-authored thinking) will have a more profound understanding of their relationship with their own thinking in context than an individual at Stage 2. For Laske (2008), this would be a more dialectical understanding of the drivers of one’s thinking and behaving in context, as he asks: what is one not seeing that is just as important (ibid., p. 22)? However, these positions are not innate, and must be shown to the individual through external guidance by a more complex other. A measure of self-awareness from a meta-programme perspective could be advantageous not only for post-graduate students but also for the general public, as a springboard for growth.
From the literature on meta-programmes it is apparent that they have a multitude of applications (Hall, 2005) and that numerous studies, such as those by Brown and Daniels, have utilised them both individually and collectively as methods for determining a person’s intention in context, as well as for evidencing their self-awareness. However, there would appear to be potential to utilise them beyond what is currently covered in the literature: no study as yet has measured how meta-programmes interact with each other, or the individual’s awareness of this interaction. From a self-awareness position, the evidence in this section points to a positive effect on awareness if exposure to meta-programmes were more widespread.
Across the literature examined, a number of findings emerge when we look at the way individuals think, think about their thinking, and are self-aware of their thinking in context. Hackley (2003) emphasised that if a selected area of study is novel, there will be little research that deals with the topic. Fein and Jordan (2016) suggest that there is a lack of Adult Development (AD) research in social science, and that work going forward should provide inspiring examples of different approaches using the AD perspective to accomplish interesting and novel research. This literature review has reached the same conclusion, and this thesis aims to act on that suggestion. Development in this review has been understood as the sequential growth in complexity of meaning-making, and a distinction has been uncovered: existing studies omit an individual’s intention, awareness, choice and response in the moment. The theories discussed show a transformation process in the organising structures of an individual’s meaning-making, producing qualitatively different changes (Hoare, 2006).
The initial finding is that the various psychologists describe the stages of development adequately in terms of behavioural output, typically via longitudinal study methods; however, they are not specific about what develops and how, especially regarding an individual’s intention to grow and their in-the-moment awareness of any growth.
Explanations such as bridging and holding offer little specificity about the actual process of growth and, again, attribute no real intention for such growth to the thinker. Although both Kegan (1994) and Laske (2008) offer scales for gauging development, the process of growth is limited to an interview process for Laske, and no feedback at all for Kegan. Linder-Pelz (2010) challenges this position and stipulates that feedback is paramount if the individual is to benefit from the information; but again, the specific steps of growth are omitted.
In general, individuals are not aware of their thinking in the moment (Touw, Meijer and Wubbels, 2015), and the literature reinforces this perspective to some degree (Hayes, 2015; Kallio, Virta, and Kallio, 2018). Eurich (2018) reaches the same finding in her research, which in turn echoes Nelson, Kruglanski and Jost’s (1998) meta-analysis of the metacognitive literature, as mentioned. Peacocke (2007) also demonstrated this apparent inability to be self-aware.
There would appear to be room for a new concept that continues Kegan’s ‘Universal On-Going Process of Development’, building on its foundations whilst uncovering a new way of determining one’s developmental level, based on one’s intention in the moment as a result of self-awareness. As this would initially be grounded in academia, among post-graduate students, it is worth noting Bodrova and Leong’s (2001) comment that there is a systemic failure to explore thinking and awareness, as students seek instead to download information. Cleary, Callan and Zimmerman (2012) comment that with awareness comes a higher level of thinking, in which the individual recognises their awareness and seeks to repeat the action in support of synthesising the thinking process. This opens the door to questions about how levels of adult development might affect post-graduate students in context.
Although one would expect a connection between the capability and capacity of the post-graduate student’s thinking and the demands of post-graduate academia, especially in doctoral research, there is little research linking them from either an output or a growth perspective for the researcher. When one asks whether the process of doctoral research should grow the student by virtue of the process itself, the response is always positive. However, when asked how, in quantifiable terms, the response is less convincing.
If this could be linked to Laske’s Thought Form principle, whereby the structure of a post-graduate student’s thinking is accessed using meta-programmes, and then, following in Daniels’ (2010) footsteps, an existing method (the Identity Compass profile tool) is used to deconstruct that thinking, it might be possible to develop a new concept that determines a person’s intention to grow, their awareness of this intention, and how this affects their capacity and capability as a thinking post-graduate student, with meta-programmes used to positively influence their self-awareness. Finally, Karmiloff-Smith, Kuhn and Vygotsky all use the term ‘dynamic’ to describe their theoretical positions. This is worth considering from the perspective of how meta-programmes can be combined to produce a dynamic interaction between thinking and behaving.
Therefore, the primary purpose of this thesis is to begin developing a concept that fills the gaps illustrated in this Literature Review by asking the question: is there a conceptual measure of self-awareness in the moment, determinable through the use of meta-programmes, that has not been utilised previously?
The best way to achieve this will be discussed next, in the Methodology chapter.