Endnotes

© David Gamez, CC BY 4.0 https://doi.org/10.11647/OBP.0107.14

2. The Emergence of the Concept of Consciousness

1 Qualia (singular: quale) is a technical philosophical term that refers to the qualitative or subjective properties of experiences. For example, the colour red, the taste of chocolate and the sound of a bell are qualia.

2 See Gamez (2007, Chapter 2) and Lehar (2003) for more detailed descriptions of bubbles of perception. Husserl (1964) has a good analysis of the temporal structures of our bubbles of perception.

3 It might be thought that a bubble of experience is a version of Dennett’s (1992) Cartesian Theatre, in which a homunculus observes the contents of consciousness. This appears to require a second homunculus inside the head of the first, and so on ad infinitum. However, bubbles of experience do not have the same problems as Cartesian Theatres because there is no perception within a bubble of experience. My physical brain perceives its physical environment using electromagnetic waves, etc.; my conscious experience of my body does not perceive my conscious experience of my environment using a conscious experience of electromagnetic waves—there is no transmission of information within a bubble of experience. I have discussed this in more detail elsewhere (Gamez 2007, pp. 47-8).

4 For example, Lucretius (2007).

5 Locke (1997, p. 137).

6 This example is taken from ancient scepticism. Studies have shown that people have different perceptions of bitterness (Hayes et al. 2011) and there is substantial variability in people’s olfactory perception (Mainland et al. 2014).

7 See Galilei (1957) and Locke (1997). This distinction was also developed by the ancient atomists—see Taylor (1999) for a discussion.

8 It might be claimed that honey is sweet in itself and produces sensations of sweetness (or bitterness) through the interaction of its sweetness with our senses. In this case the different properties perceived by different observers would be due to the different ways in which the physical sweetness interacts with their senses. The problem with this proposal is that it is impossible to decide whether the honey is sweet and produces a false bitter sensation when it interacts with Zampano’s senses, or whether it is bitter and produces a false sweet sensation when it interacts with my senses. It is much simpler to attribute all sensory properties to the interaction between the physical world and the sense organs.

9 Locke (1997, pp. 136-7).

10 For example, O’Regan and Noë claim: ‘There can therefore be no one-to-one correspondence between visual experience and neural activations. Seeing is not constituted by activation of neural representations. Exactly the same neural state can underlie different experiences, just as the same body position can be part of different dances.’ (O’Regan and Noë 2001, p. 966). A less radical position can be found in Noë’s later work: ‘A reasonable bet, at this point, is that some experience, or some features of some experiences, are, as it were, exclusively neural in their causal basis, but that full-blown, mature human experience is not.’ (Noë 2004, p. 218).

11 This is formally stated as assumption A4 in Section 4.5.

12 Noë’s (2004) bet that this type of conscious experience is not exclusively correlated with neural activity is a different working assumption that can be experimentally tested. I have discussed this point in more detail elsewhere (Gamez 2014b).

13 When a neuron fires it emits a short electrical pulse known as an action potential or spike. This electrical pulse has an amplitude of ~100 mV and a duration of ~2 ms, and it can be transmitted to other neurons or passed along the nerves.

14 As Dennett puts it: ‘The representation of space in the brain does not always use space-in-the-brain to represent space, and the representation of time in the brain does not always use time-in-the-brain.’ (Dennett 1992, p. 131). The distinction between space and time in the mind and space and time in the objective world was introduced by Kant (1996), who claimed that space and time are forms of intuition. According to Kant it is unknowable whether space and time are present in the objective noumenal world.

15 It might be thought that the traditional primary property of number is an exception. However, number is not a physical property, but the magnitude of a physical property, which is obtained through a measurement procedure and varies with the system of units. For example, we can carry out an act of counting that results in a number, or extract the ratio of two masses as a number. Consider a ball that weighs 7.3 kilos (16.1 pounds): the mass is a physical property of the ball, not the numbers 1, 7.3 or 16.1.
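
A minimal sketch of this point (a unit conversion in Python, with the illustrative values from this note): the same physical property yields different magnitudes under different systems of units, so the numbers belong to the measurement procedure rather than to the ball.

```python
# The same mass produces different numbers in different systems of units,
# so the magnitudes are products of measurement, not properties of the ball.
mass_kg = 7.3
mass_lb = mass_kg * 2.20462  # conversion factor from kilograms to pounds
print(round(mass_lb, 1))     # 16.1
```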

16 Kant’s (1996) metaphysics expresses a similar idea: the noumenal physical world is an invisible source of signals that are processed through the categories to become phenomenal experiences.

17 This use of discrete black boxes to illustrate objects in the physical world is not strictly correct because the boundaries between physical objects depend on the observer’s sensory apparatus and ontology (Gamez 2007, Chapter 5).

18 Russell (1927, p. 163).

19 Many people today have a different interpretation of our bubbles of experience, which is often aligned with idealism and rejects the scientific interpretation of physical reality—Tibetan Buddhism is one example. This book will not examine these other interpretations of consciousness and the physical world.

20 ‘Consciousness’ is sometimes used to refer to an individual person’s consciousness and sometimes used as a mass term to refer to all of the consciousness in existence—just like ‘water’ is used to refer to all of the water in existence. ‘What is consciousness?’ and ‘What is water?’ treat consciousness and water as mass terms. In this definition I have tried to limit the ambiguity by linking a state of a consciousness to a state of a bubble of experience.

21 Galilei (1957, p. 274).

22 Suppose a person’s bubble of experience contains three objects, P, Q and R. P appears with intensity 0.7, Q appears with intensity 0.8 and R appears with intensity 0.3 (these values are purely illustrative). According to the proposed definition, this person’s overall level of consciousness would be the average of these intensity values: (0.7+0.8+0.3)/3 = 0.6.
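
A minimal sketch of this calculation (the intensity values are purely illustrative, as in the note):

```python
# Average the intensity of each object in the bubble of experience to obtain
# the overall level of consciousness under the proposed definition.
intensities = {"P": 0.7, "Q": 0.8, "R": 0.3}
level = sum(intensities.values()) / len(intensities)
print(round(level, 2))  # 0.6
```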

23 This is similar to our use of ‘awake’, except a person can be awake without having a bubble of experience. For example, vegetative state patients are presumed to be unconscious, but they can have cycles of wakefulness in which they open their eyes and move their body in meaningless ways (Laureys et al. 2002).

24 Metzinger (2003) has a good discussion of online and offline conscious experience.

25 Functional connectivity (a deviation from statistical independence between A and B) is typically contrasted with structural connectivity (a physical link between A and B) and with effective connectivity (a causal link from A to B)—see Friston (1994; 2011). A number of algorithms exist for measuring functional and effective connectivity.

26 The research on change blindness, inattentional blindness and change detection in peripheral vision suggests that the amount of online conscious content is less than we think (Cohen and Dennett 2011; Rensink et al. 1997; Simons and Chabris 1999; Simons and Rensink 2005).

27 Wilkes (1988b, pp. 16-7).

28 Wilkes (1988b, p. 38).

29 Lucretius (2007) claims that the soul (a combination of spirit [anima, the vital principle] and mind [animus, the intellect]) is a subtle particle. See Chapter 6, Footnote 10.

3. The Philosophy and Science of Consciousness

1 See Husserl (1960).

2 For example, Smart (1959).

3 See Section 2.1.

4 For example, piano-playing pigs are unlikely to have entered the Aztecs’ imaginations.

5 See Nagel (1974).

6 Other problems with thought experiments and imagination have been discussed by Wilkes (1988a) and Gamez (2009). The Stanford Encyclopedia of Philosophy has a good overview (Brown and Fehige 2014).

7 McGinn (1989, p. 349).

8 Metzinger (2000, p. 1).

9 The quote by McGinn at the beginning of this section is a typical description of the hard problem of consciousness. Chalmers (1995b) made a popular distinction between easy and hard problems of consciousness. Strawson (2015) gives a historical overview.

10 This example has been simplified. If the brain-imaging device showed the complete state of my brain on the screen, then the conscious experience of p1 would be linked to everything that was going on in my brain, both consciously and unconsciously—it would not just be the pattern associated with my conscious experience of the ice cube. Multiple experiments would be required to identify and selectively display the brain activity that was linked to my conscious experience of the ice cube.

11 To make the text easier to read I have separated conscious experiences of brains from other conscious experiences. However, our conscious experience of a brain is a conscious experience. So in this example, some of the brain patterns on the screen will be associated with our conscious experience of the brain patterns on the screen.

12 Our finite cognitive capacities (long and short term memory, etc.) will limit our ability to learn associations between conscious experiences of brain patterns and other conscious experiences. For example, it is unlikely that we will be able to learn all of the details of a complex brain pattern.

13 Rorty (1980, p. 71).

14 I have told this story from the perspective of consciousness. It can also be told in terms of physical brain activity. If we knew enough about the brain, we could describe how it learns to associate sensory stimuli from brains with other sensory stimuli.

15 Elementary wave-particles and superstrings are only put forward as examples. Future advances in physics might explain the behaviour of elementary wave-particles and superstrings in terms of brute regularities at a lower level.

16 Boyle’s law is a good example of a scientific law that can be explained in terms of regularities at a lower level. It states that the pressure of a gas is inversely proportional to its volume in a closed system at constant temperature. This macro-scale experimental observation can be explained in terms of the behaviour of atoms and molecules, which was formerly treated as a brute regularity that was the starting point for scientific explanations.
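
A minimal worked example of the law (illustrative values, assuming an ideal gas in a closed system at constant temperature):

```python
# Boyle's law: pressure is inversely proportional to volume, so p1*v1 == p2*v2
# in a closed system at constant temperature.
p1, v1 = 100.0, 2.0  # kPa and litres (illustrative values)
v2 = 1.0             # halve the volume...
p2 = p1 * v1 / v2    # ...and the pressure doubles
print(p2)            # 200.0 kPa
```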

4. The Measurement of Consciousness

1 Chalmers (1998, p. 220).

2 Descriptions of consciousness can be interpreted as statements about the physical world. When I report that I have a conscious experience of a rusty helmet beside my conscious experience of my left foot, I am also reporting that there is a rusty helmet beside my left foot in the physical world.

3 Wittgenstein (1969) discusses how our knowledge is underpinned by a framework of certainties that cannot be doubted without putting everything into question.

4 When we imagine different motor tasks, such as walking around a house or playing tennis, we activate different brain areas that can be discriminated in an fMRI scanner. This enables people to answer yes/no questions about their consciousness by imagining that they are performing one of two actions. This method has been used to communicate with patients in vegetative or minimally conscious states, who were incapable of other forms of voluntary behaviour (Monti et al. 2010; Owen et al. 2006).
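
A purely hypothetical sketch of the final decoding step (the imagery labels and mapping are invented for illustration; the cited experiments describe the actual protocols):

```python
# Hypothetical decoding step: the subject imagines tennis for 'yes' and
# navigating a house for 'no'; a classifier over the fMRI data outputs an
# imagery label, which is then mapped to an answer.
def decode_answer(imagery_label: str) -> str:
    mapping = {"tennis_imagery": "yes", "navigation_imagery": "no"}
    return mapping[imagery_label]

print(decode_answer("tennis_imagery"))  # yes
```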

5 This list of behaviours includes suggestions from Shanahan (2010), Koch (2004) and Teasdale and Jennett (1974).

6 Post-decision wagering is a method that is used to measure consciousness in psychology (Persaud et al. 2007). A person is asked to make a decision and to bet on the accuracy of that decision. It is assumed that the person will bet more money on decisions that are based on conscious information. See Sandberg et al. (2010) for a comparison of post-decision wagering, the perceptual awareness scale and confidence ratings.
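
A toy simulation of the assumption behind the method (all probabilities are invented for illustration): if subjects bet high on decisions based on conscious information, high wagers should track accuracy.

```python
import random

random.seed(1)

# Simulate trials in which conscious decisions are more accurate and attract
# high bets; the accuracy gap between high and low wagers is the signal that
# post-decision wagering is assumed to measure.
trials = []
for _ in range(10000):
    conscious = random.random() < 0.5
    correct = random.random() < (0.9 if conscious else 0.55)
    trials.append(("high" if conscious else "low", correct))

for bet in ("high", "low"):
    outcomes = [c for b, c in trials if b == bet]
    print(bet, round(sum(outcomes) / len(outcomes), 2))  # ~0.9 vs ~0.55
```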

7 An overview of some of the techniques for measuring consciousness is given by Seth et al. (2008).

8 Damasio (1999, p. 6).

9 An overview of binocular rivalry is given by Blake (2001).

10 This is a simplified summary of the large number of experiments that have been carried out on visual masking and non-conscious perception. For example, Dell’Acqua and Grainger (1999) showed that unconsciously perceived pictures influenced subjects’ ability to consciously name pictures and categorize words. Schütz et al. (2007) showed that masked prime words can influence how subjects complete gap words. Merikle and Daneman (1996) played words to patients under general anaesthesia and found that when they were awake they completed word stems with words that they had heard non-consciously. A change in the skin’s conductivity is known as a galvanic skin response, which can indicate that information is being processed unconsciously (Kotze and Moller 1990). Öhman and Soares (1994) showed that subjects’ skin conductance response changed when they unconsciously perceived phobic stimuli, such as pictures of snakes or spiders. A review of experimental work on visual masking and non-conscious perception is given by Kouider and Dehaene (2007).

11 This is known as forced choice guessing. While some people believe that above chance results on a forced choice guessing task demonstrate that conscious information is present, blindsight patients can guess the identity of visual stimuli above chance while reporting no subjective awareness (Weiskrantz 1986). Seth et al. (2008) discuss these issues.

12 By ‘associated’ it is meant that consciousness is linked to a platinum standard system, but no claims are being made about causation or metaphysical identity.

13 The metre used to be defined as one ten-millionth of the distance from the Earth’s equator to the North Pole at sea level. Since this was difficult to measure, a platinum-iridium bar was used instead. Rulers were directly or indirectly calibrated against this bar, which was kept in Paris.

14 If the platinum-iridium standard metre doubled in size, an object that used to be 1 metre long (1 platinum-iridium standard metre bar) would have a new length of 0.5 metres (0.5 platinum-iridium standard metre bars). This would only be strictly true if the platinum-iridium bar was the actual definition of the metre, rather than the working definition. The same argument applies to the actual definition of the metre.

15 Functional connectivity (a deviation from statistical independence between A and B) is typically contrasted with structural connectivity (a physical link between A and B) and with effective connectivity (a causal link from A to B)—see Friston (1994; 2011). A number of algorithms exist for measuring functional connectivity (for example, mutual information), and it can be measured with a delay.
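
A minimal sketch of one such measure on synthetic data (lagged Pearson correlation is used here as a simple stand-in for measures such as mutual information):

```python
import numpy as np

rng = np.random.default_rng(0)

# B echoes A after 5 samples, so A and B deviate from statistical independence
# at that delay: a simple example of functional connectivity with a lag.
a = rng.standard_normal(1000)
b = np.roll(a, 5) + 0.5 * rng.standard_normal(1000)

def lagged_correlation(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

print(round(lagged_correlation(a, b, 5), 2))  # strong (~0.9)
print(round(lagged_correlation(a, b, 1), 2))  # weak (~0.0)
```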

16 While phenomenal consciousness and access ‘consciousness’ might be conceptually dissociable (Block 1995), the idea that non-measurable phenomenal consciousness could be present during experiments on consciousness is incompatible with the scientific study of consciousness. Block’s non-accessible phenomenal consciousness does not appear in c-reports, so everything that Block has ever written or said about it is meaningless or false.

17 A possible exception to this would be a situation in which non-reportable consciousness is present but does not interfere with our ability to identify the correlates of consciousness. This is discussed in more detail in Chapter 9, Footnote 13.

18 This is similar to Block’s (2007) idea of cognitive accessibility.

19 Dennett questions the idea that there is a single stream of consciousness with a fixed content: ‘the Multiple Drafts model avoids the tempting mistake of supposing that there must be a single narrative (the ‘final’ or ‘published’ draft, you might say) that is canonical—that is the actual stream of consciousness of the subject, whether or not the experimenter (or even the subject) can gain access to it.’ (Dennett 1992, p. 113). Personally I do not find Dennett’s arguments for his multiple drafts model convincing. We will find out whether he is correct because, if he is, it will be impossible to obtain systematic stable measurements of consciousness.

20 Panpsychism is the view that all matter is linked to consciousness. For example, some versions of panpsychism claim that individual electrons, quarks, etc. are associated with simple bubbles of experience.

21 A2 is also likely to be incompatible with Zeki and Bartels’ (1999) proposal that micro-consciousnesses are distributed throughout the brain.

22 The perceived colour of an object does not just depend on the frequencies of the electromagnetic waves that are reflected, transmitted or emitted by it. Our visual system also uses the spectrum of illuminating light and the colour of surrounding objects to identify an object’s colour, which enables us to attribute the same colour to objects under different lighting conditions. I have used electromagnetic wave frequencies to simplify the presentation of the colour inversion argument, which also applies to a more accurate account of colour perception.

23 There are likely to be subtle behavioural differences between two colour-inverted people—see Palmer (1999) for a discussion of behaviourally equivalent inversion scenarios. These differences would disappear if completely different sets of ‘colour’ experiences were linked to frequencies of electromagnetic waves.

24 If a set of properties, A, supervenes on another set of properties, B, then it is impossible for two things to have different A properties without also having different B properties. This is not a causal relationship.

25 Kouider et al. (2013) and Dehaene (2014) discuss infant consciousness.

26 Animal consciousness is discussed by Dehaene (2014), Edelman and Seth (2009) and Feinberg and Mallatt (2013).

27 People with Anton-Babinski syndrome are blind, but claim that they can see and confabulate to cover up the contradictory evidence. Other anosognosia patients are completely paralyzed on one side, but claim that their body is working perfectly.

28 Section 2.4 discusses theories that link consciousness to sensorimotor interactions between the brain, body and environment. In previous work I made the assumption that the awake normal adult human brain is a platinum standard system (Gamez 2011; Gamez 2012a). The more developed account of c-reports presented in this book makes the assumption that the brain is awake unnecessary—immobility, unresponsiveness, etc. are c-reports of zero consciousness.

29 Although brain-damaged patients have played an important role in consciousness research, they should not be uncritically assumed to be platinum standard systems. There can be ambiguities about whether the damage has knocked out the memory and reporting functions and left the consciousness intact, or knocked out the consciousness and left the memory and reporting functions intact. For example, locked-in patients are thought to be fully conscious, but they are only capable of moving their eyes, and some of the patients studied by Owen et al. (2006) and Monti et al. (2010) are likely to be conscious but unable to display this in their external behaviour. The use of brain-damaged patients in consciousness research has the further problem that the damage is typically non-localized and some brain areas are likely to perform several different functions. One way of addressing this issue is to assume that brain-damaged patients are platinum standard systems on a case-by-case basis, taking the details of the damage into account and its likely impact on memory and/or reporting.

A similar ambiguity applies to the use of anaesthetics in consciousness research. For example, midazolam, xenon and propofol are used to induce unconsciousness, so that scientists can compare the state of the conscious and unconscious brain (Casali et al. 2013). This raises the question whether the anaesthetic completely removes consciousness, or just paralyzes the body and prevents the subject from remembering and reporting their consciousness. This issue can also be addressed on a case-by-case basis. We can examine the mechanism of each anaesthetic and decide whether it is likely to affect the areas linked to memory and/or reporting. Normally functioning adult human brains containing anaesthetics that do not affect memory and/or reporting can be assumed to be platinum standard systems.

Animal experiments can also be handled on a case-by-case basis. We can assume that the brains of monkeys or mice are associated with consciousness, so that we can use these animals in consciousness research.

30 The notion of a minimal set is intended to exclude features of the brain that typically occur at the same time as consciousness, whose removal would not lead to the alteration or loss of consciousness. For example, a CC set might have prerequisites and consequences (Aru et al. 2012; de Graaf et al. 2012) that typically co-occur with consciousness, but the brain would be conscious in exactly the same way if the CC set could be induced without these prerequisites and consequences.

31 This is similar to Chalmers’ (2000) definition of the total correlates of consciousness, which he distinguishes from the core neural basis: ‘A total NCC builds in everything and thus automatically suffices for the corresponding conscious states. A core NCC, on the other hand, contains only the “core” processes that correlate with consciousness. The rest of the total NCC will be relegated to some sort of background conditions required for the correct functioning of the core.’ (Chalmers 2000, p. 26). Block (2007) makes a similar distinction.

32 This will not be correct if some spatiotemporal structures can inhibit consciousness. For example, we might have a CC set, cc1, that is a correlate of consciousness according to D5. In most circumstances consciousness would be present whenever cc1 was present. However, if consciousness was inhibited by ih1, then there could be a situation in which cc1 and ih1 were present together and there was no consciousness.

33 Footnote 15 explains the relationship between functional and effective connectivity. These are typically inferred from data using algorithms, such as Granger causality or mutual information, and they are distinct from physical causation, which is discussed in the next section.

34 There are many spurious correlations—for example, see Vigen (2016). These can be divided into false correlations, which are the result of poor statistical procedures, and true but unlikely correlations that might be due to an underlying cause. When there is a true correlation between A and B it is possible to obtain information about B by measuring A and vice versa (the amount of information that one can obtain depends on the strength of the correlation). In this book I am presenting a framework that is based on the assumption that there is a true statistical correlation (functional connection) between consciousness and the physical world. So we can obtain information about consciousness by measuring parts of the physical world and obtain information about the physical world by measuring consciousness.

35 Kim (1998, p. 31).

36 This distinction is taken from Dowe (2000). It is similar to Fell et al.’s (2004) distinction between efficient and explanatory causation. Efficient causation is concerned with the physical relation of two events and the exchange of physically conserved quantities. Explanatory causation refers to the law-like character of conjoined events.

37 Predominantly conceptual accounts of causation include Lewis’ (1973) counterfactual analysis and Mackie’s (1993) INUS conditions. Empirical theories based on the exchange of physically conserved quantities have been put forward by Aronson (1971a; 1971b), Fair (1979) and Dowe (2000). Bigelow et al. (1988) and Bigelow and Pargetter (1990) link causation to physical forces.

38 See Dowe (2000).

39 A world line is the path of an object through space and time.

40 If all empirical theories of causation are unworkable, then we might have to limit causal concepts to ordinary language and abandon the attempt to develop a scientific understanding of the causal relationship between consciousness and the physical world.

41 Kim (1998) has a good discussion of the relationship between macro and micro physical laws.

42 Wilson (1999) discusses the minimum amount of physical effect that would be required for consciousness to influence the physical brain.

43 A related point is made by Fell et al. (2004), who argue that the neural correlates of consciousness cannot e-cause conscious states.

44 Controversial experiments by Libet (1985) have indicated that our awareness of our decision to act comes after the motor preparations for the act (the readiness potential). This suggests that our conscious will might not be the cause of our actions, and Wegner (2002) has argued that we make inferences after the fact about whether we caused a particular action. These results could be interpreted to show that CC sets do not e-cause c-reports about consciousness because motor preparations for verbal output (for example) would precede the events that are correlated with consciousness. This problem could be resolved by measuring the relative timing of a proposed correlate of consciousness (CC1 in Figure 4.4) and the sequence of events leading to the report about consciousness, including the readiness potential (R1-R3 in Figure 4.4). If the framework presented in this book is correct, then it should be possible to find CC sets with the appropriate timing relationship. If no suitable CC sets can be found, then the framework presented in this book should be rejected as flawed. It is worth noting that Libet’s measurement of the timing of conscious events implicitly depends on a functional connection between consciousness and c-reporting behaviour—the relative timing of consciousness and action can only be measured if consciousness is functionally connected to c-reports about consciousness (in this case with a delay).

45 It is reasonably easy to see how the contents of consciousness that are c-reported could be e-caused by physical events. For example, we can tell a simple story about how light of a particular frequency could lead to the activation of spatiotemporal structures in the brain, and how learning processes could associate these with sounds, such as ‘red’ or ‘rojo’. This might eventually enable a trained brain to produce the sounds ‘I can see a red hat’ or ‘I am aware of a red hat’ when it is presented with a pattern of electromagnetic waves. Since consciousness does not appear to us as a particular thing or property in our environment and many languages do not contain the word ‘consciousness’ (Wilkes 1988b), it is not necessary to identify sensory stimuli that the physical brain could learn to associate with the sound ‘consciousness’. The concept of consciousness can be more plausibly interpreted as an abstract concept that is acquired by subjects in different ways (see Chapter 2). So it is conceivable that the scientific study of consciousness could be carried out without subjects ever using the word ‘consciousness’ in their c-reports.

46 See Footnote 15 for the distinction between structural, functional and effective connectivity. Effective connectivity can be measured using algorithms, such as transfer entropy (Schreiber 2000) or Granger causality (Granger 1969), which works on the assumption that a cause precedes and increases the predictability of the effect. However, effective connectivity does not always coincide with e-causation—for example, when a signal is connected to two areas with different delays.
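
A minimal sketch of Granger causality on synthetic data (using the statsmodels implementation; x drives y with a one-step delay, so past values of x should improve the prediction of y but not vice versa):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)

# y is driven by the previous value of x, so x should Granger-cause y.
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.2 * rng.standard_normal()

# Each call tests whether the second column Granger-causes the first.
grangercausalitytests(np.column_stack([y, x]), maxlag=2)  # significant
grangercausalitytests(np.column_stack([x, y]), maxlag=2)  # not significant
```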

47 In the real brain many areas are reciprocally connected to each other and there is a great deal of recurrent processing. This simplified diagram only shows the general flow of activity from perception to reporting.

48 Cohen and Dennett (2011) illustrate the low resolution of our peripheral vision.

49 Our ability to access high resolution information on demand contributes to our sense that we perceive the world in uniformly high resolution (O’Regan 1992).

50 This is a conservative estimate based on eye-movement driven changes and the assumption that consciousness consists of a series of discrete moments (the specious present). It is also possible that consciousness changes continuously.

51 See O’Regan (1992).

52 People can be trained to make more accurate reports about their consciousness (Lutz et al. 2002) and there has been a substantial amount of work on the use of interviews to help people describe their conscious states. In the explicitation interview (EI) a trained person interviews a subject about a conscious state to help them provide an accurate report (Petitmengin 2006). In descriptive experience sampling (DES) the subject carries a beeper, which goes off at random several times per day. When they hear the beep the subject makes notes about their consciousness just before the beep. This is followed by an interview that is designed to help the subject to provide faithful descriptions of the sampled experiences (Hurlburt and Akhter 2006). Froese et al. (2011) discuss some of the first- and second-person methods for measuring consciousness. These techniques place a heavy reliance on memory, so it is unlikely that they can address the problems highlighted in this section.

53 Shanahan (2010) suggests how an omnipotent psychologist could measure a person’s consciousness by reversing time and carrying out different interventions.

54 Mental techniques could also be used to reset consciousness. For example, people with a high level of mental focus, possibly gained through meditation, might be capable of putting their consciousness into a particular state and maintaining this state for an extended period of time.

55 It might be possible to use what we know about the relationship between a stimulus and consciousness to make reliable inferences about a person’s state of consciousness. Suppose we knew that an awake expectant person always has a conscious experience of a red rectangle when a red rectangle is presented at the centre of their visual field. If this inference was reliable, it might not be necessary to measure their consciousness using c-reports when we expose them to a red rectangle—we could simply infer that since they are looking at a red rectangle, they must be conscious of a red rectangle. However, the limited resolution and active nature of the visual system means that a complex model will be required to map between stimuli and conscious states. Furthermore, this method of inference can only be developed by measuring consciousness using c-reports, which depends on the assumptions that have been presented in this chapter.

56 Nagel (1974, p. 449).

57 You might think that you could validate the descriptions by resetting your consciousness to the state that is being described. But then you would have to compare a remembered description with your current state of consciousness without modifying your current state of consciousness.

58 Formal descriptions of the physical world are covered in Section 5.1.

59 These problems are discussed by Chrisley (1995a) and Gamez (2006).

60 The use of XML to describe consciousness is discussed by Gamez (2006; 2008a).

61 See Balduzzi and Tononi (2009).

5. Correlates and Theories of Consciousness

1 This is the current definition of a metre.

2 This is our conscious experience of measurement. I could also describe how Randy’s height is measured by the physical brain of the scientist.

3 Eddington (1928, pp. 251-2).

4 This definition of measurement is a simplified version of the one put forward by Pedhazur and Schmelkin (1991), who take it from Stevens (1968). According to Stevens, most measurement involves ‘the assignment of numbers to aspects of objects or events according to one or another rule or convention.’ (Stevens 1968, p. 850). Pedhazur and Schmelkin stress that numbers are assigned to aspects of objects, not to objects themselves. We measure the height, width and colour of a box, not the box itself.

5 For example, if we cannot devise a way of p-describing neurons, then it will be difficult to make inferences about the consciousness of animals with larger neurons, such as snails and insects, and we will not be able to say anything about the consciousness of artificial systems.

6 Intrinsic properties are tied to an object’s physical nature. They are held by an object independently of its spatial and temporal context. Extrinsic properties depend on an object’s relationships with other parts of the world. The chemical composition of a neuron is an intrinsic property. The distance of a neuron from the North Pole is an extrinsic property, which would change if the North Pole changed location.

7 It is conceivable that some CC sets could be 60% correlated with conscious states. Experimental work could determine whether this is the case. C3 will not apply if there are inhibitors of consciousness (see Chapter 4, Footnote 32).

8 Elsewhere I distinguished between type A and type B correlates of consciousness (Gamez 2014c). Type A correlates can e-cause c-reports and are compatible with C4. Type B correlates are not compatible with C4 because they cannot e-cause c-reports.

9 This discussion assumes that there is a 1:1 ratio between CC sets and conscious states.

10 The technologies that are available for measuring the brain are covered in Section 12.2.

11 I have discussed elsewhere how the correlates of consciousness can be experimentally separated out from their prerequisites and consequences and from sensory and reporting structures (Gamez 2014c). Pitts et al. (2014) describe experimental work that attempts to separate the correlates of conscious perception from reporting structures. This is also discussed by Koch et al. (2016).

12 Rees et al. (2002), Tononi and Koch (2008), Dehaene (2014) and Koch et al. (2016) describe some of the research that has been carried out on the neural correlates of consciousness.

13 Any kind of ‘passive’ monitoring or measurement involves the passage of physically conserved quantities from the system to the measuring device. In a natural experiment this is small compared to the exchange of physically conserved quantities within the system, so it does not affect our assumption that the system is a platinum standard.

14 This experiment has been extensively discussed—for example, by Moor (1988), Chalmers (1996a), Van Heuveln et al. (1998) and Prinz (2003). Part of the brain could be replaced by any functionally equivalent system, such as a giant lookup table or the population of China communicating with radios and satellites (Block 2006).

15 We have an intuition that we would notice if, for example, the implantation of the chip removed half of our visual consciousness. But according to the premises of the experiment, our behaviour would be identical, so nothing in our thoughts or speech would reflect this change in consciousness. If the implanted chip did affect our consciousness we would not be cognitively aware of the change and it would not affect our ability to perceive and respond to the world. We would be like people with anosognosia (see Chapter 4, Footnote 27), with the difference that our sight and bodies would be working perfectly, so no external observer could detect the change in our consciousness.

16 It might be argued that neurons die all the time, so surely replacing one neuron with silicon should not affect our assumption that the brain is a platinum standard? And so on with two neurons, three neurons, until the entire brain has been replaced. Chalmers’ (1995a) fading and dancing qualia argument proceeds along these lines. One problem with this argument is that it is based on the invalid assumption that we can imagine the relationship between consciousness and the brain (see Chapter 3). Another problem is that the brain can be sensitive to individual spikes, so the replacement of individual neurons could affect its consciousness. For example, a single neuron could individually encode an abstract concept or make a significant contribution to a population code. If this neuron was part of a CC set, then its replacement with a silicon chip could alter the associated conscious state.

17 The assumption that brains with implanted chips are conscious is equivalent to the assumption that functionalism is true. This brings in all of the problems with computation and information theories of consciousness that are discussed in Chapters 7 and 8.

18 Popper (2002, pp. 279-80).

19 Tononi (2008, p. 217). I also could have quoted Dehaene: ‘Only mathematical theory can explain how the mental reduces to the neural. Neuroscience needs a series of bridging laws, analogous to the Maxwell-Boltzmann theory of gases, that connect one domain with the other. This is no easy task: the “condensed matter” of the brain is perhaps the most complex object on earth. Unlike the simple structure of a gas, a model of the brain will require many nested levels of explanation. In a dizzying arrangement of Russian dolls, cognition arises from a sophisticated arrangement of mental routines or processors, each implemented by circuits distributed across the brain, themselves made up of dozens of cell types. Even a single neuron, with its tens of thousands of synapses, is a universe of trafficking molecules that will provide modelling work for centuries.’ (Dehaene 2014, p. 163).

20 I have discussed the need for c-theories elsewhere (Gamez 2012b).

21 The search for c-theories is closely related to the attempt to discover the relationship between brain activity and behaviour. Computational methods could also be used to study this relationship (see Section 5.6). However, c-theories might be based on non-neural structures in the brain, such as novel materials, haemoglobin and electromagnetic waves (see Section 6.2 and Section 6.3), that would not be required by theories that describe the relationship between neuron activity and external behaviour.

22 ‘Mathematics’ should be interpreted in a broad sense that includes computer algorithms.

23 This example and its intensity values are purely illustrative. More work needs to be done on the conversion of c-reports into c-descriptions that record the intensity of different aspects of conscious experience. This could draw on previous work in psychophysics—for example, Gescheider (1997).
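
A purely hypothetical sketch of what such a c-description might look like (all names and values are invented for illustration):

```python
# One conceivable machine-readable form for a c-description: each aspect of
# the reported experience is paired with an intensity value.
c_description = {
    "subject": "S1",
    "time": "t1",
    "aspects": [
        {"content": "red hat", "modality": "visual", "intensity": 0.8},
        {"content": "bell", "modality": "auditory", "intensity": 0.4},
    ],
}

for aspect in c_description["aspects"]:
    print(aspect["content"], aspect["intensity"])
```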

24 This is an unashamedly Popperian approach to the science of consciousness. Some would argue that Popper (2002) presents an outmoded account of the philosophy of science, which should be replaced by Kuhn (1962) at least, or perhaps Feyerabend (1975) or Latour (1987). Some of these later ‘relativist’, ‘constructivist’ and ‘postmodern’ accounts reject the possibility of scientific progress altogether. However, if we are attempting to understand how a science of consciousness can be developed, we need a model of what science is. And I would argue that Popper provides a carefully thought out and convincing account of what good scientific practice should be. Other philosophies of science can be used to interpret the science of consciousness, but many of them are considerably less useful as guiding principles than Popper—how (or why) would we develop a science of consciousness based on Feyerabend or Latour?

25 C-theories describing brute regularities have some similarity with Chalmers’ psychophysical laws: ‘Where we have new fundamental properties, we also have new fundamental laws. Here the fundamental laws will be psychophysical laws, specifying how phenomenal (or protophenomenal) properties depend on physical properties. These laws will not interfere with physical laws; physical laws already form a closed system. Instead they will be supervenience laws, telling us how experience arises from physical processes. We have seen that the dependence of experience on the physical cannot be derived from physical laws, so any final theory must include laws of this variety.’ (Chalmers 1996a, p. 127). However, this book suspends judgment about some of the metaphysical substance-based theories, and the relationship between c-descriptions and p-descriptions is symmetrical, not a causal relationship in which consciousness arises from physical processes.

26 For example, Tononi’s (2008) information integration theory is based on his first-person observations about the differentiation and integration of consciousness.

27 Humphreys (2004, p. 90).

28 There is also a more general question about whether one human brain can fully understand another—one might think that a brain could only be understood by a larger and more complicated system. This issue can potentially be addressed by using the world as external memory (Clark 2008; O’Regan 1992). This would only work if our understanding of the brain can be broken down into interrelated modules. For example, we could develop a detailed understanding of how a neuron works, write it down, and then work on a different aspect of the problem, until we had written down everything about the brain. Although the final solution could not be comprehended by a single brain all at once, one or more brains could check the validity of each part and the links between them.

29 A substantial amount of research has been carried out on the use of computers for scientific discovery (Dzeroski and Todorovski 2007). Robotic systems have been developed that can carry out experiments automatically (Sparkes et al. 2010), and there has been research on the automatic discovery of differential equations that describe the behaviour of dynamic physical systems (Schmidt and Lipson 2009). This work suggests how consciousness could be scientifically studied in the future.

30 For example, Billeh et al. (2014) have developed a way of identifying functional circuits in recordings of spiking activity from hundreds of neurons. Using this approach it might be possible to develop a way of describing brain activity in terms of interacting circuits, which could be identified automatically by a computer.

31 The Blue Brain Project has developed detailed models of a cortical column (Markram 2006) and this work is being continued on a larger scale in the Human Brain Project (www.humanbrainproject.eu). Larger, less detailed models have also been built of human and animal brains (Ananthanarayanan et al. 2009; Izhikevich and Edelman 2008). The feasibility of scanning and simulating a human brain is discussed in Chapter 11, Footnote 14. None of the current models generates behaviour that is similar to c-reports and most of them do not include non-neural components of the brain, such as glia.

32 The ‘c-reports’ of a simulated brain could not be used to measure its consciousness because a neural simulation is not a platinum standard system.

33 Simulations are very different from real brains, so this would primarily be a test of the methodology. However, this type of work might lead to c-theories that could be tested on real brains.

34 This methodology could also be used to solve the more general problem of the relationship between an organism’s brain activity and all of its behaviour (both conscious and non-conscious). Once the behaviour had been formally described, computers could be used to discover relationships between the brain activity and behaviour. This approach could be prototyped on a very simple system, such as a simulated C. elegans.

6. Physical Theories of Consciousness

1 The pattern/material distinction captures a useful way of speaking about the physical world at different spatial scales. However, one can also argue that elementary wave-particles are the only material and all other ‘materials’ are patterns in elementary wave-particles. Physical c-theories can be expressed using either interpretation of the pattern/material distinction.

2 A neuron’s distance from the North Pole is a property of the neuron and the North Pole combined—it changes when the location of the North Pole changes. The brain has intrinsic properties that enable it to reflect particular frequencies of electromagnetic waves. The set of electromagnetic waves that is actually reflected depends on the brain’s properties and on the properties of the waves. If electromagnetic waves altered their nature, the brain’s reflectance of electromagnetic waves would change.

3 I have discussed this issue in more detail in a paper that distinguishes between type A correlates of consciousness that meet constraint C4, and type B correlates of consciousness that do not (Gamez 2014c).

4 A quantum theory of consciousness has been put forward by Hameroff and Penrose (1996). Electromagnetic theories of consciousness have been put forward by Pockett (2000) and McFadden (2002).

5 The potential connection between consciousness and a global workspace was first elaborated by Baars (1988). A number of computational and neural models of a global workspace have been built (Franklin 2003; Gamez et al. 2013; Shanahan 2008; Zylberberg et al. 2010) and a substantial amount of research has been done on the possibility that a global workspace might be implemented in the brain (Dehaene and Changeux 2011).

6 For example, Barttfeld et al. (2015) and Godwin et al. (2015) describe functional connectivity patterns that are potentially linked to consciousness.

7 See Gamez (2014b).

8 See Tononi and Sporns (2003), Balduzzi and Tononi (2008) and Oizumi et al. (2014). In a physical c-theory Tononi’s algorithms would connect patterns in a particular material to a conscious state. This relationship would only hold for a specific material—the same patterns in a different material would not be linked to consciousness. This is distinct from the use of Tononi’s algorithms to identify information patterns that are linked to consciousness, which is discussed in the next chapter. Liveliness (Gamez and Aleksander 2011), causal density (Seth et al. 2006) and Casali et al.’s (2013) perturbational complexity index can also be re-interpreted as descriptions of patterns in materials that might be linked to consciousness.

9 A formal description of biological structures is required if CC sets contain biological materials and we want to make predictions or deductions (see Chapter 9) about the consciousness of non-biological systems. For example, a formal description of neurons could help us to decide whether a robot controlled by artificial neurons is conscious.

10 Lucretius’ (2007) theory about the soul is similar to this view. He claims that the soul (a combination of spirit [anima, the vital principle] and mind [animus, the intellect]) is a minute particle:

The nature of the mind and spirit is such it must consist
Of stuff composed of seeds that are so negligibly small,
Subtracted from the flesh, they don’t affect the weight at all.
Nor should we think this substance is composed of one thing, neat,
For from the dying there escapes a slight breath mixed with heat,
While heat, in turn, must carry air along with it; for there
Is never any heat that is not also mixed with air,
Because heat’s substance, being loose in texture, has to leave
Space for many seeds of air to travel through its weave.
This demonstrates the nature of the mind’s at least threefold –
Even so, these three together aren’t enough, all told,
To generate sensation, since the mind rejects the notion
Any of these is able to produce sense-giving motion,
Or the thoughts the mind itself turns over. And so to these same
Three elements, we have to add a fourth that has no name.
There is nothing nimbler than this element at all –
Nothing is as fine as this is, or as smooth or small.
It’s this that first distributes motions through the frame that lead
To sense, since this is first to bestir, composed of minute seed.

Lucretius (2007, pp. 78-9)

11 At most, people have invoked the known properties of quantum mechanics, which are unlikely to play much of a role (Wilson 1993).

12 Novel materials will not help us to imagine the relationship between consciousness and the physical world. If they are similar to the rest of the physical world, then they will be invisible, and we will be unable to make an imaginative transition from the invisible novel material to conscious experiences (see Section 3.3). If the novel material is more like a spark of consciousness embedded in matter, then we will be able to imagine the material, but we will find it difficult to imagine how it relates to other conscious experiences (see Section 3.4).

13 To make this assumption work it will be necessary to find a way of comparing the strength of patterns in different materials. For example, how can you compare the strength of electromagnetic field patterns (measured in volts) with blood flow patterns (measured in cm/s)?
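
One conceivable (and by no means established) approach is to express each measurement as a dimensionless deviation from that material's own baseline, as in this sketch with invented values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Express pattern strength as a z-score relative to each material's baseline,
# so that volts and cm/s become comparable dimensionless quantities.
field_baseline = rng.normal(0.0, 0.1, 1000)  # electromagnetic field (volts)
flow_baseline = rng.normal(30.0, 5.0, 1000)  # blood flow (cm/s)

def strength(sample, baseline):
    """Dimensionless deviation of a sample from its baseline distribution."""
    return (sample - baseline.mean()) / baseline.std()

print(round(strength(0.3, field_baseline), 1))  # ~3 standard deviations
print(round(strength(45.0, flow_baseline), 1))  # ~3 standard deviations
```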

7. Information Theories of Consciousness

1 Tononi (2008, p. 237).

2 Floridi (2010, p. 1).

3 Floridi uses ‘dedomena’ to describe differences in the physical world that exist independently of us: ‘Dedomena are […] pure data or proto-epistemic data, that is, data before they are epistemically interpreted. As “fractures in the fabric of being” they can only be posited as an external anchor of our information, for dedomena are never accessed or elaborated independently of a level of abstraction […] They can be reconstructed as ontological requirements, like Kant’s noumena or Locke’s substance: they are not epistemically experienced but their presence is empirically inferred from (and required by) experience. Of course, no example can be provided, but dedomena are whatever lack of uniformity in the world is the source of (what looks to information systems like us as) data, e.g. a red light against a dark background.’ (Floridi 2009, pp. 17-8).

4 This notion of an interface is based on Floridi’s level of abstraction: ‘[…] data are never accessed and elaborated (by an information agent) independently of a level of abstraction (LoA) […]. A LoA is a specific set of typed variables, intuitively representable as an interface, which establishes the scope and type of data that will be available as a resource for the generation of information.’ (Floridi 2009, p. 37). Floridi (2008) describes levels of abstraction in detail.

5 I can also extract the text of Madame Bovary from the DRAM voltages by defining a mapping between 011100100110010101100100 and the complete text of Madame Bovary.
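
A minimal sketch of the two interfaces (the Madame Bovary mapping is abbreviated to a placeholder):

```python
# The same bit pattern read through an 8-bit ASCII interface yields 'red',
# while a custom interface can map it onto any text whatsoever.
bits = "011100100110010101100100"

def ascii_interface(b):
    return "".join(chr(int(b[i:i + 8], 2)) for i in range(0, len(b), 8))

print(ascii_interface(bits))  # red

custom_interface = {bits: "<the complete text of Madame Bovary>"}
print(custom_interface[bits])
```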

6 A time-indexed interface uses a combination of time and the system’s state to extract information. Suppose an elementary wave-particle shifts between two states: you can interpret the appearance of state 1 at time 1 as ‘r’, the appearance of state 1 at time 3 as ‘e’, and so on.

7 The notion of a custom interface is inspired by discussions about whether physical systems implement finite state automata (Bishop 2002; Bishop 2009; Chalmers 1996b; Chrisley 1995b; Putnam 1988).

8 According to Floridi’s (2009) formulation of the general definition of information, σ is an instance of information, understood as semantic content, if and only if: 1) σ consists of n data, 2) the data are well formed, 3) the well-formed data are meaningful. My earlier work used this distinction to analyze Tononi’s information integration theory of consciousness (Gamez 2011; Gamez 2016). I am indebted to Laurence Hayes for helping me to see that the data/information distinction is unworkable.

9 See Shannon (1948). Tononi’s (2004; 2008) information integration theory of consciousness is based on this interpretation of information.
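
A minimal illustration of Shannon's measure (the entropy of a discrete distribution in bits):

```python
import math

# A fair coin carries 1 bit of information per toss; a biased coin carries
# less, because its outcomes are more predictable.
def entropy(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))            # 1.0 bit
print(round(entropy([0.9, 0.1]), 2))  # 0.47 bits
```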

10 There are several versions of Tononi’s information integration theory of consciousness (Balduzzi and Tononi 2008; Oizumi et al. 2014; Tononi 2004). Tononi (2008) gives a good overview and his book offers a simple introduction without the mathematical treatment (Tononi 2012). Experimental work on the information integration theory of consciousness has been carried out by Lee et al. (2009), Massimini et al. (2009), Ferrarelli et al. (2010), and Casali et al. (2013).

11 Barrett (2014) suggests how Tononi’s information integration theory of consciousness can be interpreted as a physical c-theory.

12 Tononi (2008) suggests that his algorithm could be applied to all possible levels of the brain—the level at which it reaches a maximum would be the one that is correlated with consciousness.

13 It might be objected that if information is subjective, then surely it must be present in the brain? Where else can subjective things be? However, the neural mechanisms (and electromagnetic fields etc.) that are active when our brain applies an interface to a physical system are purely physical processes—they do not have special informational properties that are absent from the rest of the physical world. These physical mechanisms are associated with bubbles of experience in which colours, abstract concepts and 1s and 0s appear. The presence of 1s and 0s in consciousness does not prove that there are 1s and 0s in our physical brains, any more than the presence of red in consciousness proves that there is red in our physical brains.

14 It could be argued that an observer has to exchange physically conserved quantities with a system to read its state. This issue can be avoided if the observer applies an interface to emissions from the system, such as light patterns from a screen.

15 An information c-theorist might argue that a material implementation of information has e-causal powers. The material holds the pattern of information and this pattern affects future states of the physical system. However, in this case the material must be considered to be implementing every possible information set that can be read from the system. Some of these are contradictory or have no relationship with each other. It is implausible to claim that a potentially infinite collection of disparate information sets are present in the material and lead to its state transitions.

16 The system could also extract information about an earlier state of itself.

17 I have discussed this experiment elsewhere (Gamez 2016). It would only work if the problems with custom-designed interfaces could be addressed.

18 For example, Tononi’s information integration algorithms might be able to identify neuron firing patterns that are linked to consciousness. If this was a physical c-theory, these patterns would not be associated with consciousness when they occurred in other materials.

8. Computation Theories of Consciousness

1 Kentridge (1994, p. 442).

2 The MONIAC was a water computer that was developed by Bill Phillips to model the UK economy.

3 The solution is not necessarily optimal. A video can be found here: www.youtube.com/watch?v=dAyDi1aa40E

4 This discussion sets aside issues about time slicing and parallel processing, which do not affect the central argument. When a general-purpose computer is parallel processing it is executing multiple special-purpose computers simultaneously. When a general-purpose computer is time slicing it is working as a particular special-purpose computer for short periods of time.

5 This used to be a common practice—see Grier (2005).

6 Some of the computations that might be members of CC sets are discussed by Cleeremans (2005). Jackendoff (1987) and Bor (2012) set out computational c-theories and Metzinger (2003) gives informational-computational interpretations of his constraints on conscious experience.

7 Computation c-theories are similar to a philosophical position known as functionalism, which claims that functions are the sole members of CC sets. Putnam (1975) was one of the key advocates of this position, which he later abandoned; Shagrir (2005) gives a good overview.

8 Standard orreries do not include the date—this could easily be added.

9 Strictly speaking, information cannot be stored in the physical world. To store information we make changes to the physical world that are defined by an interface. At a later point in time we access the same part of the physical world through the same interface and the information reappears.

10 Horsman et al. (2014) give a good description of this interpretation of computing.

11 This is based on the first example in Section 7.1.

12 This is true of any physical system because an interface can be custom-designed to extract a given sequence of information states from any sequence of physical states (Putnam 1988). The simplest method would be to map unique states onto the required information, or a clock could be used to handle repeated physical states. This mapping can only be done retrospectively unless one has a good predictive model of how the system’s physical states will change.
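
A minimal sketch of the simplest method described in this note (all states and target values are invented):

```python
# Retrospectively map a recorded sequence of unique physical states onto an
# arbitrary target sequence of information states (Putnam-style).
physical_states = ["s17", "s42", "s03", "s99"]  # any recorded unique states
target_states = ["0", "1", "1", "0"]            # the information we want

interface = dict(zip(physical_states, target_states))
print([interface[s] for s in physical_states])  # ['0', '1', '1', '0']
```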

13 For example, it has been claimed that everything is a cellular automaton (Wolfram 2002; Zuse 1970) or that physical reality arises from the posing of yes-no questions—Wheeler’s (1990) ‘It from bit’ hypothesis.

14 This theory of implementation will have to map spatiotemporal physical structures onto computations. It cannot be based on information, which only exists relative to a human-defined interface.

15 Putnam (1988) and Bishop (2002; 2009) discuss the problems with finite state automata.

16 I have described the problems with combinatorial state automata in a paper (Gamez 2014a) that raises more general problems with theories of implementation.

17 Piccinini (2007) puts forward a theory of implementation based on string processing.

18 Theories of implementation based on cellular automata have been put forward by Zuse (1970), Wolfram (2002) and Schule (2014). Piccinini (2012) gives a good overview of different theories of implementation and their problems.

19 Computational and functional concepts can be useful ways of describing physical correlates of consciousness. Suppose someone claims that consciousness is linked to a global workspace in the brain (Baars 1988). If this was interpreted as a physical c-theory, the global workspace would just be a convenient way of describing a pattern in spiking neurons. The global workspace would not form a CC set by itself and it would not be correlated with consciousness if it was implemented in a different physical system.

9. Predictions and Deductions about Consciousness

1 Popper (2002, p. 18).

2 Research on change blindness suggests that we cannot accurately recall earlier conscious states. See, for example, Simons and Rensink (2005).

3 I am indebted to Ron Chrisley for this suggestion. To convert a c-description into a virtual reality file (for example, an X3D XML file) it is necessary to model the connection between virtual environments and states of consciousness. This will have to take the limited resolution of the senses and the active nature of the visual system into account. There will also be a one-to-many mapping between a c-description of a conscious state and virtual environments that could produce this state. This method of validating consciousness is limited to online consciousness that is evoked by sensory input. Many aspects of consciousness, such as body states and emotions, are difficult to control with virtual reality technology.
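
As a rough illustration of the format (the c-description structure below is invented for the example; a real converter would need a model of the senses and would face the one-to-many mapping just described), a toy c-description could be rendered as a minimal X3D scene:

    # Toy sketch: rendering a much simplified, invented c-description
    # of a visual experience as a minimal X3D scene.

    c_description = [
        {'object': 'sphere', 'colour': '1 0 0', 'position': '0 0 -5'},
    ]

    def to_x3d(experiences):
        shapes = ''
        for e in experiences:
            shapes += (
                f"    <Transform translation='{e['position']}'>\n"
                f"      <Shape>\n"
                f"        <Appearance><Material diffuseColor='{e['colour']}'/></Appearance>\n"
                f"        <Sphere radius='1'/>\n"
                f"      </Shape>\n"
                f"    </Transform>\n"
            )
        return ("<X3D profile='Interchange' version='3.3'>\n"
                "  <Scene>\n" + shapes + "  </Scene>\n</X3D>")

    with open('c_description.x3d', 'w') as f:
        f.write(to_x3d(c_description))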

4 This is similar to the approach used in experiments on brain reading. For example, in one set of experiments by Nishimoto et al. (2011) the subjects watched a video while their brain activity was recorded using fMRI. The scientists used this data to build a model of the spatiotemporal structures in their brains that were activated by the video. This model was then used to reconstruct the video that the subjects were watching from their brain activity. The subjects could then compare their consciousness of the reconstructed videos with their consciousness of the original video.
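
The decoding step can be sketched on synthetic data (this is an illustrative linear decoder, not the motion-energy model used by Nishimoto et al.):

    # Toy stimulus reconstruction: fit a linear map from 'brain'
    # responses back to 'stimulus' features on synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    stimulus = rng.normal(size=(200, 10))   # 200 time points, 10 features
    encoding = rng.normal(size=(10, 50))    # unknown brain encoding
    brain = stimulus @ encoding + 0.1 * rng.normal(size=(200, 50))

    # Learn a decoding matrix by least squares on training data...
    decoder, *_ = np.linalg.lstsq(brain[:150], stimulus[:150], rcond=None)
    # ...and reconstruct the stimulus from held-out brain activity.
    reconstruction = brain[150:] @ decoder
    error = np.mean((reconstruction - stimulus[150:]) ** 2)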

5 This testing method is easier if there is a 1:1 relationship between conscious states and physical states. Otherwise, the c-theory will map each conscious state onto multiple potential physical states.

6 There has been much speculation about whether a head remains conscious after it has been cut off. Dash (2011) discusses some of the early experiments that were carried out on humans. One study on rats suggests that they retain consciousness for several seconds after decapitation, and a wave of potentially conscious activity occurs approximately 50 seconds later (van Rijn et al. 2011). The EEG traces of dying humans show a similar pattern on a longer time scale (Chawla et al. 2009).

7 This is the classic problem raised by Nagel (1974) about what it is like to be a bat. From the perspective of this book, this is not a problem with the irreducibility of subjective experience, but with our limited ability to transform our bubble of experience into a different bubble of experience. There is no philosophical problem about deducing a c-description of a bat’s consciousness from a p-description of its physical state—just a problem with our ability to imaginatively comprehend the c-description we have generated.

8 One potential solution would be to create virtual reality environments that enable us to experience aspects of a bat’s consciousness. Alternatively Chrisley (1995a) has suggested how we could use robotic systems to specify the non-conceptual contents of a bat’s consciousness. This approach has been demonstrated by Chrisley and Parthemore (2007), who used a SEER-3 robot to specify the non-conceptual content of a model of perception based on O’Regan and Noë’s (2001) sensorimotor theory.

9 This will only be possible if we have a flexible and general c-description format (see Section 4.9).

10 For example, deductions could help us to breed or genetically engineer food animals that are not conscious.

11 D could also be a constant pattern or a partially correlated pattern (see Section 6.4).

12 The concept of similar physical contexts needs to be worked out in detail. Normally functioning adult human brains exhibit a great deal of variability in their patterns and materials, so a statistical definition of this normal variability is required to define a physical context precisely.
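
One way this might be made precise is sketched below; the measured parameter, the sample values and the threshold of two standard deviations are all illustrative assumptions:

    # Sketch: defining 'similar physical context' statistically.
    from statistics import mean, stdev

    # Hypothetical measurements of some brain parameter across
    # normally functioning adult human brains.
    population = [0.91, 0.88, 0.95, 0.90, 0.87, 0.93, 0.89]

    def in_normal_context(value, samples, k=2.0):
        """A value counts as a similar physical context if it lies
        within k standard deviations of the population mean."""
        mu, sigma = mean(samples), stdev(samples)
        return abs(value - mu) <= k * sigma

    assert in_normal_context(0.92, population)       # similar context
    assert not in_normal_context(0.40, population)   # different context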

13 Conservative deductions could be made about unreportable consciousness in a platinum standard system during an experiment on consciousness. Suppose a brain contains two identical structures, one connected to c-reports and one not. If these structures were always present together, pilot studies would identify their union as the correlate. However, if the structure that was disconnected from c-reports came and went intermittently, then we could exclude it as the correlate. At a later point in time when both structures were present we might deduce that there are two consciousnesses in the platinum standard system, only one of which is reportable. This would violate assumptions A1, A2 and A6. However, we would still need A1, A2 and A6 to identify the correlate that was used to make the deduction. An example of this type of reasoning can be found in Lamme (2006; 2010), who uses paradigmatic cases of reportable consciousness to establish the link between consciousness and recurrent processing, and then makes inferences about the presence of inaccessible phenomenal consciousness.

14 In previous work I proposed that indeterminacy envelopes could be used to make liberal deductions about consciousness (Gamez 2012a). I have replaced this with the framework that is described in this book.

15 The conservative/liberal distinction is based on a binary opposition between similar and different physical contexts. The differences between physical contexts could also be expressed as a continuous value, which would correspond to the degree of liberality of the deduction.

10. Modification and Enhancement of Consciousness

1 James (1985, p. 388).

2 A fused consciousness would be separately created in my brain and your brain—there would not be any merging of our actual consciousnesses.

3 As explained in Section 2.5, the overall level of consciousness is something like the average level of intensity of the properties and objects in a bubble of experience. This can be reduced with anaesthetics, such as propofol, or by a blow to the head. Chemicals, such as caffeine or LSD, can increase the overall level of intensity. It can also be increased by emotionally intense situations, such as a car crash.

4 Sensory input is the main method that we use to change the contents of our consciousness. If I want an elephant in my bubble of experience, then I go to the zoo and look at an elephant. Hallucinogenic drugs have a strong effect on contents and we have some control over contents in lucid dreams and imagination.

5 This type of experience is well documented (Crookall 1972) and it can be induced through body trauma, mental exercises (Harary and Weintraub 1989; Ophiel 1970), or chemicals, such as ketamine (Wilkins et al. 2011). Out-of-body experiences can also occur in brain-damaged patients (Blanke and Arzy 2005) and psychology experiments can induce the illusion that part or all of our bodies are in a different location (Ehrsson 2007). There is no compelling evidence to suggest that people who are having an out-of-body experience can report information about the physical world that has not been obtained through the senses of their physical body (Alvarado 1982; Blackmore 2010).

6 Sensory manipulation can alter the perceived size of our body in relation to our environment (van der Hoort et al. 2011). Muscimol (found in the mushroom Amanita muscaria) is also reported to be capable of producing this effect.

7 Castaneda (1968) describes how he used a combination of mental control and hallucinogens to transform his conscious experience of his body into that of a crow (the truth of his account has been disputed). Phantom limbs demonstrate that our experiences of our bodies are linked to brain activity and are distinct from our actual physical bodies (Gamez 2007, pp. 57-60; Melzack 1990; Melzack 1992). This suggests that the shape of our consciously experienced bodies can be altered by modifying our brains.

8 Sensory input, such as looking at fearful or beloved objects, changes our emotional states. Chemicals, such as cocaine or Prozac, alter the intensity of our emotional states on short or long time scales.

9 The current size of our bubbles of experience is probably linked to the size of our brains. More brain tissue is likely to be required to expand our bubbles of experience without loss of resolution.

10 Chakravarthi and VanRullen (2012) describe experimental evidence for the discrete nature of conscious perception. VanRullen and Koch (2003) have a more general discussion of this issue. There are well-documented examples of people with expanded long-term memories—a condition known as hyperthymesia (Parker et al. 2006). Borges’ (1970) Funes the Memorious is a fictional example.

11 Animals with different senses (for example, bats and fish) are likely to have different sensations. I have suggested elsewhere that conscious sensations might be linked to the neural patterns caused by sensory input, and that our conscious perception of a three-dimensional world could be linked to a combination of sensory and sensorimotor patterns (Gamez 2014b). If this is the case, then a novel sensory pattern would be associated with a novel sensation.

Attempts have been made to create novel sensations. For example, the feelSpace belt gives subjects information about the location of North (Nagel et al. 2005) and magnetic fingertip implants enable people to feel magnetic fields. However, it is not clear whether these devices give people new conscious sensations. This is probably because the novel sensory input is processed through the existing senses, instead of being directly fed into the cortex.

12 We understand the link between changes in sensory input and changes in consciousness, but we do not understand how changes in sensory input lead to changes in the brain that are associated with an altered consciousness. The same is true for imagination and the ingestion of consciousness-modifying chemicals.

13 We would be unlikely to remember some of the conscious states that could be induced in us. Episodic memories regenerate earlier states of our brains. This might not be possible if a CC set is not the consequence of the brain’s own activity.

14 This technology is dramatized in the 1995 film Strange Days. It is different from a virtual reality system, which only mimics the sensory input produced by an environment and has little effect on our conscious experience of our body.

15 Patterns in electromagnetic fields, glia and blood can be indirectly manipulated by changing the neuron activity.

16 See Legon et al. (2014).

17 For example, electrodes have been used to modify the memories of mice (de Lavilleon et al. 2015; Ramirez et al. 2013).

18 For example, see Nikolenko et al. (2007). Electrodes and optogenetics are unlikely to be able to increase a neuron’s firing rate beyond a certain point because of metabolic constraints.

19 Chen et al. (2015) have developed a method for brain stimulation that uses magnetic nanoparticles. Seo et al. (2013) have outlined a design for a wireless brain interface that uses thousands of biologically neutral microsensors to convert electrical signals into ultrasound that can be read outside the brain. This could potentially be extended to deliver signals to the brain.

20 This might be required if we want to expand our spatial and temporal consciousness.

21 Whether a synthetic neuron is a valid member of a CC set will depend on how neurons are p-described (see Section 5.1).

22 Additional neurons would only alter consciousness if CC sets consist of neuron activity patterns or if the additional neurons altered CC sets in some other way—for example, by changing the electromagnetic fields.

23 For example, Yin et al. (2013) have developed a wireless electrode interface that is implanted below the skin.

24 A further problem is that invasive technologies are only allowed on human subjects under very specific conditions. This may change if the safety of these techniques is demonstrated and there is demand or a demonstrable benefit. A workable technology will also be appropriated by the public at large regardless of the safety issues or legal constraints. For example, you can buy tDCS kits on the Internet.

25 Huxley (1965, pp. 71-2).

11. Machine Consciousness

1 Metzinger (2003, p. 618).

2 Searle (1980, p. 424).

3 Machine consciousness is also called artificial consciousness. I have presented a version of this classification of machine consciousness elsewhere (Gamez 2008b). These types overlap with Seth’s (2009) distinction between strong and weak artificial consciousness and have some similarity with Searle’s (1980) distinction between strong and weak artificial intelligence. More information about previous work on machine consciousness is given by Holland (2003), Chella and Manzotti (2007), Gamez (2008b) and Reggia (2013). The International Journal of Machine Consciousness has published many papers on this topic.

4 This is sometimes known as artificial general intelligence (AGI).

5 Arrabales’ (2010) ConsScale ranks systems according to their MC1 consciousness. The Turing test and Harnad’s (1994) variations of it are designed to test whether a system exhibits the full spectrum of human behaviour. There has been a large amount of work on MC1 systems—virtually any computer capable of perception and learning can be interpreted as an MC1 machine.

6 For example, computer models of global workspaces have been built (Franklin 2003; Gamez et al. 2013; Shanahan 2008; Zylberberg et al. 2010).

7 A number of people have used internal models that are updated with sensory data to control robots—for example, Chella, Liotta and Macaluso (2007). Computer models have also been built of imagination (Gravato Marques and Holland 2009) and of sensorimotor theories of consciousness (Chrisley and Parthemore 2007).

8 For example, I have collaborated on a global workspace model implemented in spiking neurons (MC2), which produced human-like behaviour (MC1) in the Unreal Tournament computer game (Gamez et al. 2013).

9 Digital computers that are simulating neurons produce very different electromagnetic fields from biological neurons. Neuromorphic chips use the flow of electrons to model the movement of ions in biological neurons (Indiveri et al. 2011). This type of chip is more likely to produce similar electromagnetic fields to biological neurons.

10 Neurons cultured in a Petri dish have been used to control a virtual animal (Demarse et al. 2001) and a robot (Warwick et al. 2010).

11 I have carried out preliminary experiments that illustrate how deductions can be made about the consciousness of an artificial system (Gamez 2008a; Gamez 2010).

12 The implantation of non-conscious chips that modify CC sets in the brain is covered in Section 10.4. The implantation of chips to study the relationship between consciousness and the physical world is covered in Section 5.4.

13 See Footnote 9.

14 The connections between neurons have traditionally been identified by the laborious method of injecting tracers (Zingg et al. 2014). More promising techniques are starting to emerge that might be able to automatically scan dead human brains. For example, knife-edge scanning microscopes can automatically slice and photograph brain tissue, which enables some of the neurons and connections to be discovered (Mayerich et al. 2008). However, this technique can only identify a limited number of neurons and it cannot reveal the direction of connections. A more promising direction is the automation of electron microscopy to mill and scan blocks of brain tissue. With further development this approach might be able to identify all of the neurons and connections in an adult human brain (Knott et al. 2008). There has also been research into techniques for making dead tissue transparent, which could help us to map the neurons and connections (Yang et al. 2014).

When the neurons and connections have been identified, the next challenge is to simulate them on a computer. The adult human brain has approximately 100 billion neurons and 10^14 connections. Networks with around a billion point neurons and 10^13 connections have been simulated, although much slower than real time (Ananthanarayanan et al. 2009), and the SpiNNaker project is working towards the goal of simulating a billion neurons in real time (Furber and Temple 2007; Rast et al. 2011). One critical question is how much of the neurons’ structure will need to be simulated to reproduce the brain’s large-scale behaviour. If a large amount is needed, then it will take much longer to reach the point at which this can be done in real time. It is also unlikely that we will be able to realize CC sets in artificial systems by simulating neurons. Neuromorphic chips have a greater chance of reproducing the electromagnetic fields of biological neurons, and it should soon be possible to run a million neuromorphic neurons in real time (Benjamin et al. 2014).
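
A back-of-envelope calculation gives a feel for the storage problem alone (the figure of 4 bytes per connection is an assumption; real simulators store more state per synapse):

    # Rough scale of the storage problem. The 4 bytes per connection
    # is an illustrative assumption (a single weight value).
    neurons = 100e9           # ~10^11 neurons in an adult human brain
    connections = 1e14        # ~10^14 connections
    bytes_per_connection = 4  # assumed weight storage only

    total_bytes = connections * bytes_per_connection
    print(f"{total_bytes / 1e12:.0f} TB just to store the weights")  # 400 TB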

15 This possibility has been dramatized in the 2014 film Transcendence. Carbon Copies is a non-profit organization that promotes the scanning and uploading of brains (www.carboncopies.org).

16 Imagine a scenario in which an artificial implant (made from appropriate materials) was added to your brain and the pattern associated with 1% of your consciousness was on the implant, with the rest of the pattern in your brain. The proportion on the implant could be progressively increased until the entire pattern was on the implant. This would still be a process of copying in which the original is progressively lost. At the beginning of this process there would be a consciousness associated with your brain and no consciousness associated with the implant. At the end, a copy of your consciousness would be associated with the implant and your brain would have lost consciousness. In the intermediate cases, there would be some of the original consciousness and some of the copy. This type of gradual replacement of materials happens all the time in the brain as it exchanges atoms with its surroundings. So in practice we cannot avoid the gradual replacement of our consciousness as the material in our brains changes. At best we can minimize such changes—for example, by not agreeing to copies of our consciousness that destroy the original.

The identity of bubbles of experience over time is similar to the identity of physical objects over time. Some changes to a motorbike have minimal impact on our sense of its continuity—for example, replacing the spark plugs. Other changes, such as swapping the chassis or making an atom-for-atom copy, have a bigger effect. When large changes are made, I might prefer the original because of its history—it is my motorbike, the motorbike that I rode around the world, and so on. Other people might not care whether they have the original or an atom-for-atom copy. In a similar way, some people might believe that the arrangement of their bubble of experience is what is important—these people are happy as long as this arrangement exists somewhere. This is equivalent to the atom-for-atom copy of the motorbike. Other people prefer the consciousness that is linked to their brain and believe that they will die when this consciousness ceases, regardless of whether a copy has been made somewhere. This is equivalent to preferring the original motorbike with none of its parts replaced.

17 Kaczynski (1996) and Joy (2000) believe that we will increasingly pass responsibility to intelligent machines until we are unable to do without them—in the same way that we are increasingly unable to live without the Internet today. This might eventually leave us at the mercy of super-intelligent machines who could use their power against us. Kaczynski killed three people and injured twenty-three others to raise awareness of this issue.

18 Most people are concerned about machines that behave like conscious human beings, so I am setting aside the possibility that machines could produce non-conscious external behaviour that threatens humanity.

19 We might be able to produce a MC1 machine by scanning a human brain and simulating it on a computer (see Footnote 14). This would not be any more intelligent than us or any more of a threat to humanity than an intelligent human. However, it would be easier to understand and improve than a human brain, so it could be the starting point for more advanced forms of intelligence. In the medium term it might become possible to run simulations of brains faster than biological brains and to run multiple simulated brains in parallel. Deep learning is another promising method for producing MC1 machines. For example, Mnih et al. (2015) used deep reinforcement learning to train a neural network to play 1980s video games with human-level performance.

20 For example, machines would have to have human level intelligence; they would have to be capable of powering and maintaining themselves for long periods of time; military computers would have to be connected to the Internet and inadequately defended against hackers; etc. Machines would also become a threat if they became good at manipulating human behaviour.

21 Chalmers (2010) has a good discussion of the singularity. Eden et al. (2012) have edited a collection of papers on this topic.

22 If we had a mathematical way of measuring intelligence, then genetic algorithms could be used to create systems with a high value of this measure. A number of universal intelligence measures have been put forward (Hernández-Orallo and Dowe 2010; Hibbard 2011; Legg and Hutter 2007), but I am not aware of any that would be suitable for this task.
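
A minimal sketch of this proposal follows; the fitness function is a stand-in for a universal intelligence measure, which, as noted, we do not currently have:

    # Sketch: a genetic algorithm that evolves systems towards a high
    # value of a (hypothetical) intelligence measure.
    import random

    def intelligence(genome):
        # Stand-in for a universal intelligence measure.
        # Here it is simply the sum of the genes.
        return sum(genome)

    def evolve(pop_size=50, genome_len=20, generations=100):
        population = [[random.random() for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=intelligence, reverse=True)
            parents = population[:pop_size // 2]    # selection
            children = []
            for _ in range(pop_size - len(parents)):
                a, b = random.sample(parents, 2)
                cut = random.randrange(genome_len)  # crossover
                child = a[:cut] + b[cut:]
                i = random.randrange(genome_len)    # mutation
                child[i] = random.random()
                children.append(child)
            population = parents + children
        return max(population, key=intelligence)

    best = evolve()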

23 The construction of a system that can produce something that is more intelligent than itself is extremely challenging. It will not happen by accident, but through many years of laborious trial and error. Papers will be published on prototypes, there will be early versions that are partly functional, and so on. Only when the technology has been tried in many different ways is there any possibility that it could create a super-intelligence.

24 One of the most dangerous computer errors was a malfunction in the Soviet nuclear early warning system in 1983, which almost led to a third world war. Asimov (1952) dramatizes some of the problems with malfunctioning intelligent machines.

25 Sloman (2006).

26 This position is put forward by Moravec (1988) and Asimov (1952).

27 For example, murder entails the premature loss of the victim’s consciousness and creates suffering in the bubbles of experience of the bereaved. On the other hand, switching off the life support of a coma patient is not generally considered wrong if the patient is not conscious and if they have no chance of regaining consciousness.

28 This might be the only way in which consciousness and our cultural traditions could survive the death of the sun in 5.4 billion years. It is unlikely that humans will be able to physically travel beyond our solar system to escape the dying sun. Machines can go much further because they can accelerate faster, feed on light and shut down for thousands of years while travelling.

29 Metzinger (2003, p. 621).

12. Conclusion

1 Popper (2002, p. 94).

2 Metzinger describes the current state of consciousness research as follows: ‘The interdisciplinary project of consciousness research, now experiencing such an impressive renaissance with the turn of the century, faces two fundamental problems. First, there is yet no single, unified and paradigmatic theory of consciousness in existence which could serve as an object for constructive criticism and as a backdrop against which new attempts could be formulated. Consciousness research is still in a preparadigmatic stage. Second, there is no systematic and comprehensive catalogue of explananda. Although philosophers have done considerable work on the analysanda, the interdisciplinary community has nothing remotely resembling an agenda for research. We do not yet have a precisely formulated list of explanatory targets which could be used in the construction of systematic research programs.’ (Metzinger 2003, pp. 116-7).

3 For example, whether consciousness is a non-physical substance, the hard problem, solipsism, zombies, colour inversion and the causal relationship between consciousness and the physical world.

4 In the standard version of dualism there is a bidirectional e-causal relationship between non-physical consciousness and the physical world. This is ruled out by assumption A5.

5 Assumptions A7-A9 are more pragmatic and might not be needed by the science of consciousness.

6 NeuroNexus sells electrodes that can record from 256 locations. It is working to expand this to 1,000 electrodes (Marx 2014).

7 See Ahrens and Keller (2013).

8 For example, Shanahan and Wildie (2012) have proposed a ‘knotty centrality’ measure that might be linked to consciousness.

9 An assumption that a mammalian brain is a platinum standard system will have much less impact on the science of consciousness than a similar assumption about an artificial system.

10 Wittgenstein (1969, remark 94).

11 For example, consider the assumption that all conscious states associated with a platinum standard system are linked to c-reports (A2). We could disprove this assumption if we could show that there are aspects of consciousness that are not accessible through c-reports. But since these aspects of consciousness cannot be accessed, we cannot prove that they do or do not exist.

12 This is the position of Berkeley (1957). Husserl (1960) developed his phenomenological program by suspending commitment to the reality of the physical world.

13 Hume (1993, p. 114).

14 At present the most mathematical theories of consciousness are information c-theories, which can be re-interpreted as physical c-theories. For example, Tononi’s (2008) information integration theory of consciousness can be re-interpreted as a theory about the relationship between neuron activity and consciousness. Causal density (Seth et al. 2006), liveliness (Gamez and Aleksander 2011) and Casali et al.’s (2013) perturbational complexity index can also be reinterpreted as physical c-theories.