Can the contents of consciousness be decoded from patterns of integrated information?

[I8R5]

Consciousness is fascinating and elusive. There is “the hard problem” of how the dynamics of matter can give rise to subjective experience. “The hard problem” (Chalmers) is how some philosophers describe their own job, a job that is both appropriately glamorous and career-safe: it is not about to be taken away from them anytime soon, and it is so difficult that lack of progress in our lifetime cannot reasonably be held against them. Brain scientists are left with “the easy problem” of explaining how the brain supports perception, cognition, and action. What’s taking so long?

Transcending this division of labour, at the intersection between philosophy and brain science, researchers are working on what Anil Seth has called “the real problem”:

“how to account for the various properties of consciousness in terms of biological mechanisms; without pretending it doesn’t exist (easy problem) and without worrying too much about explaining its existence in the first place (hard problem).”

There is a range of interesting ideas toward a theory of consciousness, from metaphors like “fame in the brain” to detailed accounts like the “neuronal global workspace” (Baars, Dehaene). To my mind, it remains unclear to what extent existing proposals are alternative theories that are mutually exclusive or complementary descriptions of the same set of phenomena.

One of the more inspiring sets of ideas about consciousness is integrated information theory (Tononi). IIT posits that consciousness arises from the interactions between the parts of a physical system and that it admits of degrees. The degree of consciousness of a system can be measured by an index of the overall interactivity among its parts.

States of heightened consciousness are states in which we experience an enhanced capacity to bring together in the present moment all we perceive and all we know with our needs and goals, toward adaptive action.

Our brains, mysteriously, perform an amazing feat of flexible integration of information across many scales of time (from long-term memories to our current situational model and to the momentary glimpse, in which we sense the states of motion of the objects around us), across our peripersonal space (from the scene surrounding us, in memory, to the fixated point), and across sensory modalities (as we combine what we see, hear, feel, smell and taste into an amodal percept of the scene).

And this is just the perceptual part of the process, which is integrated with our sense of current needs and goals to guide our action. It is plausible that this feat of intelligence, which is unmatched by current AI systems, requires high-bandwidth interactions between the brain components that sustain it. IIT suggests that those pieces of information, from perception or memory, that are currently most richly interrelated are the conscious ones. This doesn’t follow, but it is an interesting idea.

Intuitions about social interaction similarly suggest that interactivity is essential for efficient information processing. For example, it is difficult to imagine a team of people working together optimally efficiently on a complex task, if a subset of them is not integrated, i.e. does not interact with the rest of the group. Of course, there are simple tasks, for which independent toiling is optimal. I’m thinking here of tasks that do not require considering all the relationships between subsets of the input. But for complex tasks, like writing a paper, we might expect substantial interactivity to be required.

In computer science, NP-hard tasks are those for which no trick exists that would enable us to partition the elements into a manageable set of subsets and tackle each in turn. Instead, relationships among elements may need to be considered for all subsets, and the number of subsets is exponential in the number of elements. The elements have to be brought into contact somehow, so we expect a system that can solve the task efficiently to be highly interactive.

A key idea of IIT is that a conscious system should be well integrated in the sense that no matter how we partition it, the partitions are highly interactive. IIT uses information theoretic measures to quantify integrated information. These measures are related to Granger causality. For two components A and B, A is said to Granger-cause B if the past values of A help predict B, beyond what can be achieved by considering only the past of B itself. For the same system composed of parts A and B, a measure of integrated information would assess to what extent taking the interactions between A and B into account enables us to better predict the state of the system (comprising both A and B) than ignoring the interactions.
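The prediction-based logic of Granger causality can be made concrete with a toy simulation (illustrative only; the dynamics and parameters are made up, and this is ordinary least-squares Granger causality, not the φ* machinery discussed below):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-component system in which A drives B with a one-step lag.
T = 5000
a = np.zeros(T)
b = np.zeros(T)
for t in range(1, T):
    a[t] = 0.5 * a[t - 1] + rng.normal()
    b[t] = 0.5 * b[t - 1] + 0.8 * a[t - 1] + rng.normal()

def residual_variance(target, predictors):
    """Variance of least-squares residuals of target given the predictors."""
    X = np.column_stack(predictors)
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ coef)

# Predict b[t] from b's own past alone, then from b's and a's past.
restricted = residual_variance(b[1:], [b[:-1]])
full = residual_variance(b[1:], [b[:-1], a[:-1]])

# A Granger-causes B if adding A's past improves the prediction of B.
gc_a_to_b = np.log(restricted / full)
print(gc_a_to_b)  # clearly positive: A's past helps beyond B's own past
```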

For a more complex system, integrated information measures consider all subsets. The integrated information of the whole is the maximum of the integrated information values of the subsets. In other words, the system inherits its level of integrated information φmax from its most strongly interactive clique of components. Each subset’s interactivity is judged by the degree to which predicting its current state from its past suffers when the subset is partitioned and interactions across the partition are ignored. A system is considered highly interactive if any partitioning greatly reduces an estimate of the mutual information between its past and present states.

Note that to achieve high integrated information, the information flow must not simply spread the information, rendering it redundant across the parts. Rather the information in different parts must be complementary and must be encoded such that it needs to be considered jointly to reveal its meaning.

For example, consider binary variables X, Y, and Z. X and Y are independent uniform random variables and Z = X xor Y, i.e. Z=1 if either X or Y is 1, but not both. Each variable then has an entropy of one bit. X and Y each singly contain no information about Z. Being told X does not tell us anything about Z, because Y is needed to interpret the information X conveys about Z. Conversely, X is needed to interpret the information Y conveys about Z. X and Y together perfectly determine Z. (The mutual information I(X;Z) = 0, the mutual information I(Y;Z)=0, but the mutual information I(X,Y;Z) = 1 bit.)
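The XOR example can be verified numerically by brute-force enumeration of the joint distribution (a self-contained sketch; entropies in bits):

```python
import itertools
import math

# Joint distribution of (X, Y, Z) with Z = X xor Y and X, Y independent
# fair coin flips: each of the four (x, y) combinations has probability 1/4.
joint = {}
for x, y in itertools.product([0, 1], repeat=2):
    joint[(x, y, x ^ y)] = 0.25

def entropy(var_indices):
    """Entropy (bits) of the marginal over the given variable indices."""
    marginal = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in var_indices)
        marginal[key] = marginal.get(key, 0.0) + p
    return -sum(p * math.log2(p) for p in marginal.values() if p > 0)

def mutual_information(a, b):
    """I(A; B) = H(A) + H(B) - H(A, B)."""
    return entropy(a) + entropy(b) - entropy(a + b)

X, Y, Z = 0, 1, 2
print(mutual_information((X,), (Z,)))     # 0.0: X alone says nothing about Z
print(mutual_information((Y,), (Z,)))     # 0.0: Y alone says nothing about Z
print(mutual_information((X, Y), (Z,)))   # 1.0: together they determine Z
```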


Figure | The continuous flash suppression paradigm used by the authors. A stimulus presented to one eye is rendered invisible by a sequence of Mondrian images presented to the other eye.

In a new paper, Haun, Oizumi, Kovach, Kawasaki, Oya, Howard, Adolphs, and Tsuchiya (pp2016) derive some interesting predictions from integrated information theory and test them with electrocorticography, measuring neuronal activity in human patients who have subdural electrodes implanted in their brains. The authors use the established psychophysical paradigms of continuous flash suppression and backward masking to render stimuli that are processed in cortex subjectively invisible, and their representations thus unconscious.

The paper uses the previously described measure φ* of integrated information. This measure uses estimates of mutual information between past and present states of a set of measurement channels. The mutual information is estimated on the basis of multivariate Gaussian assumptions. Computing φ* involves estimating the effects of partitioning the set of channels, by modelling the partitions as independent (i.e. the joint distribution obtains as the product of the partitions’ distributions). φ* is the loss in system past-to-present predictability incurred by the least destructive partitioning.
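To convey the flavour of such a measure, here is a simplified whole-minus-sum-of-parts quantity under Gaussian assumptions (in nats). This is not the exact mismatched-decoding definition of φ* used in the paper, and the toy dynamics are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_mi(x, y):
    """Mutual information (nats) between x and y, assuming joint Gaussianity,
    estimated from sample covariances: 0.5 * log(|Cx| * |Cy| / |Cxy|)."""
    cx = np.atleast_2d(np.cov(x, rowvar=False))
    cy = np.atleast_2d(np.cov(y, rowvar=False))
    cxy = np.atleast_2d(np.cov(np.column_stack([x, y]), rowvar=False))
    d = np.linalg.det
    return 0.5 * np.log(d(cx) * d(cy) / d(cxy))

# Two channels that drive each other with a one-step lag.
T = 20000
z = np.zeros((T, 2))
for t in range(1, T):
    z[t, 0] = 0.4 * z[t - 1, 0] + 0.4 * z[t - 1, 1] + rng.normal()
    z[t, 1] = 0.4 * z[t - 1, 1] + 0.4 * z[t - 1, 0] + rng.normal()
past, present = z[:-1], z[1:]

# Past-present mutual information of the whole system ...
mi_whole = gaussian_mi(past, present)

# ... versus the sum over the two single-channel parts, i.e. with all
# interactions across the partition ignored.
mi_parts = sum(gaussian_mi(past[:, [i]], present[:, [i]]) for i in range(2))

# Positive difference: partitioning loses past-present predictability.
phi_like = mi_whole - mi_parts
print(phi_like)
```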

The paper introduces the concept of the φ* pattern, the pattern of φ* estimates across subsets of components of the system (where electrodes pragmatically serve to define the components). The φ* pattern is hypothesized to reflect the compositional structure of the conscious percept.

Results suggest that stronger φ* values for certain sets of electrodes in the fusiform gyrus, which pick up on face-selective responses, are associated with conscious percepts of faces (as opposed to Mondrian images or visual noise). This association holds even across sets of trials, where the physical stimulus was identical and only the internal dynamics rendered the face representation conscious or unconscious. The authors argue that these results support IIT and suggest that the φ* pattern reflects information about the conscious percept.

 

Strengths

  • The ideas in the paper are creative, provocative, and inspiring.
  • The paper uses well-established psychophysical paradigms to control the contents of consciousness and disentangle conscious perception from stimulus representation.
  • The φ* measure is well motivated by IIT and has been introduced in earlier work involving some of the authors – even if its relationship to consciousness is speculative.

 

Weaknesses

  • The authors introduce the φ* pattern and hypothesize that it reflects the compositional structure of conscious content. However, theoretically, it is unclear why it should be the pattern across subsets of components, rather than simply the pattern across components, that reflects the compositional structure of conscious content. Empirically, results are most parsimoniously summarised by saying that φ* tends to be larger when the content represented by the underlying neuronal population is conscious. The evidence for a reflection of the compositional structure of conscious content in the φ* pattern is weak.
  • It is unclear how φ* is related to the overall activity in sets of neurons selective for the perceptual content in question (faces here). This leaves open the possibility that face selective neurons are simply more active when the face percept is conscious and this greater activity is associated with greater interactivity among the neurons, reflecting their structural connectivity.
  • The finding that the alternative measures, state entropy H and (past-present) mutual information I, are less predictive of conscious percepts does not provide strong constraints on theory, because these measures are not particularly plausible to begin with and no compelling theoretical motivation is given for them.
  • IIT suggests that integrated information across the entire brain supports consciousness. An unavoidable challenge for empirical studies, as the authors appropriately discuss, is the limitation of the φ* estimates to small sets of empirical measurements of brain activity.

 

Particular points the authors may wish to address in revision

(1) Are face-selective populations of neurons simply more active when a face is consciously perceived and φ* rises as an epiphenomenon of greater activity in the interconnected set of neurons?

It is left unclear whether the level of percept-specific neuronal activity provides a comparably good or better neural correlate of conscious content. The data presented have been analysed with more conventional activity-based pattern classification in Baroni et al. (pp2016) and results suggest that this also works. What if the substrate of consciousness is simply strong activity, or activity in certain frequency bands, and φ* just happens to be correlated with those simpler measures in a population of neurons? After all, we would expect an interconnected neuronal population to exhibit greater dynamic interactivity when it is strongly driven by a stimulus. The key challenge left unaddressed is to demonstrate that φ* cannot be reduced to this classical neuronal correlate of perceptual content. Do the two tend to be correlated? Can they be disentangled experimentally?

A compelling demonstration would be to show that φ* (or another IIT-motivated measure) captures variance in conscious content that is not explained by conventional decoding features. For example, two populations of neurons – one coding a face, the other a Mondrian – might be equally activated overall by a stimulus containing both a face and a Mondrian, but φ* computed for each population might enable us to predict the consciously perceived stimulus on a trial-by-trial basis.

 

(2) Does the φ* pattern reflect the conscious percept and its compositional structure?

A demonstration that the φ* pattern (across subsets) reflects the compositional structure of the content of consciousness would require an experiment eliciting a wider range of conscious percepts that are composed of a set of elements in different combinations.

The authors’ hypothesis would then have to be compared to a range of simpler hypotheses about the neural correlates of compositional conscious content (NC4), including the following:

  • the pattern of activity across content-selective neural sites
    (rather than the pattern across subsets of sites)
  • the pattern of activity across subsets of sites
  • the connectivity across content-selective neural sites (where connectivity could be measured by synchrony, coherence, Granger causality or any other measure of the relationship between two sites)
  • the connectivity across content-selective neural subsets of sites

This list could be expanded indefinitely and could include a variety of IIT-inspired but distinct NC4s. There are many ideas that are similarly theoretically plausible, so empirical tests might be the best way forward.

In the discussion the authors argue that integrated information has greater a priori theoretical support than arbitrary alternative neural correlates of consciousness. There is some truth to that. However, the theoretical motivation, while plausible and interesting, is not so uniquely compelling that it supports lowering the bar of empirical confirmation for IIT measures.

 

(3) Might the selection of channels by maximum φ* have introduced a bias to the analyses?

I understand that the selection was performed without using the conscious/unconscious trial labels. However, conscious percepts are likely to be associated with greater activity, and φ* might be confounded by greater activity. More generally, selection biases are often complicated, and without a compelling demonstration that there can be no selection bias, it is difficult to be confident. A simple way to rule out selection bias is to use independent data for selection and selective analysis.
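A toy simulation illustrates the point: selecting by a maximum statistic inflates that statistic in the data used for selection, but not in independent data. The simulated channels are pure noise, so any apparent effect is selection bias (this is unrelated to the paper's actual φ*-based selection; all numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure-noise "channels": there is no real effect anywhere, so any apparent
# effect in a selected channel reflects selection bias.
n_channels, n_trials, n_reps = 50, 100, 200
biased, unbiased = [], []
for _ in range(n_reps):
    run1 = rng.normal(size=(n_channels, n_trials))  # data used for selection
    run2 = rng.normal(size=(n_channels, n_trials))  # independent data
    best = np.argmax(run1.mean(axis=1))  # channel with max mean in run 1
    biased.append(run1[best].mean())     # same data: inflated estimate
    unbiased.append(run2[best].mean())   # independent data: hovers around 0

print(np.mean(biased), np.mean(unbiased))
```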

 

– Nikolaus Kriegeskorte

 

Acknowledgement

I thank Kate Storrs for discussing integrated information theory with me.

Discrete-event-sequence model reveals the multi-time-scale brain representation of experience and recall

[I8R7]

Baldassano, Chen, Zadbood, Pillow, Hasson & Norman (pp2016) investigated brain representations of event sequences with fMRI. The paper argues in favour of an intriguing and comprehensive account of the representation of event sequences in the brain as we experience them, their storage in episodic memory, and their later recall.

The overall story is quite amazing and goes like this: Event sequences are represented at multiple time scales across brain regions during experience. The brain somehow parses the continuous stream of experience into discrete pieces, called events. This temporal segmentation occurs at multiple temporal scales, corresponding perhaps to a tree of higher-level (longer) events and subevents. Whether the lower-level events precisely subdivide higher-level events (rendering the multiscale structure a tree) is an open question, but at least different regions represent event structure at different scales. Each brain region has its particular time scale and represents an event as a spatial pattern of activity. The encoding in episodic memory does not occur continuously, but rather in bursts following the event transitions at one of the longer time scales. During recall from memory, the event representations are reinstated, initially in the higher-level regions, from which the more detailed temporal structure may come to be associated in the lower-level regions. Event representations can arise from perceptual experience (a movie here), recall (telling the story), or from listening to a narration. If the event sequence of a narration is familiar, memory recall can help reinstate representations upcoming in the narration in advance.

There’s previous evidence for event segmentation (Zacks et al. 2007) and multi-time-scale representation (from regional-mean activation to movies that are temporally scrambled at different temporal scales; Hasson et al. 2008; see also Hasson et al. 2015) and for increased hippocampal activity at event boundaries (Ben-Yakov et al. 2013). However, the present study investigates pattern representations and introduces a novel model for discovering the inherent sequence of event representations in regional multivariate fMRI pattern time courses.

The model assumes that a region represents each event k = 1..K as a static spatial pattern mk of activity that lasts for the duration of the event and is followed by a different static pattern mk+1 representing the next event. This idea is formalised in a Hidden Markov Model with K hidden states arranged in sequence with transitions (to the next time point) leading either to the same state (remain) or to the next state (switch). Each state k is associated with a regional activity pattern mk, which remains static for the duration of the state (the event). The number of events for a given region’s representation of, say, 50 minutes’ experience of a movie is chosen so as to maximise within-event minus between-event pattern correlation on a held-out subject.
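The stay-or-advance structure can be emulated with a deliberately simplified stand-in (not the authors' implementation): with static event patterns and isotropic noise, the maximum-likelihood segmentation minimises within-event squared deviation, and an exact dynamic program can find it without Baum-Welch fitting:

```python
import numpy as np

def segment(data, K):
    """Split a (time x voxel) pattern time course into K contiguous events,
    minimising the total squared deviation of each pattern from its event
    mean. Solved exactly by dynamic programming over boundary placements."""
    T = len(data)
    csum = np.cumsum(data, axis=0)
    csq = np.cumsum(data ** 2, axis=0)

    def cost(i, j):
        # Within-event sum of squares for a single event spanning i..j-1.
        s = csum[j - 1] - (csum[i - 1] if i > 0 else 0)
        sq = csq[j - 1] - (csq[i - 1] if i > 0 else 0)
        return np.sum(sq - s ** 2 / (j - i))

    best = np.full((K + 1, T + 1), np.inf)
    prev = np.zeros((K + 1, T + 1), dtype=int)
    best[0, 0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, T + 1):
            for i in range(k - 1, j):
                c = best[k - 1, i] + cost(i, j)
                if c < best[k, j]:
                    best[k, j], prev[k, j] = c, i

    # Trace back the K-1 boundaries between events.
    bounds, j = [], T
    for k in range(K, 0, -1):
        j = int(prev[k, j])
        bounds.append(j)
    return sorted(bounds)[1:]  # drop the leading 0

# Synthetic region: three events, each a static pattern plus noise.
rng = np.random.default_rng(0)
patterns = rng.normal(size=(3, 20))
data = np.concatenate([p + 0.1 * rng.normal(size=(10, 20)) for p in patterns])
print(segment(data, 3))  # recovers the true boundaries, [10, 20]
```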

It’s a highly inspired paper and a fun read. Many of the analyses are compelling. The authors argue for such a comprehensive set of claims that it’s a tall order for any single paper to fully substantiate all of them. My feeling is that the authors are definitely onto something. However, as usual there may be alternative explanations for some of the results and I am left with many questions.

 

Strengths

  • The paper is very ambitious, both in terms of brain theory and in terms of analysis methodology.
  • The Hidden Markov Model of event sequence representation is well motivated, original, and exciting. I think this has great potential for future studies.
  • The overall account of multi-time-scale event representation, episodic memory encoding, and recall is plausible and fascinating.

 

Weaknesses

  • Incomplete description and validation of the new method: The Hidden Markov Model is great and quite well described. However, the paper covers a lot of ground, both in terms of the different data sets, the range of phenomena tackled (experience, memory, recall, multimodal representation, memory-based prediction), the brain regions analysed (many regions across the entire brain), and the methodology (novel complex method). This is impressive, but it also means that there is not much space to fully explain everything. As a result there are several important aspects of the analysis that I am not confident I fully understood. It would be good to describe the new method in a separate paper where there is enough space to validate and discuss it in detail. In addition, the present paper needs a methods figure and a more step-by-step description to explain the pattern analyses.
  • The content and spatial grain of the event representations is unclear. The analyses focus on the sequence of events and the degree to which the measured pattern is more similar within than between inferred event boundaries. Although this is a good idea, I would have more confidence in the claims if the content of the representations was explicitly investigated (e.g. representational patterns that recur during the movie experience could represent recurring elements of the scenes).
  • Not all claims are fully justified. The paper claims that events are represented by static patterns, but this is a model assumption, not something demonstrated with the data. It’s also claimed that event boundaries trigger storage in long-term memory, but hippocampal activity appears to rise before event boundaries (with the hemodynamic peak slightly after the boundaries). The paper could even more clearly explain exactly what previous studies showed, what was assumed in the model (e.g. static spatial activity patterns representing the current event) and what was discovered from the data (event sequence in each region).

 

Particular points the authors may wish to address in revision

 (1) Do the analyses reflect fine-grained pattern representations?

The description of exactly how evidence is combined across subjects is not entirely clear. However, several statements suggest that the analysis assumes that representational patterns are aligned across subjects, such that they can be directly compared and averaged across subjects. The MNI-based intersubject correspondence is going to be very imprecise. I would expect the assumption of intersubject spatial correspondence to lower the de facto resolution from 3 mm to about a centimetre. The searchlight was a very big cube, (7 voxels)³ = (2.1 cm)³, so it perhaps still contained some coarse-scale pattern information.

However, even if there is evidence for some degree of intersubject spatial correspondence (as the authors say results in Chen et al. 2016 suggest), I think it would be preferable to perform the analyses in a way that is sensitive also to fine-grained pattern information that does not align across subjects in MNI space. To this end, patterns could be appended, instead of averaged, across subjects along the spatial (i.e. voxel) dimension, or higher-level statistics, such as time-by-time pattern dissimilarities, could be averaged across subjects.

If the analyses really rely on MNI intersubject correspondence, then the term “fine-grained” seems inappropriate. In either case, the question of the grain of the representational patterns should be explicitly discussed.

 

(2) What is the content of the event representations?

The Hidden Markov Model is great for capturing the boundaries between events. However, it does not capture the meaning of the event representations or the relationships between them. It would be great to see the full time-by-time representational dissimilarity matrices (RDMs; or pattern similarity matrices) for multiple regions (for single subjects and averaged across subjects). It would also be useful to average the dissimilarities within each pair of events to obtain event-by-event RDMs. These should reveal when events recur in the movie and how similar different events are in each brain region. If each event were unique in the movie experience, these RDMs would have a diagonal structure. Analysing the content of the event representations in some way seems essential to the interpretation that the patterns represent events.
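The suggested event-by-event RDMs are straightforward to compute from a pattern time course and event labels. Here is a sketch with synthetic data in which one event pattern recurs (all sizes and noise levels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pattern time course: three events of 10 time points each over
# 50 voxels, with events 0 and 2 sharing the same underlying pattern.
base = rng.normal(size=(2, 50))
event_patterns = [base[0], base[1], base[0]]
data = np.concatenate([p + 0.3 * rng.normal(size=(10, 50))
                       for p in event_patterns])
labels = np.repeat([0, 1, 2], 10)

# Time-by-time representational dissimilarity matrix (1 - correlation).
rdm_time = 1 - np.corrcoef(data)

# Event-by-event RDM: average dissimilarity within each pair of events.
rdm_event = np.array([[rdm_time[np.ix_(labels == a, labels == b)].mean()
                       for b in range(3)] for a in range(3)])

# The recurrence shows up as low dissimilarity between events 0 and 2.
print(np.round(rdm_event, 2))
```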

 

(3) Why do the time-by-time pattern similarity matrices look so low-dimensional?

The pattern correlations shown in Figure 2 for precuneus and V1 are very high in absolute value and seem to span the entire range from -1 to 1. (Are the patterns averaged across all subjects?) It looks like two events either have highly correlated or highly anticorrelated patterns. This would suggest that there are only two event representations and each event falls into one of two categories. Perhaps there are intermediate values, but the structure of these matrices looks low-dimensional (essentially 1 dimensional) to me. The strong negative correlations might be produced by the way the data are processed, which could be more clearly described. For example, if the ensemble of patterns were centered in the response space by subtracting the mean pattern from each pattern, then strong negative correlations would arise.

I am wondering to what extent these matrices might reflect coarse-scale overall activation fluctuations rather than detailed representations of individual events. The correlation distance removes the mean from each pattern, but different voxels usually respond with different gains, so overall activation scales the pattern rather than translating it up. When patterns are centered in response space, 1-dimensional overall activation dynamics can lead to the appearance of correlated and anticorrelated pattern states (along with intermediate correlations), as seen here.
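This centering artifact is easy to reproduce in simulation: a single fixed pattern scaled by a fluctuating overall activation, once the mean pattern over time is subtracted, yields time-by-time correlations that cluster near +1 and -1 (a toy demonstration with invented numbers, not the authors' preprocessing):

```python
import numpy as np

rng = np.random.default_rng(0)

# A single fixed spatial pattern (voxel gains), scaled by a fluctuating
# overall activation: the dynamics are purely 1-dimensional, with no
# event-specific patterns at all.
gains = rng.normal(size=100)
activation = rng.uniform(0.5, 2.0, size=60)   # always positive
data = np.outer(activation, gains) + 0.05 * rng.normal(size=(60, 100))

# Raw patterns all correlate near +1. After subtracting the mean pattern
# across time, each pattern becomes roughly (activation - mean) * gains,
# so correlations jump to +1 or -1 depending on the sign of the deviation.
centered = data - data.mean(axis=0)
corr = np.corrcoef(centered)

off = corr[np.triu_indices(60, k=1)]
frac_extreme = np.mean(np.abs(off) > 0.9)
print(frac_extreme)  # most pairs look strongly correlated or anticorrelated
```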

This concern relates also to points (1) and (2) above and could be addressed by analysing fine-grained within-subject patterns and the content of the event representations.


Detail from Figure 2: Time-by-time regional spatial-pattern correlation matrices.
Precuneus (top) and V1 (bottom).

 

(4) Do brain regions really represent a discrete sequence of events by a discrete sequence of patterns?

The paper currently claims to show that brain regions represent events as static patterns, with sudden switches at the event boundaries. However, this is not something that is demonstrated from the data, rather it is the assumption built into the Hidden Markov Model.

I very much like the Hidden Markov Model, because it provides a data-driven way to discover the event boundaries. The model assumption of static patterns and sudden switches are fine for this purpose because they may provide an approximation to what is really going on. Sudden switches are plausible, since transitions between events are sudden cognitive phenomena. However, it seems unlikely that patterns are static within events. This claim should be removed or substantiated by an inferential comparison of the static-pattern sequence model with an alternative model that allows for dynamic patterns within each event.

 

(5) Why use the contrast of within- and between-event pattern correlation in held-out subjects as the criterion for evaluating the performance of the Hidden Markov Model?

If patterns are assumed to be aligned between subjects, the Hidden Markov Model could be used to directly predict the pattern time course in a held-out subject. (Predicting the average of the training subjects’ pattern time courses would provide a noise ceiling.) The within- minus between-event pattern correlation has the advantage that it doesn’t require the assumption of intersubject pattern alignment, but this advantage appears not to be exploited here. The within- minus between-event pattern correlation seems problematic here because patterns acquired closer in time tend to be more similar (Henriksson et al. 2015). First, the average within-event correlation should always tend to be larger than the average between-event correlation (unless the between-event correlation were estimated from the same distribution of temporal lags). Such a positive bias would be no problem for comparing different segmentations. However, if temporally close patterns are more similar, then even in the absence of any event structure, a certain number of events will best capture the similarity among temporally nearby patterns. The inference of the best number of events would then be biased toward the number that best captures the continuous autocorrelation.
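The temporal-autocorrelation concern can be demonstrated with autocorrelated noise containing no event structure at all: an arbitrary segmentation still yields higher within- than between-event pattern correlation (an illustrative simulation with invented parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stationary autocorrelated noise with no event structure: each voxel is an
# independent AR(1) process with lag-1 correlation 0.9 and unit variance.
T, V = 200, 50
data = np.empty((T, V))
data[0] = rng.normal(size=V)
for t in range(1, T):
    data[t] = 0.9 * data[t - 1] + np.sqrt(1 - 0.9 ** 2) * rng.normal(size=V)

corr = np.corrcoef(data)  # time-by-time pattern correlations

# Impose an arbitrary segmentation into 10 equal "events".
labels = np.repeat(np.arange(10), T // 10)
same_event = labels[:, None] == labels[None, :]
off_diag = ~np.eye(T, dtype=bool)

within = corr[same_event & off_diag].mean()
between = corr[~same_event].mean()
print(within, between)  # within clearly exceeds between, despite no events
```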

 

(6) More details on the recall reactivation

Fig. 5a is great. However, this is a complex analysis and it would be good to see this in all subjects and to also see the movie-to-recall pattern similarity matrix, with the human annotations-based and Hidden Markov Model-based time-warp trajectories superimposed. This would enable us to better understand the data and how the Hidden Markov Model arrives at the assignment of corresponding events.

In addition, it would be good to show statistically, that the Hidden Markov Model predicts the content correspondence between movie and recall representations consistently with the human annotations.

 

(7) fMRI is a hemodynamic measure, not “neural data”.

“Using a data-driven event segmentation model that can identify temporal structure directly from neural measurements”; “Our results are the first to demonstrate a number of key predictions of event segmentation theory (Zacks et al., 2007) directly from neural data”

There are a couple of other places, where “neural data” is used. Better terms include “fMRI data” and “brain activity patterns”.

 

(8) Is the structure of the multi-time-scale event segmentation a tree?

Do all regions that represent the same time-scale have the same event boundaries? Or do they provide alternative temporal segmentations? If it is the former, do short-time-scale regions strictly subdivide the segmentation of longer-time-scale regions, thus making the event structure a tree? Fig. 1 appears to be designed so as not to imply this claim. Data, of course, is noisy, so we don’t expect a perfect tree to emerge in the analysis, even if our brains did segment experience into a perfect tree. It would be good to perform an explicit statistical comparison between the temporal-tree event segmentation hypothesis and the more general multi-time-scale event segmentation hypothesis.

 

(9) Isn’t it a foregone conclusion that longer-time-scale regions’ temporal boundaries will match better to human annotated boundaries?

“We then measured, for each brain searchlight, what fraction of its neurally-defined boundaries were close to (within three time points of) a human-labeled event boundary.”

For a region with twice as many boundaries as another region, this fraction is halved even if both regions match all human-labeled events. This analysis therefore appears strongly confounded by the number of events a region represents.

The confound could be removed by having humans segment the movie at multiple scales (or having them segment at a short time scale and assign saliency ratings to the boundaries). The number of events could then be matched before comparing segmentations between human observers and brain regions.

Conversely, and without requiring more human annotations, the HMM could be constrained to the number of events labelled by humans for each searchlight location. This would ensure that the fraction of matches to human observers’ boundaries can be compared between regions.

 

(10) Hippocampus response does not appear to be “triggered” by the end of the event, but starts much earlier.

The hemodynamic peak is about 2-3 s after the event boundary, so we should expect the neural activity to begin well before the event boundary.

 

(11) Is the time scale a region represents reflected in the temporal power spectrum of spontaneous fluctuations?

The studies presenting such evidence are cited, but it would be good to look at the temporal power spectrum also for the present data and relate these two perspectives. I don’t think the case for event representation by static patterns is quite compelling (yet). Looking at the data also from this perspective may help us get a fuller picture.

 

(12) The title and some of the terminology is ambiguous

The title “Discovering event structure in continuous narrative perception and memory” is, perhaps intentionally, ambiguous. It is unclear who or what “discovers” the event structure. On the one hand, it could be the brain, which discovers event structure in the stream of experience. On the other hand, it could be the Hidden Markov Model, which discovers good segmentations of regional pattern time courses. Although both interpretations work in retrospect, I would prefer a title that makes a point that’s clear from the beginning.

On a related note, the phrase “data-driven event segmentation model” suggests that the model performs the task of segmenting the sensory stream into events. This was initially confusing to me. In fact, what is used here is a brain-data-driven pattern time course segmentation model.

 

(13) Selection bias?

I was wondering about the possibility of selection bias (showing the data selected by brain mapping, which is biased by the selection process) for some of the figures, including Figs. 2, 4, and 7. It’s hard to resist illustrating the effects by showing selected data, but it can be misleading. Are the analyses for single searchlights? Could they be crossvalidated?

 

(14) Cubic searchlight

A spherical or surface-based searchlight would be better than a (2.1 cm)³ cube.

 

– Nikolaus Kriegeskorte

 

Acknowledgement

I thank Aya Ben-Yakov for discussing this paper with me.