Pattern-component modelling disentangles the code and the noise in representational similarity analysis

[R8I8]

This paper proposes an interesting and potentially important extension to representational similarity analysis (RSA), which promises unbiased estimates of response-pattern similarities and more compelling comparisons of representations between different brain regions.

RSA consists in the analysis of the similarity structure of the representations of different stimuli (or mental states associated with different tasks) in a region of interest (ROI). To this end, the similarity of regional response patterns elicited by the different stimuli is estimated, typically by using their linear correlation coefficient across voxels (or neurons or recording sites in electrophysiology). It is often desirable to be able to compare these pattern similarities between different regions. For example, we would like to be able to address whether stimuli A and B elicit more highly correlated response patterns in region 1 or region 2. However, such comparisons are problematic, because the pattern correlations depend on fMRI noise (which might be different between the regions), voxel selection (e.g. selecting additional noisy voxels will reduce the pattern correlation), and unspecific pattern components (e.g. a strong shared component between all stimuli will increase the pattern correlation, with the high correlation not specific to the particular pair of stimuli).
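
The common-component problem is easy to demonstrate. Here is a toy numpy simulation (my own illustration, not from the paper): two stimulus-specific patterns that are in truth unrelated appear substantially correlated once a shared component of sufficient amplitude is added.

```python
# Toy illustration (mine, not the paper's): a shared pattern component
# inflates the correlation between condition patterns.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200
common = rng.standard_normal(n_voxels)   # pattern component shared by all stimuli
a = rng.standard_normal(n_voxels)        # stimulus-specific pattern components
b = rng.standard_normal(n_voxels)

for g in [0.0, 1.0, 3.0]:                # amplitude of the common component
    r = np.corrcoef(g * common + a, g * common + b)[0, 1]
    print(f"common-component amplitude {g}: r = {r:.2f}")
```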

Pattern-component modelling yields estimates of the similarity of representational patterns that are not systematically distorted by noise and common components. Representational pattern similarity is measured here by the correlation across measurement channels (e.g. fMRI voxels) and is plotted as a function of the noise level (horizontal axes) for different amplitudes (shades of gray) of a common pattern component shared by both representational patterns. Figure from Diedrichsen et al. (2011).

When representational dissimilarities (or, equivalently, similarities) are estimated from estimates of response patterns in a multidimensional space, the dissimilarity estimates are positively (or the similarity estimates negatively) biased. This is because the inevitable noise affecting the pattern estimates will typically increase the apparent distance between any two patterns (the probability that noise decreases the distance is 0.5 in 1 dimension and drops rapidly as dimensionality increases).
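
To spell out the bias for the squared Euclidean distance (a sketch of mine, under the simplifying assumption of i.i.d. noise of variance σ² across P voxels):

```latex
\mathbb{E}\left\|\hat{\mathbf{u}}_1-\hat{\mathbf{u}}_2\right\|^2
= \mathbb{E}\left\|(\mathbf{u}_1+\boldsymbol{\epsilon}_1)-(\mathbf{u}_2+\boldsymbol{\epsilon}_2)\right\|^2
= \left\|\mathbf{u}_1-\mathbf{u}_2\right\|^2 + 2P\sigma^2
```

The cross terms vanish in expectation, so squared-distance estimates are inflated by the additive term 2Pσ², and correlation-based similarity estimates are correspondingly deflated.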

Instead of estimating the distances from pattern estimates, the authors therefore propose to estimate the distances from a covariance-component model that captures the pattern variances and covariances across space. The approach requires that each stimulus (or, more generally, each experimental condition) has been repeated multiple times to yield multiple pattern estimates. Whereas simple RSA would consider the average pattern for each stimulus, the authors’ approach models the original trial-by-voxel matrix Y as a linear combination of a set of stimulus-related patterns U (thought to underlie the observed patterns) and noise, and estimates the covariance structure of the patterns. The noise E is assumed to be independent between trials, but there is no assumption of independence of the noise between voxels. This is important because fMRI error time series from nearby voxels within a region are known to be correlated.
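
In compact form, as I read it (a sketch of the structure, not the authors' exact formulation):

```latex
\mathbf{Y} = \mathbf{Z}\,\mathbf{U} + \mathbf{E},
\qquad
\mathbf{G} = \tfrac{1}{P}\,\mathbb{E}\!\left[\mathbf{U}\mathbf{U}^{\top}\right],
\qquad
r_{ij} = \frac{G_{ij}}{\sqrt{G_{ii}\,G_{jj}}}
```

where Y (trials by voxels) is the data, Z assigns trials to conditions, U contains the underlying condition patterns, and E is the noise (independent across trials, spatially correlated across the P voxels). The condition-by-condition matrix G carries the pattern variances and covariances, from which noise-free pattern correlations r_ij can be read off.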

This is an original and potentially important contribution. The core mathematical model appears well developed. The advantages of the method are compellingly demonstrated on simulated data. The paper is well written. However, it requires a number of improvements to ensure that it successfully communicates its important message. (1) The authors should more clearly explain the assumptions their pattern-covariance modelling approach relies upon. (2) The authors should add a section explaining the practical application of the approach. (3) A number of clarifications and didactical improvements, notably to the presentation of the analysis of the real fMRI data, would be desirable. These three major points are explained in detail below.

[This is my original secret peer review of Diedrichsen et al. (2011). Most of the suggestions for improvements were addressed in revision and are no longer relevant.]

MAJOR POINTS

(1) Assumptions and consequences of violations

The advantages of pattern-covariance modelling are well explained. However, the assumptions of this approach should be more clearly communicated, perhaps in a separate section.

  • Does the validity of the approach depend on assumptions about the probability densities of the response amplitudes? Are there any other assumptions about the nature of the response patterns?
  • What are the effects of violations of the assumptions? Please give examples of cases where the assumptions are violated and describe the expected effects on the analysis.
  • As long as statistical inference is performed at the level of the variability across subjects or by using randomisation testing, results might be robust to certain violations. Please clarify if and when this is the case.


(2) Practical application of the new approach

Please add a section explaining how to apply this method to fMRI data, addressing the following questions:

  • Do the authors plan to make matlab code available for the new method? If so, it would be good to state this in the paper.
  • Is there a function that takes the regional data matrix Y, the design matrix Z (including effects of no interest) and perhaps a predictor selection vector for selecting effects of interest as input and returns the corrected correlation (and perhaps Euclidean) distance matrix?
  • Does the method only work with slow event-related designs (with approximately independent trial estimates)?
  • Can we use the method on rapid-event-related designs where we do not have separate single-trial estimates (because single-trial responses overlap in time and multiple trials of the same condition must be estimated together for stability)?
  • What if we have only one pattern estimate per condition, because our design is condition-rich (e.g. 96 conditions as in Kriegeskorte et al. 2008) and rapid event-related?
  • More generally, what are the requirements and limitations of the proposed approach?


(3) Particular clarifications and didactical improvements

In classical multivariate regression, the error of a spatial response-pattern estimate is characterised by a multinormal distribution (with covariance equal to a scaled version of the voxel-by-voxel covariance matrix of the residuals, where the scaling factor reflects the amount of averaging in the case of binary nonoverlapping predictors and, more generally, derives from the sums of squares and products of the design matrix). Couldn’t this multinormal model of the variability of each condition-related pattern estimate be used to obtain an unbiased estimate of the correlation of each pair of pattern estimates? If so, would this approach be inferior or superior to the proposed method, and why?
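
To spell out what I mean (standard multivariate-regression results, not the authors' notation):

```latex
\mathbf{Y} = \mathbf{X}\mathbf{B} + \mathbf{E},
\qquad
\hat{\mathbf{B}} = (\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{Y},
\qquad
\operatorname{Cov}\!\left(\hat{\mathbf{b}}_c\right)
= \left[(\mathbf{X}^{\top}\mathbf{X})^{-1}\right]_{cc}\,\boldsymbol{\Sigma}
```

where b̂_c is the estimated pattern for condition c and Σ is the voxel-by-voxel error covariance, estimable from the residuals.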

Eq. 7: What exactly are the ‘simplifying assumptions’ that allow a to be estimated independently of G by averaging the trial response patterns within conditions?

“The corrected estimate from the covariance-component model is unbiased over a large range of parameter settings.” What are the limits of this range? Is the estimate formally unbiased or just approximately so?

Can question a) “Does the region encode information about the finger in the movement and/or stimulation condition?” be addressed with the traditional and the proposed RSA method? It seems that this would necessitate estimating the replicability of patterns elicited by moving the same finger (and similarly for sensation). It is a typical and important neuroscientific question, so please consider addressing it in the framework of RSA (not just in terms of a possible classifier analysis as in the current draft).
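
For concreteness, here is a minimal numpy sketch (mine, not the authors') of the within- versus between-condition comparison this question seems to require, for the movement condition, say:

```python
# Sketch (mine): fingers are encoded if same-finger patterns across runs
# correlate more strongly than different-finger patterns.
import numpy as np

def finger_information(run1, run2):
    """run1, run2: (n_fingers, n_voxels) pattern estimates from two runs."""
    n = len(run1)
    r = np.corrcoef(np.vstack([run1, run2]))[:n, n:]  # run1-vs-run2 correlations
    within = np.mean(np.diag(r))                      # same finger across runs
    between = np.mean(r[~np.eye(n, dtype=bool)])      # different fingers
    return within - between                           # > 0: finger information
```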

Across different runs, pattern correlations are usually found to be much lower (e.g. Misaki et al. 2010). This phenomenon requires further investigation. The authors suggest error correlations among trials close together in time within a run as the cause. However, I suspect that such error correlations, though clearly present, might not be the major cause. Other candidate causes include scanner drifts and greater head-motion-related misalignment (due to the greater separation in time), which can cause distortions that head-motion correction cannot undo. It would be good to hear the authors’ assessment of these alternative causes.

The notation u_beta[c,1,…4], where c is an element of {1,2} is confusing to me. Shouldn’t it be u_beta[c,d], where c is an element of {1,2}, and d is an element of {1,2,3,4}?

Eq. 8 requires more unpacking. Perhaps a figure with the vertical and horizontal dimensions marked (“task effects: movement vs sensation”, “individual finger effects: (1) movement, (2) sensation”) and arrows pointing from conceptual labels (“shared pattern between all movement trials”, “shared pattern between all sensation trials”, etc.) to the variance components could serve this function.

Figures 1-4 are great.

Figures 6 and 7: This comparison between traditional RSA and the proposed method is not completely successful. Figure 6, showing the traditional approach, is very comprehensible. Figure 7 is cryptic (partly due to the lack of meaningful labeling of the vertical axes). Moreover, the relationship between the traditional and the proposed approach to RSA remains unclear (or at any rate difficult to grasp at a glance). I suggest adding a figure that compares traditional RSA and the proposed method side by side. The top row should show the correlation matrices (sample correlations versus unbiased estimates from the covariance-component model). The next three rows should address the three questions raised in the text: “a) Does the region encode information about the finger in the movement and/or stimulation condition? b) Are the patterns evoked by movement of a given finger similar to the patterns evoked by stimulation of the same finger? c) Is this similarity greater in one region than another?” Results from the traditional and the proposed RSA should be shown for each question to demonstrate how the results appear in both approaches and where the traditional approach falls short.


MINOR POINTS

In Eq. 8, u_beta[1,2] should read u_beta[1,1], I think.

“The decomposition method offers an elegant way to control for all these possible influences on the size of the correlation coefficients. In addition to noise (ε), condition (…), and finger (…) effects (Eq. 7), we also added a run effect.” Should say Eq. 8, I think.

Does U stand for ‘(u)nderlying patterns’ and a for spatial-average ‘(a)ctivation’? It would help to make this explicit.

Figure 6: Please label the vertical axes (with intuitive and clear conceptual labels). Please mark all significant effects. Please add a colorbar (grayscale code for correlation). Legend: “(D) These correlations” Which correlations exactly? (Averaged across sense and move now?)

Figure 7: The vertical axes need to be intuitively labeled. The reader should not have to decode mathematical symbols from the legend to understand the meaning of the bar graphs. Even after a careful read of the legend (and after spending quite a bit of time on the paper), the neuroscientific findings are not easy to grasp here. As a result, the present version of this figure will leave readers preferring traditional RSA (Figure 6), which at least can be interpreted without much effort. Please label the gray and white (“sense” and “move”) bars as in Figure 6.

Imagining and seeing objects elicits consistent category-average activity patterns in the ventral stream

[R8I7]

Horikawa and Kamitani report results of a conceptually beautiful and technically sophisticated study decoding the category of imagined objects. They trained linear models to decode visual image features from fMRI voxel patterns. The visual features are computed from images by computational models including GIST and the AlexNet deep convolutional neural net. AlexNet provides features spanning the range from visual to semantic. A subject is then scanned while imagining images from a novel object category (not used in training the fMRI decoder). The decoder is used to predict the computational-model representation for the imagined category (averaged across exemplars of that category). This predicted model representation is then compared to the actual model representation for many categories, including the imagined one. The model representation predicted from fMRI during imagery is shown to be significantly more similar to the model representation of images from the imagined category than to the model representation of images from other categories.
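
For readers unfamiliar with this style of analysis, the identification step can be summarised in a few lines of numpy (my own sketch; the names are mine, not the authors'):

```python
# Minimal sketch (mine) of the identification step: the feature vector decoded
# from brain activity is compared by correlation with the true feature vectors
# of many candidate categories.
import numpy as np

def identify(decoded, candidate_features):
    """decoded: (n_features,); candidate_features: (n_categories, n_features).
    Returns the index of the best-matching category."""
    z = (decoded - decoded.mean()) / decoded.std()
    c = candidate_features
    cz = (c - c.mean(axis=1, keepdims=True)) / c.std(axis=1, keepdims=True)
    return int(np.argmax(cz @ z / len(z)))  # argmax over Pearson correlations
```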


Figure from Horikawa & Kamitani (2015)

The methods are sophisticated and will give experts much to think about and draw from in developing better decoders. Comprehensive supplementary analyses, which I did not have time to fully review, complement and extend the thorough analyses provided. This is a great study. As usual in our field, a difficult question is what exactly it means for brain computational theory.

A few results that might speak to the computational mechanism of the ventral stream are as follows.

When predicting computational features of *single images* (which was only done for seen, not for imagined objects):

  • Lower layers of AlexNet are better predicted from voxels in lower ventral-stream areas.
  • Higher layers of AlexNet are better predicted from voxels in higher ventral-stream areas.
  • GIST features are best predicted from V1-3, but also significantly from higher areas.

This is consistent with the recent findings (Yamins, Khaligh-Razavi, Cadieu, Guclu) showing that deep convolutional neural nets explain lower- and higher-level ventral-stream areas with a rough correspondence of lower model layers to lower brain areas and higher model layers to higher brain areas. It is also consistent with previous findings that GIST, like many visual feature models, explains significant representational variance even in the higher ventral-stream representation (Khaligh-Razavi, Rice), but does not reach the noise ceiling (the point at which a data set is fully explained), as deep neural net models do (Khaligh-Razavi).

When predicting *category-averages* of computational features (which was done for seen and imagined objects):

  • Higher-level visual areas better predict features in all layers of AlexNet.
  • Higher layers of AlexNet are better predicted from voxels in all visual areas.

This is confusing, until we remember that it is category averages that are being predicted. Category averaging will retain a major portion of the representational variance of category-sensitive higher-level representations, while reducing the representational variance of low-level representations that are less related to categories. This may boost both predictions from category-related visual areas and predictions of category-related model features.
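
A toy simulation (my own, purely illustrative) makes the point:

```python
# Toy simulation (mine, not the authors'): averaging features within categories
# retains most of the variance of a category-related feature and removes most
# of the variance of a feature dominated by exemplar-specific variation.
import numpy as np

rng = np.random.default_rng(1)
n_categories, n_exemplars = 50, 20
signal = np.repeat(rng.standard_normal(n_categories), n_exemplars)  # category component

f_high = signal + 0.3 * rng.standard_normal(n_categories * n_exemplars)  # category-related
f_low = 0.3 * signal + rng.standard_normal(n_categories * n_exemplars)   # exemplar-dominated

for name, f in [("category-related feature", f_high), ("low-level feature", f_low)]:
    cat_means = f.reshape(n_categories, n_exemplars).mean(axis=1)
    print(f"{name}: exemplar variance {f.var():.2f}, "
          f"category-average variance {cat_means.var():.2f}")
```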

Subjects imagined many different images from a given category in an experimental block during fMRI. The category-average imagery activity of the voxels was then used to predict the corresponding category-averages of the computational-model features. As expected, category-average computational-feature prediction is worse for mental imagery than for perception. The pattern across visual areas and AlexNet layers is similar for imagery and perception, with higher predictions resulting when the predicting visual area is category-related and when the predicted model feature is category-related. However, V1 and V2 did not consistently enable imagery decoding into the format of any of the layers of AlexNet. Interestingly, computational features more related to categories were better decodable. This supports the view that higher ventral-stream features might be optimised to emphasise categorical divisions (cf Jozwik et al. 2015).


Suggested improvements

(1) Clarify any evidence about the representational format in which the imagined content is represented. The authors’ model predicts both visual and semantic features of imagined object categories. This suggests that imagery involves both semantic and visual representations. However, the evidence for lower- or even mid-level visual representation of imagined objects is not very compelling here, because the imagery was not restricted to particular images. Instead, the category-average imagery activity was measured. Each category is, of course, associated with particular visual features to some extent. We therefore expect to be able to predict category-average visual features from category-average voxel patterns better than chance. A strong claim that imagery paints low-level visual features into early visual representations would require imagery of particular images within each category. For relevant evidence, see Naselaris et al. (2015).

(2) Go beyond the decoding spin: what do we learn about computations in the ventral stream? Being able to decode brain representations is cool because it demonstrates unambiguously that a certain kind of information is present in a brain region. It’s even cooler to be able to decode into an open space of features or categories and to decode internally generated representations as done here. Nevertheless, the approach of decoding is also scientifically limiting. From the present version of the paper, the message I take is summarised in the title of the review: “Imagining and seeing objects elicits consistent category-average activity patterns in the ventral stream”. This has been shown previously (e.g. Stokes, Lee), but is greatly generalised here and is a finding so important that it is good to have it replicated and generalised in multiple studies. The reason why I can’t currently take a stronger computational claim from the paper is that we already know that category-related activity patterns cluster hierarchically in the ventral stream (Kriegeskorte et al. 2008) and may be continuously and smoothly related to a semantic space (Mitchell et al. 2008; Huth et al. 2012). In the context of these two pieces of knowledge, consistent category-average activity for perception and imagery is all that is needed to explain the present findings of decodability of novel imagined categories. The challenge to the authors: Can you test specific computational hypotheses and show something more on the basis of this impressive experiment? The semantic space analysis goes in this direction, but did not appear to me to support totally novel theoretical conclusions.

(3) Why decode computational features? Decoding of imagined content could be achieved either by predicting measured activity patterns from model representations of the stimuli (e.g. Kay et al. 2008) or by predicting model representations from measured activity patterns (the present approach). The former approach is motivated by the idea that the model should predict the data and lends itself to comparing multiple models, thus contributing to computational theory. We will see below that the latter approach (chosen here) is less well suited to comparing alternative computational models. Why did Horikawa & Kamitani choose this approach? One argument might be that there are many model features and predicting the smaller number of voxels from these many features requires strong prior assumptions (implicit in the regularisation), which might be questionable. The reverse prediction from voxels to features requires estimating the same total number of weights (# voxels * # model features), but each univariate linear model predicting a feature has only # voxels (i.e. typically fewer than # features) weights. Is this why you preferred this approach? Does it outperform the voxel-RF modelling approach of Kay et al. (2008) for decoding?
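
For concreteness, here are the two prediction directions in scikit-learn terms (my sketch, on synthetic data; not the authors' code):

```python
# Sketch (mine) of the two prediction directions with ridge regression. Both
# directions estimate n_voxels * n_features weights in total, but each
# univariate voxel->feature model has only n_voxels weights.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_trials, n_voxels, n_features = 100, 300, 1000
voxels = rng.standard_normal((n_trials, n_voxels))
features = rng.standard_normal((n_trials, n_features))

decode = Ridge(alpha=1.0).fit(voxels, features)  # voxels -> features (this paper)
encode = Ridge(alpha=1.0).fit(features, voxels)  # features -> voxels (Kay et al. style)
print(decode.coef_.shape, encode.coef_.shape)    # (1000, 300) and (300, 1000)
```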

An even more important question is what we can learn about brain computations from feature decoding. If V4, say, perfectly predicted CNN1, this would suggest that V4 contains features similar to those in CNN1. However, it might additionally contain more complex features unrelated to CNN1. CNN1 predictability from V4, thus, would not imply that CNN1 can account for V4. Another example: CNN8 and GIST features are similarly predictable from voxel data across brain areas, and most predictable from V4 voxels. Does this mean GIST is as good a model as CNN8 for explaining the computational mechanism of the ventral stream? No. Even if the ventral-stream voxels perfectly predicted GIST, this would not imply that GIST perfectly predicts the ventral-stream voxels.

The important theoretical question is what computational mechanism gives rise to the representation in each area. For the human inferior temporal cortex, Khaligh-Razavi & Kriegeskorte (2015) showed that both GIST and the CNN representation explain significant variance. However, the GIST representation leaves a large portion of the explainable variance unexplained, whereas the CNN fully explains the explainable variance.

(4) Further explore the nature of the semantic space. To understand what drives the decoding of imagined categories, it would be helpful to see the performance of simpler analyses. Following Mitchell et al. (2008), one could use a text-corpus-based semantic embedding to represent each of the categories. Decoding into this semantic embedding would similarly enable novel seen and imagined test categories (not used in training) to be decoded. It would be interesting, then, to successively reduce the dimensionality of the semantic embedding to estimate the complexity of the semantic space underlying the decoding. Alternatively, the authors’ WordNet distance could be used for decoding.
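
A sketch (mine, with random stand-ins for the corpus-derived embeddings) of the suggested analysis:

```python
# Sketch (mine): decode into a semantic embedding and vary its dimensionality.
# The embeddings here are random stand-ins for corpus-derived vectors of the
# category names (hypothetical data throughout).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_train, n_test, n_voxels, emb_dim = 200, 50, 300, 200
embeddings = rng.standard_normal((n_train + n_test, emb_dim))
# synthetic voxel patterns linearly related to the embeddings, plus noise
voxels = (embeddings @ rng.standard_normal((emb_dim, n_voxels))
          + rng.standard_normal((n_train + n_test, n_voxels)))

for d in [5, 20, 100, 200]:
    emb_d = PCA(n_components=d).fit_transform(embeddings)
    model = Ridge(alpha=1.0).fit(voxels[:n_train], emb_d[:n_train])
    pred, test = model.predict(voxels[n_train:]), emb_d[n_train:]
    hits = [np.argmax([np.corrcoef(p, e)[0, 1] for e in test]) == i
            for i, p in enumerate(pred)]
    print(f"{d} embedding dimensions: identification accuracy {np.mean(hits):.2f}")
```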

(5) Clarify that category-average patterns were used. The terms “image-based information” and “object-based information” are not ideal. By “image-based”, you are referring to a low-level visual representation and by “object-based”, to a categorical representation. Similarly, in many places where you say “objects” (as in “decoding objects”) it would be clearer to say “object categories”. Use clearer language throughout to clarify when it was category-average patterns that were used for prediction (brain representations) and that were predicted (model representations). This concerns the text and the figures. For example, the title of Fig. 4 should be: “Object-category-average feature decoding”. If this is too cumbersome for casual readers too lazy even to read the legends, at least the text of the legend should clearly state that category-average brain activity patterns are used to predict category-average model features.

(6) What are the assumptions implicit in sparse linear regression, and is this approach optimal? L2 regularisation would spread the weights out over more voxels and might benefit from averaging out the noise. Please comment on this choice and on any alternative performance results you may have.
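
A minimal sketch (mine, on synthetic data with a dense weight vector) of the comparison I am asking for:

```python
# Sketch (mine): sparse (L1) versus L2 regularisation for voxel-to-feature
# regression when the signal is spread over many voxels.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(3)
n_trials, n_voxels = 200, 500
true_w = rng.standard_normal(n_voxels) / np.sqrt(n_voxels)  # dense weights
voxels = rng.standard_normal((n_trials, n_voxels))
feature = voxels @ true_w + 0.5 * rng.standard_normal(n_trials)

train, test = slice(0, 150), slice(150, None)
for model in [Lasso(alpha=0.01), Ridge(alpha=10.0)]:
    model.fit(voxels[train], feature[train])
    r = np.corrcoef(model.predict(voxels[test]), feature[test])[0, 1]
    print(type(model).__name__, f"held-out r = {r:.2f}")
```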


Minor points

(7) The work is related to Mitchell et al. (2008), who predicted semantic brain representations of novel stimuli using a semantic space model. This paper should be cited.

(8) “These studies showed a high representational similarity between the top layer of a convolutional neural network and visual cortical activity in the inferior temporal (IT) cortex of humans [24,25] and non-human primates [22,23].”

Ref 24 showed this for both human fMRI and macaque cell-recording data.

(9) “Interestingly, mid-level features were the most useful in identifying object categories, suggesting the significant contributions of mid-level representations in accurate object identification.”

This sentence repeats the same point after “suggesting”.