Projects

Here you can find a list of the projects that I am working on but haven't published yet.

Neural Network Insights in Motor Learning and Rapid Re-Learning

[code]

Summary: Savings is demonstrated by having participants first reach in a null field (NF1) to establish baseline performance (typically straight-line reaches). Next they learn to reach in a force field (FF1), for example a velocity-dependent 'curl field'. Their FF learning is then "washed out" by reaching again in a null field (NF2), which returns behavioural performance to baseline (equivalent to NF1). Following washout they reach in the force field again (FF2). Savings manifests as faster re-learning of the force field in FF2 compared to the learning rate in FF1. Most accounts of savings appeal to an explicit strategic component of learning or to some form of meta-learning. Recent work, such as Sun et al. (2022), appeals to a different idea: high-dimensional neural networks can retain learning-related activity in a subspace that is orthogonal to behavioural-performance-related activity. On this view, "washout" removes only the behavioural expression of learning, while the learning-related changes in the network remain, at least to some extent, so that re-learning proceeds faster by reusing this preserved activity. The aims of this project are: first, to use MotorNet to show that an RNN trained on FF learning and washout demonstrates savings at the behavioural level; second, to analyse network activity to identify activity that is preserved even after washout and contributes to savings (faster re-learning); third, to test which features of a network and/or its training are needed for savings to occur (e.g. network dimensionality, training completeness).
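The velocity-dependent curl field mentioned above applies a force perpendicular to the hand's velocity, with magnitude proportional to speed. A minimal NumPy sketch of that field (the gain value here is an illustrative choice, not a parameter from this project):

```python
import numpy as np

def curl_field_force(velocity, gain=15.0):
    """Velocity-dependent curl field: the force is perpendicular to
    hand velocity, with magnitude proportional to speed.

    velocity : array of shape (2,), hand velocity (x, y) in m/s
    gain     : curl gain in N*s/m; its sign sets the curl direction
    """
    curl_matrix = gain * np.array([[0.0, 1.0],
                                   [-1.0, 0.0]])
    return curl_matrix @ velocity

# A purely rightward reach at 0.3 m/s is pushed straight downward:
f = curl_field_force(np.array([0.3, 0.0]))  # -> [0.0, -4.5]
```

During NF1, NF2 (washout), and baseline trials the gain would simply be set to zero, so the same reaching task can be simulated across all phases by toggling this one parameter.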

Anatomically-informed spatial noise models improve inference for multi-voxel pattern analysis

Authors: Mahdiyar Shahbazi, Lingling Lin, Jörn Diedrichsen

[poster] and [code]

TLDR: We propose an anatomically-informed model to enhance multi-voxel pattern analysis (MVPA), refining the estimation of spatial noise covariance in brain activity patterns. By accounting for 3D/2D distances and cortical depth, our model surpasses traditional methods, improving reliability in functional MRI experiments. This approach provides more accurate insights into information representation in brain activity patterns.

Abstract: Multi-voxel pattern analysis (MVPA) provides a powerful framework for making statistical inferences on the information present in brain activity patterns as measured by functional magnetic resonance imaging (fMRI). Many recent studies suggest that MVPA performance benefits from taking into account the spatial voxel-to-voxel correlations in the measurement noise. However, estimating these noise correlations is challenging due to the limited data points and large voxel counts. To address this issue, it is common practice to shrink the empirical correlation estimate towards its diagonal (i.e., the identity matrix), which biases the estimate towards the incorrect assumption that voxels are independent. We therefore propose an anatomically-informed model of measurement noise in fMRI, which takes into account the distances of voxels in the measurement volume, their distance on the cortical sheet, and the depth at which they sample the cortex. Notably, our model can predict the noise-correlation structure in completely new participants and datasets from different scanners with different resolutions. Our results indicate that it improves the noise correlation estimate when used as a shrinkage target, thereby potentially improving statistical inferences in MVPA.
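Shrinking an empirical covariance estimate toward a structured target, as the abstract describes, amounts to taking a convex combination of the empirical matrix and the target. The sketch below uses the conventional diagonal target (the independence assumption the paper improves on); the anatomically-informed model would supply a distance-based target matrix instead. The fixed shrinkage weight is an illustrative assumption, not the paper's estimator:

```python
import numpy as np

def shrink_covariance(residuals, target=None, lam=0.4):
    """Shrink the empirical noise covariance toward a target matrix.

    residuals : (n_observations, n_voxels) array of measurement residuals
    target    : (n_voxels, n_voxels) shrinkage target; defaults to the
                diagonal of the empirical covariance, i.e. the assumption
                that voxels are independent
    lam       : shrinkage weight in [0, 1]
    """
    emp = np.cov(residuals, rowvar=False)
    if target is None:
        target = np.diag(np.diag(emp))  # conventional diagonal target
    return (1 - lam) * emp + lam * target

rng = np.random.default_rng(0)
resid = rng.standard_normal((20, 100))  # few observations, many voxels
sigma = shrink_covariance(resid)        # (100, 100) regularized estimate
```

With only 20 observations for 100 voxels the raw empirical covariance is rank-deficient; shrinkage pulls the off-diagonal entries toward the target while leaving the voxel variances on the diagonal unchanged.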

Positive Semi-Definite Convolution: A Novel Structural Regularization Technique for Deep Convolutional Networks

Authors: Hosein Hassani, Mahdiyar Shahbazi, Behrad Moniri, Mahdieh Soleymani Baghshah, Hamid Aghajan

[preprint]

TLDR: We introduce a novel regularization technique by enforcing Positive Semi-Definite (PSD) constraints on CNN convolution kernels. Our experiments show enhanced classification accuracy, reduced generalization gap, and improved robustness to adversarial attacks. We discuss incorporating rank constraints and highlight the benefits of PSD constraints for optimizing very deep networks.

Abstract: Various regularization methods have been introduced to improve the training of deep neural networks and increase their generalization capability. In this paper, we propose a novel structural regularization technique through imposing Positive Semi-Definite (PSD) constraints on convolution kernels of deep Convolutional Neural Networks (CNNs). We also introduce a proper initialization scheme for PSD kernels. Our experiments on image classification benchmarks show that utilizing PSD convolutions as a hard regularization constraint enhances the classification accuracy and decreases the generalization gap of networks. We discuss how rank constraints can be incorporated into PSD convolutions and study the effect of such constraints on the number of parameters and network accuracy. We also demonstrate that networks equipped with PSD convolutions are more robust to adversarial attacks. Finally, we show how PSD constraints can also enhance the optimization procedure for very deep networks.
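One standard way to guarantee that a learned matrix stays positive semi-definite throughout optimization is to parameterize it as a Gram factorization, K = A Aᵀ, and train the unconstrained factor A. The paper's actual construction and initialization scheme are not reproduced here; this is only a sketch of the PSD property itself, with a rank constraint obtained by making the factor rectangular:

```python
import numpy as np

def psd_kernel(A):
    """Build a symmetric positive semi-definite kernel K = A @ A.T.

    Gradient descent on the unconstrained factor A keeps K PSD by
    construction; if A has shape (k, r) with r < k, K additionally
    has rank at most r, reducing the effective parameter count.
    """
    return A @ A.T

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))  # unconstrained 3x3 factor
K = psd_kernel(A)                # symmetric 3x3 PSD kernel

eigvals = np.linalg.eigvalsh(K)  # all eigenvalues are non-negative
```

A rank-1 variant, psd_kernel(rng.standard_normal((3, 1))), needs only 3 parameters instead of 9 for a 3x3 kernel, which is the kind of parameter/accuracy trade-off the rank-constrained experiments examine.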

Repeating movement sequences facilitates both effector-dependent and -independent processes

Authors: Mahdiyar Shahbazi, Giacomo Ariani, Andrew Pruszynski, Jörn Diedrichsen

[poster #1] and [poster #2]

TLDR: We found that repeating movement sequences accelerates both initiation and execution, especially within the same hand, indicating the involvement of effector-dependent processes. In our experiments, the sequence-specific repetition effect emerged only for repeated segments longer than two movements.

Abstract: Immediate repetition of a movement sequence leads to faster initiation and execution times; however, it is unclear whether effector-independent cognitive processes, such as working memory, or effector-dependent action-based processes, such as motor planning, are facilitated. To investigate this, we employed a delayed response paradigm instructing human participants to generate sequences of finger movements with either hand. In the first experiment, in some trials, we withheld the information about the performing hand until the go-signal. By varying the sequence length from 1 to 6, we could dissociate how effector-dependent or -independent processes change the triggering and execution time. In the second experiment, from one trial to the next, participants either repeated the same sequence with the same hand, repeated the same sequence with the other hand, or produced a new random sequence. Although cross-hand repetition had a large and significant effect on movement execution, the effect was still smaller than within-hand repetition, suggesting the facilitation of effector-dependent processes in sequence repetition. In the last experiment, we asked whether the repetition effects occur at the level of single movements or of the sequence. Participants produced 11-item sequences in which sub-sequences of variable length (1, 2, 4, 6, 11) could repeat across consecutive trials. We observed the repetition effect on repeated segments only when they were longer than 2 movements, suggesting that the repetition effect is sequence-dependent and that repeated segments can be utilized flexibly.

References

Sun et al. (2022). Cortical preparatory activity indexes learned motor memories. Nature.