Publications

You can also find my articles on my Google Scholar profile.

Transformers Use Causal World Models in Maze-Solving Tasks

Spies, A.F., Edwards, W., Ivanitskiy, M.I., Skapars, A., Räuker, T., Inoue, K., Russo, A., Shanahan, M.

ICLR 2025 World Models Workshop

We identify causal world models in transformers trained on maze-solving tasks, using sparse autoencoders and attention-pattern analysis to examine how these models are constructed, and demonstrate consistency between feature-based and circuit-based analyses.

PDF
Figure from Transformers Use Causal World Models in Maze-Solving Tasks

Structured World Representations in Maze-Solving Transformers

Ivanitskiy, M.I.*, Spies, A.F.*, Räuker, T.* (*equal contribution)

NeurIPS 2023 UniReps Workshop

Transformers trained to solve mazes form linear representations of maze structure, and acquire interpretable attention heads which facilitate path-following.

Figure from Structured World Representations in Maze-Solving Transformers

A Configurable Library for Generating and Manipulating Maze Datasets

Ivanitskiy, M.I., Shah, R., Spies, A.F.

arXiv preprint 2023

A configurable library for generating and manipulating datasets of simple maze-like environments for use with transformers.

Figure from A Configurable Library for Generating and Manipulating Maze Datasets

Sparse Relational Reasoning with Object-Centric Representations

Spies, A.F., Russo, A., Shanahan, M.

ICML 2022 DyNN Workshop (Spotlight)

Assessing the extent to which sparsity and structured (object-centric) representations benefit neural relational reasoning.

Figure from Sparse Relational Reasoning with Object-Centric Representations

Nonlocal Thresholds for Improving the Spatial Resolution of Pixel Detectors

Nachman, B., Spies, A.F.

Journal of Instrumentation 2019

Investigating the use of charge sharing between neighboring pixels in high-energy physics (HEP) sensors to improve spatial resolution and radiation hardness.

PDF
Figure from Nonlocal Thresholds for Improving the Spatial Resolution of Pixel Detectors