My name is Arthur Bražinskas (pronounced Bra-[zh]-inskas). I am a Ph.D. researcher in natural language processing, focusing on applications of deep generative models to abstractive text summarization. I am part of the ILCC group at the University of Edinburgh, supervised by Ivan Titov and Mirella Lapata. I focus on low-resource settings where annotated datasets are scarce yet large amounts of unannotated data are available. In these settings, a model must learn to summarize without direct supervision, or from only a few examples, which mirrors how humans are believed to generalize to related tasks.

I'm interested in statistical (Bayesian) machine learning approaches that model data in terms of random variables, both observed and latent, organized in Bayesian networks. These models have a solid foundation in information theory and have more recently been fueled by neural networks serving as flexible function approximators. Because exact inference in such models is typically intractable, parameter optimization and inference rely on approximate methods; my preference is amortized variational inference, as in the variational autoencoder (VAE).
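To make the amortized-inference idea concrete, here is a minimal toy sketch of a VAE-style ELBO computation with fixed linear "networks". This is purely illustrative: the weights, dimensions, and Gaussian likelihood are my assumptions, not the architecture from any of the papers listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: linear encoder/decoder with fixed random weights.
D, Z = 4, 2                                   # observed and latent dimensions
W_mu = rng.normal(size=(Z, D))                # encoder mean weights
W_logvar = rng.normal(size=(Z, D)) * 0.1      # encoder log-variance weights
W_dec = rng.normal(size=(D, Z))               # decoder weights

def encode(x):
    # Amortized inference: one shared mapping from x to the parameters
    # of the approximate posterior q(z|x) = N(mu, diag(exp(logvar))).
    return W_mu @ x, W_logvar @ x

def kl_diag_gaussian(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ).
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def elbo(x, n_samples=1):
    # Monte Carlo estimate of the evidence lower bound:
    # E_q[log p(x|z)] - KL(q(z|x) || p(z)).
    mu, logvar = encode(x)
    std = np.exp(0.5 * logvar)
    total = 0.0
    for _ in range(n_samples):
        eps = rng.normal(size=Z)
        z = mu + std * eps                    # reparameterization trick
        x_hat = W_dec @ z                     # decoder mean (unit-variance Gaussian)
        total += -0.5 * np.sum((x - x_hat)**2) - 0.5 * D * np.log(2 * np.pi)
    return total / n_samples - kl_diag_gaussian(mu, logvar)

x = rng.normal(size=D)
print(elbo(x, n_samples=8))
```

In a real VAE the linear maps would be neural networks trained by gradient ascent on this ELBO; the reparameterization trick is what lets gradients flow through the sampling step.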

I graduated (MSc with distinction) in artificial intelligence from the University of Amsterdam, the Netherlands, where I specialized in theoretical machine learning and natural language processing. Before joining ILCC, I worked on machine learning modeling at Elsevier, Amazon, and Zalando.

Few-Shot Learning for Opinion Summarization
Arthur Bražinskas, Mirella Lapata, Ivan Titov. In EMNLP 2020.
Unsupervised Opinion Summarization as Copycat-Review Generation
Arthur Bražinskas, Mirella Lapata, Ivan Titov. In ACL 2020.
Embedding Words as Distributions with a Bayesian Skip-gram Model
Arthur Bražinskas, Serhii Havrylov, Ivan Titov. In COLING 2018.