My name is Arthur Bražinskas, and I’m a third-year (and final-year) natural language processing Ph.D. researcher working on abstractive opinion summarization. I’m fortunate to be part of the ILCC group at the University of Edinburgh, supervised by Ivan Titov and Mirella Lapata.

I focus on low-resource settings where annotated datasets are scarce yet large amounts of unannotated data are available. In these settings, a model learns to summarize without direct supervision or from only a few examples. My research is driven by practical problems, and I aim to develop new machine learning methods grounded in the interplay of theory and practice.

On the machine learning side, I’m interested in Bayesian approaches that model data in terms of random (stochastic) variables. These models naturally capture uncertainty and represent information that is not directly observable in datasets. For training, my methods of choice are variational inference (e.g., variational autoencoders) and reinforcement learning.

I graduated with an MSc (with distinction) in artificial intelligence from the University of Amsterdam, the Netherlands, where I specialized in theoretical machine learning and natural language processing. Before starting my Ph.D., I worked on machine learning modeling at Elsevier, Amazon, and Zalando. I also interned at Amazon under the supervision of R. Nallapati, M. Bansal, and M. Dreyer.


Started working at Google as a Research Scientist.
Our paper "Efficient Few-Shot Fine-Tuning for Opinion Summarization" was accepted to NAACL 2022.
Our tutorial on opinion summarization was accepted to SIGIR 2022.
Gave an invited research talk at the ILLC, Amsterdam.
Gave a lecture on text summarization at the Yandex School of Data Analysis (YSDA).