My name is Arthur Bražinskas (pronounced Bra[zh]inskas). I’m a third-year (and final-year) natural language processing Ph.D. researcher working on latent probabilistic models for abstractive opinion summarization. I’m part of the ILCC group at the University of Edinburgh, supervised by Ivan Titov and Mirella Lapata. Specifically, I focus on low-resource settings where annotated datasets are scarce yet large amounts of unannotated data are available. In these settings, the model learns to summarize without direct supervision or from only a few examples.
Due to the lack of annotated summaries, opinion summarization was historically approached with extractive methods that construct summaries from review fragments. During my Ph.D., I have focused on abstractive models that generate summaries with an open vocabulary. Such models can fuse and compress often-conflicting user opinions. I have also introduced a number of datasets for training and evaluation.
On the machine learning side, I’m interested in Bayesian approaches that model data in terms of random (stochastic) variables. These models can naturally capture uncertainty and represent information that is not directly observable in datasets. For training and inference, my methods of choice are variational inference (as in variational autoencoders, VAEs) and reinforcement learning.
I graduated with an MSc (with distinction) in artificial intelligence from the University of Amsterdam, the Netherlands, where I specialized in theoretical machine learning and natural language processing. Before starting my Ph.D., I worked on machine learning modeling at Elsevier, Amazon, and Zalando. I also collaborate closely with Amazon Alexa AI teams in Seattle, USA.