I am currently a Postdoctoral Researcher at Queen Mary University of London and the Alan Turing Institute, working on tackling misinformation using Natural Language Processing.
My current research is funded by UKRI through the PANACEA project.
I completed a PhD in Computer Science supervised by Dr. Maria Liakata and Prof. Rob Procter with the Warwick Institute for the Science of Cities (WISC) CDT, funded by the Leverhulme Trust via the Bridges Programme. During my PhD I was a visiting student at the Alan Turing Institute in London. My background is in Applied Mathematics (BSc, MSc, Lobachevsky State University of Nizhny Novgorod) and Complexity Science (MSc, University of Warwick, Chalmers University).
The main focus of my research is tackling misinformation using Natural Language Processing.
In my PhD I focused on rumour stance and veracity classification in social media conversations. Veracity classification is the task of identifying whether a given conversation discusses a True, False or Unverified rumour. Stance classification involves determining the attitude of responses discussing a rumour towards its veracity, as either Supporting, Denying, Questioning or Commenting. In my work I study the relations between these tasks, as patterns of support and denial can be indicative of the final veracity label. Since the input data takes the form of conversations discussing rumours, I utilise the conversation structure to enhance predictive models. I work with deep learning models, as this approach allows flexible architectures and has the benefits of representation learning: recurrent and recursive neural networks make it possible to model time sequences and/or tree-like conversation structures.
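To give a flavour of how the two tasks relate, here is a deliberately simple sketch (not one of the models from my thesis): a naive baseline that maps the distribution of stance labels among responses in a conversation to a veracity guess. The label names and the majority-vote rule are illustrative assumptions only.

```python
from collections import Counter

# Toy baseline: predict rumour veracity from the stances of responses.
# The thresholds and labels here are illustrative, not the thesis models.
STANCES = {"support", "deny", "query", "comment"}

def predict_veracity(stances):
    """Guess True / False / Unverified from response stances toward a rumour."""
    counts = Counter(s for s in stances if s in STANCES)
    support, deny = counts["support"], counts["deny"]
    if support > deny:
        return "True"
    if deny > support:
        return "False"
    return "Unverified"  # no clear majority of support or denial

print(predict_veracity(["support", "support", "comment", "deny"]))  # True
```

In practice this heuristic ignores exactly what makes the problem interesting: who replies to whom, when, and with what content, which is why tree-structured and sequential neural models are a better fit.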
Currently I am working on the "PANACEA: PANdemic Ai Claim vEracity Assessment" project, which aims to create an AI-enabled evidence-driven framework for claim veracity assessment during pandemics. Within the project I focus on (1) collecting COVID-19 related data from social media platforms and authoritative resources and (2) developing novel unsupervised/supervised approaches for veracity assessment by incorporating evidence from external sources.
I am also interested in the general area of online harms, and in tasks such as propaganda detection and multimodal hate speech detection.
Please find a list of my publications on Google Scholar.
Here are some of the talks I have given about my research: