PhD Student at Queen Mary University of London

Supervisor: Dr. Ioannis Patras


Research Assistant at the Information Technologies Institute of CERTH (ITI-CERTH)

Research and development of machine learning algorithms for video analysis and semantic-based video annotation and retrieval.


MSc in Advanced Computing: Machine Learning and Data Mining

University of Bristol, Department of Computer Science

Supervisor: Professor Peter Flach


BSc in Informatics

Aristotle University of Thessaloniki, Department of Informatics

Supervisor: Dr. Grigorios Tsoumakas

Bio: I received a BSc in Informatics from the Aristotle University of Thessaloniki in 2011 and an MSc in Machine Learning, Data Mining & High Performance Computing from the University of Bristol. My research is in the area of Machine Learning and Pattern Recognition, with applications in Multimedia Analysis. Specifically, I work on concept-based image and video annotation and retrieval. My research interests include various aspects of Machine Learning, such as transfer learning, ensemble methods and multi-label learning.


Deep Multi-task Learning with Label Correlation Constraint for Video Concept Detection

F. Markatopoulou, V. Mezaris, I. Patras, ACM Multimedia 2016, Amsterdam, The Netherlands

Abstract. In this work we propose a method that integrates multi-task learning (MTL) and deep learning. Our method appends an MTL-like loss to a deep convolutional neural network, in order to learn the relations between tasks jointly, and also incorporates the label correlations between pairs of tasks. We apply the proposed method in a transfer learning scenario, where our objective is to fine-tune the parameters of a network that has originally been trained on a large-scale image dataset for concept detection, so that it can be applied to a target video dataset and a corresponding new set of target concepts. We evaluate the proposed method on the video concept detection problem, using the TRECVID 2013 Semantic Indexing dataset. Our results show that the proposed algorithm leads to better concept-based video annotation than existing state-of-the-art methods.
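The core idea of the abstract — per-task detectors regularized so that correlated concepts get similar models — can be illustrated with a minimal sketch. This is not the paper's actual loss: the function name, the squared-error data term and the specific penalty form are simplifying assumptions made only for illustration.

```python
import numpy as np

def mtl_correlation_loss(W, X, Y, C, lam=0.1):
    """Simplified multi-task loss with a pairwise label-correlation penalty.

    W : (T, d)  one linear concept detector per task
    X : (n, d)  shared feature representations (e.g. CNN outputs)
    Y : (n, T)  binary concept labels
    C : (T, T)  pairwise label correlations, non-negative
    """
    scores = X @ W.T                         # (n, T) per-task predictions
    data_term = np.mean((scores - Y) ** 2)   # squared loss, averaged over tasks
    # Encourage strongly correlated concepts to have similar detectors.
    diffs = W[:, None, :] - W[None, :, :]    # (T, T, d) pairwise differences
    corr_term = np.sum(C * np.sum(diffs ** 2, axis=-1))
    return data_term + lam * corr_term
```

With zero correlations the penalty vanishes and the tasks decouple into independent detectors; larger entries of `C` pull the corresponding weight vectors together.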


Online Multi-Task Learning for Semantic Concept Detection in Video

F. Markatopoulou, V. Mezaris, I. Patras, IEEE Int. Conf. on Image Processing (ICIP 2016), Phoenix, AZ, USA

Abstract. In this paper we propose an online multi-task learning algorithm for video concept detection. In particular, we extend the Efficient Lifelong Learning Algorithm (ELLA) in the following ways: a) we solve the objective function of ELLA using quadratic programming instead of solving the Lasso problem, b) we add a new label-based constraint that considers concept correlations, c) we use linear SVMs as base learners instead of logistic regression. Experimental results show improvement over both the single-task learning methods typically used in this problem and the original ELLA algorithm.
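ELLA's central construct is a shared latent basis: each task's linear model is a combination of the basis columns, fitted when the task arrives. The sketch below shows only that factorization step, with hypothetical names; the paper instead solves a constrained quadratic program with a label-correlation constraint and uses linear SVMs as base learners, so plain ridge regression and least squares are stand-ins here.

```python
import numpy as np

def fit_new_task(L, X, y):
    """Represent a newly arriving concept's linear model in the shared
    latent basis L, i.e. find coefficients s such that w ~= L @ s.

    L : (d, k) shared basis learned online across earlier tasks
    X : (n, d) training features for the new task
    y : (n,)   labels for the new task
    """
    d = X.shape[1]
    # Cheap single-task base model (ridge regression as a stand-in
    # for the linear-SVM base learner used in the paper).
    w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(d), X.T @ y)
    # Project that model onto the shared basis (least squares as a
    # stand-in for the constrained quadratic program).
    s, *_ = np.linalg.lstsq(L, w, rcond=None)
    return L @ s  # the task's final weight vector
```

Because only the coefficients are fitted per task, knowledge accumulated in the basis transfers to each new concept at low cost, which is what makes the approach suitable for the online setting.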


Ordering of Visual Descriptors in a Classifier Cascade Towards Improved Video Concept Detection

F. Markatopoulou, V. Mezaris, I. Patras, Int. Conf. on MultiMedia Modeling (MMM'16), Miami, FL, USA

Abstract. Concept detection for semantic annotation of video fragments (e.g. keyframes) is a popular and challenging problem. A variety of visual features is typically extracted and combined in order to learn the relation between feature-based keyframe representations and semantic concepts. In recent years the available pool of features has increased rapidly, and features based on deep convolutional neural networks in combination with other visual descriptors have significantly contributed to improved concept detection accuracy. This work proposes an algorithm that dynamically selects, orders and combines many base classifiers, trained independently with different feature-based keyframe representations, in a cascade architecture for video concept detection. The proposed cascade is more accurate and computationally more efficient, in terms of classifier evaluations, than state-of-the-art classifier combination approaches.
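The efficiency argument in the abstract rests on early exits: a cascade only evaluates later (often costlier) classifiers for samples the earlier stages could not confidently reject. The sketch below shows that control flow only; the function name, threshold scheme and stopping rule are illustrative assumptions, not the paper's selection and ordering algorithm.

```python
def cascade_predict(classifiers, thresholds, x):
    """Evaluate an ordered cascade of concept detectors on sample x.

    classifiers : ordered list of callables, each returning a score in [0, 1]
    thresholds  : per-stage rejection thresholds

    Stops as soon as a stage's score falls below its threshold,
    saving the cost of evaluating the remaining stages.
    """
    score = 0.0
    for clf, thr in zip(classifiers, thresholds):
        score = clf(x)
        if score < thr:            # confidently negative: exit early
            return score, False
    return score, True             # survived every stage: positive
```

The ordering of stages matters precisely because of this early exit: placing cheap, high-recall classifiers first maximizes the number of samples that never reach the expensive stages.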


Cascade of classifiers based on Binary, Non-binary and Deep Convolutional Network descriptors for video concept detection

F. Markatopoulou, V. Mezaris, I. Patras, IEEE Int. Conf. on Image Processing (ICIP 2015), Quebec City, Canada, 2015

Abstract. In this paper we propose a cascade architecture that can be used to train and combine different visual descriptors (local binary, local non-binary and Deep Convolutional Neural Network-based) for video concept detection. The proposed architecture is computationally more efficient than typical state-of-the-art video concept detection systems, without affecting the detection accuracy. In addition, this work presents a detailed study on combining descriptors based on Deep Convolutional Neural Networks with other popular local descriptors, both within a cascade and when using different late-fusion schemes. We evaluate our methods on the extensive video dataset of the 2013 TRECVID Semantic Indexing Task.


A Study on the Use of a Binary Local Descriptor and Color Extensions of Local Descriptors for Video Concept Detection

F. Markatopoulou, N. Pittaras, O. Papadopoulou, V. Mezaris, I. Patras, Int. Conf. on MultiMedia Modeling (MMM'15), Sydney, Australia

Abstract. In this work we deal with the problem of how different local descriptors can be extended, used and combined for improving the effectiveness of video concept detection. The main contributions of this work are: 1) We examine how effectively a binary local descriptor, namely ORB, which was originally proposed for similarity matching between local image patches, can be used in the task of video concept detection. 2) Based on a previously proposed paradigm for introducing color extensions of SIFT, we define in the same way color extensions for two other local descriptors, one non-binary and one binary (SURF and ORB, respectively), and we experimentally show that this is a generally applicable paradigm. 3) In order to enable the efficient use and combination of these color extensions within a state-of-the-art concept detection methodology (VLAD), we study and compare two possible approaches for reducing the color descriptor's dimensionality using PCA. We evaluate the proposed techniques on the dataset of the 2013 Semantic Indexing Task of TRECVID.
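Two of the building blocks named in the abstract are simple enough to sketch: a color extension of a grayscale descriptor (computing it per channel and concatenating, in the spirit of the color-SIFT paradigm) and a PCA step to shrink the resulting higher-dimensional descriptors before VLAD encoding. The function names below are hypothetical, and this is a generic illustration rather than either of the two PCA approaches compared in the paper.

```python
import numpy as np

def color_extend(descriptor_fn, patch_rgb):
    """Color extension of a grayscale local descriptor: apply it to each
    color channel of the patch and concatenate the results, which
    triples the descriptor's dimensionality."""
    return np.concatenate([descriptor_fn(patch_rgb[..., c]) for c in range(3)])

def pca_reduce(D, k):
    """Project a descriptor matrix D (n, d) onto its k leading principal
    components, e.g. before VLAD encoding."""
    Dc = D - D.mean(axis=0)                          # center the descriptors
    _, _, Vt = np.linalg.svd(Dc, full_matrices=False)  # rows of Vt: principal axes
    return Dc @ Vt[:k].T                             # (n, k) reduced descriptors
```

The dimensionality reduction is what keeps the color extensions practical: concatenation triples the descriptor length, and PCA brings it back down before the costly encoding step.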


Full list of publications available [here]