 

Keynote Lectures

Keynote Lecture
Mihaela van der Schaar, John Humphrey Plummer Professor of Machine Learning, AI, and Medicine, University of Cambridge, United Kingdom

Keynote Lecture
Krystian Mikolajczyk, Imperial College London, United Kingdom

Keynote Lecture
Tinne Tuytelaars, KU Leuven, Belgium

Image and Video Generation: A Deep Learning Approach
Nicu Sebe, University of Trento, Italy

 

Keynote Lecture

Mihaela van der Schaar
John Humphrey Plummer Professor of Machine Learning, AI, and Medicine, University of Cambridge
United Kingdom
 

Brief Bio
Mihaela van der Schaar is the John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge, a Fellow at The Alan Turing Institute in London, and a Chancellor’s Professor at UCLA.
Mihaela was elected IEEE Fellow in 2009. She has received numerous awards, including the Oon Prize on Preventative Medicine from the University of Cambridge (2018), a National Science Foundation CAREER Award (2004), 3 IBM Faculty Awards, the IBM Exploratory Stream Analytics Innovation Award, the Philips Make a Difference Award and several best paper awards, including the IEEE Darlington Award.
Mihaela’s work has also led to 35 USA patents (many widely cited and adopted in standards) and 45+ contributions to international standards for which she received 3 International ISO (International Organization for Standardization) Awards.
In 2019, she was identified by the National Endowment for Science, Technology and the Arts as the most-cited female AI researcher in the UK. She was also elected a 2019 “Star in Computer Networking and Communications” by N²Women. Her research expertise spans signal and image processing, communication networks, network science, multimedia, game theory, distributed systems, machine learning and AI.
Mihaela’s research focus is on machine learning, AI and operations research for healthcare and medicine.
In addition to leading the van der Schaar Lab, Mihaela is founder and director of the Cambridge Centre for AI in Medicine (CCAIM).


Abstract
Available soon.



 

 

Keynote Lecture

Krystian Mikolajczyk
Imperial College London
United Kingdom
 

Brief Bio
Available soon.


Abstract
Available soon.



 

 

Keynote Lecture

Tinne Tuytelaars
KU Leuven
Belgium
 

Brief Bio
Available soon.


Abstract
Available soon.



 

 

Image and Video Generation: A Deep Learning Approach

Nicu Sebe
University of Trento
Italy
 

Brief Bio
Nicu Sebe is a professor at the University of Trento, Italy, where he leads research in multimedia information retrieval and human-computer interaction in computer vision applications. He received his PhD from the University of Leiden, The Netherlands, and has previously been affiliated with the University of Amsterdam, The Netherlands, and the University of Illinois at Urbana-Champaign, USA. He has been involved in organizing the major conferences and workshops addressing the computer vision and human-centered aspects of multimedia information retrieval, serving as General Co-Chair of the IEEE Automatic Face and Gesture Recognition Conference (FG 2008), the ACM International Conference on Multimedia Retrieval (ICMR 2017) and ACM Multimedia 2013. He was a program chair of ACM Multimedia 2007 and 2011, ECCV 2016, ICCV 2017 and ICPR 2020. He is a general chair of ACM Multimedia 2022 and a program chair of ECCV 2024. Currently, he is the ACM SIGMM Vice Chair, a Fellow of the IAPR and a Senior Member of ACM and IEEE.


Abstract
Video generation consists of producing a video sequence in which an object from a source image is animated according to some external information (a conditioning label or the motion of a driving video). In this talk I will present some of our recent achievements addressing two specific aspects: 1) generating facial expressions, e.g., smiles that differ from one another (spontaneous, tense, etc.), using diversity as the driving force; 2) generating videos without any annotation or prior information about the specific object to animate. Once trained on a set of videos depicting objects of the same category (e.g., faces, human bodies), our method can be applied to any object of that class. To achieve this, we decouple appearance and motion information using a self-supervised formulation. To support complex motions, we use a representation consisting of a set of learned keypoints along with their local affine transformations. A generator network models occlusions arising during target motions and combines the appearance extracted from the source image with the motion derived from the driving video. Our solutions score best on diverse benchmarks and across a variety of object categories.
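The keypoint-with-local-affine representation mentioned in the abstract can be illustrated with a small sketch. Assuming a first-order (Taylor) approximation of the motion around each keypoint — where each keypoint carries a 2x2 Jacobian describing its local affine frame — a point near a keypoint in the driving frame can be mapped back to the source frame as shown below. This is a minimal NumPy sketch under those assumptions, not the speaker's actual implementation; the function name and toy values are hypothetical.

```python
import numpy as np

def first_order_motion(z, kp_src, jac_src, kp_drv, jac_drv):
    """Map a point z near a driving-frame keypoint back to the source frame
    using a first-order approximation of the motion:

        T(z) ~= kp_src + J_src @ inv(J_drv) @ (z - kp_drv)

    kp_*:  (2,) keypoint locations; jac_*: (2, 2) local affine Jacobians.
    (Illustrative sketch, not the method's actual code.)
    """
    local_affine = jac_src @ np.linalg.inv(jac_drv)
    return kp_src + local_affine @ (z - kp_drv)

# Toy example: one keypoint at the origin in both frames; the driving
# frame's local axes are rotated 90 degrees relative to the source frame.
kp_src = np.array([0.0, 0.0])
kp_drv = np.array([0.0, 0.0])
jac_src = np.eye(2)                                # identity local frame
theta = np.pi / 2
jac_drv = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])

z = np.array([1.0, 0.0])                           # point near the driving keypoint
print(first_order_motion(z, kp_src, jac_src, kp_drv, jac_drv))  # -> [ 0. -1.]
```

In the full pipeline described in the abstract, such per-keypoint motion fields would be combined by a dense-motion network and used, together with predicted occlusion maps, to warp source-image features inside the generator.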


