Hanan Samet, University of Maryland, United States
Title: Place-based Information Systems - Textual Location Identification and Visualization
Nello Cristianini, University of Bristol, United Kingdom
Title: ThinkBIG - Understanding the Impact of Big Data on Science and Society
Marcello Pelillo, University of Venice, Italy
Title: Towards a Holistic Theory of Pattern Recognition - A Game-theoretic Perspective
Luis Alexandre, University of Beira Interior, Portugal
Title: 3D Computer Vision - From Points to Concepts
Place-based Information Systems - Textual Location Identification and Visualization
University of Maryland
Hanan Samet (http://www.cs.umd.edu/~hjs/) is a Distinguished University Professor of Computer Science at the University of Maryland, College Park, and a member of the Institute for Advanced Computer Studies. He is also a member of the Computer Vision Laboratory at the Center for Automation Research, where he leads a number of research projects on the use of hierarchical data structures for database applications involving spatial data. He has a Ph.D. from Stanford University; his doctoral dissertation dealt with proving the correctness of translations of LISP programs, which was the first work in translation validation and the related concept of proof-carrying code. He is the author of the recent book "Foundations of Multidimensional and Metric Data Structures", published by Morgan Kaufmann, San Francisco, CA, in 2006 (http://www.mkp.com/multidimensional), a winner in the 2006 best book in Computer and Information Science competition of the Professional and Scholarly Publishers (PSP) Group of the Association of American Publishers (AAP), and of the first two books on spatial data structures, "Design and Analysis of Spatial Data Structures" and "Applications of Spatial Data Structures: Computer Graphics, Image Processing and GIS", published by Addison-Wesley, Reading, MA, in 1990. He is the Founding Editor-in-Chief of the ACM Transactions on Spatial Algorithms and Systems (TSAS), the founding chair of ACM SIGSPATIAL, a recipient of the 2014 IEEE Computer Society McDowell Award, the 2011 ACM Paris Kanellakis Theory and Practice Award, the 2009 UCGIS Research Award, and the 2010 CMPS Board of Visitors Award at the University of Maryland, a Fellow of the ACM, IEEE, AAAS, UCGIS, and IAPR (International Association for Pattern Recognition), and an ACM Distinguished Speaker.
He has received best paper awards from the Computers & Graphics journal in 2007, the 2008 ACM SIGMOD and SIGSPATIAL conferences, the 2012 SIGSPATIAL MobiGIS Workshop, and the 2013 SIGSPATIAL GIR Workshop, as well as a best demo paper award at the 2011 SIGSPATIAL conference. His paper at the 2009 IEEE International Conference on Data Engineering (ICDE) was selected as one of the best papers for publication in the IEEE Transactions on Knowledge and Data Engineering.
The popularity of web-based mapping services such as Google Earth/Maps and Microsoft Virtual Earth (Bing) has led to an increasing awareness of the importance of location data and its incorporation into both web-based search applications and the databases that support them. In the past, attention to location data had been primarily limited to geographic information systems (GIS), where locations correspond to spatial objects and are usually specified geometrically.
However, in web-based applications, the location data often corresponds to place names and is usually specified textually.
An advantage of such a specification is that it can be used regardless of whether the place name is to be interpreted as a point or a region; the place name thus acts as a polymorphic data type in the parlance of programming languages. Its drawback, however, is ambiguity. In particular, a given specification may have several interpretations, not all of which are names of places. For example, ''Jordan'' may refer to a person as well as a place.
Moreover, there is additional ambiguity when the specification has a place name interpretation. For example, ''Jordan'' can refer to a river or a country while there are a number of cities named ''London''.
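The two kinds of ambiguity can be made concrete with a minimal gazetteer lookup; the entries below are purely illustrative, not the actual data sets used by the systems described here:

```python
# Toy gazetteer: one textual name maps to several candidate
# interpretations (type, and coordinates for placement on a map).
gazetteer = {
    "Jordan": [
        {"type": "country", "lat": 31.0, "lon": 36.0},
        {"type": "river", "lat": 32.1, "lon": 35.6},
    ],
    "London": [
        {"type": "city", "admin": "England, UK", "lat": 51.51, "lon": -0.13},
        {"type": "city", "admin": "Ontario, Canada", "lat": 42.98, "lon": -81.25},
        {"type": "city", "admin": "Kentucky, USA", "lat": 37.13, "lon": -84.08},
    ],
}

def interpretations(name):
    """Return every candidate interpretation of a textual place name."""
    return gazetteer.get(name, [])

# "London" alone is ambiguous: three candidate cities.
print(len(interpretations("London")))  # → 3
```

A geotagger's job is then to pick the right interpretation (if any) from such a candidate list using the surrounding text.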
In this talk we examine the extension of GIS concepts to textually specified location data and review search engines that we have developed to retrieve documents where the similarity criterion is not based solely on exact match of elements of the query string, but also on spatial proximity. Thus we want to take advantage of spatial synonyms so that, for example, a query seeking a rock concert in Tel Aviv would be satisfied by a result finding a rock concert in Herzliyah or Petach Tikva. We have applied this idea to develop the STEWARD (Spatio-Textual Extraction on the Web Aiding Retrieval of Documents) system for finding documents on the website of the Department of Housing and Urban Development. This system relies on the presence of a document tagger that automatically identifies spatial references in text, PDF, Word, and other unstructured documents. The thesaurus for the document tagger is a collection of publicly available data sets forming a gazetteer containing the names of places in the world. Search results are ranked according to the extent to which they satisfy the query, which is determined in part by the prevalent spatial entities that are present in the document. We have also adapted the same ideas to collections of news articles and Twitter tweets, resulting in the NewsStand and TwitterStand systems, respectively, which will be demonstrated along with the STEWARD system, in conjunction with a discussion of some of the underlying issues that arose and the techniques used in their implementation. Future work involves applying these ideas to spreadsheet data.
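The core of spatial-synonym ranking can be sketched as ranking hits by great-circle distance from the query location; the document list and coordinates below are illustrative placeholders, not STEWARD's actual ranking function, which also weighs textual relevance:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative hits, each tagged with the location it mentions.
docs = [
    ("rock concert in Herzliyah",    32.166, 34.843),
    ("rock concert in Petach Tikva", 32.089, 34.886),
    ("rock concert in Haifa",        32.794, 34.989),
]

query_lat, query_lon = 32.085, 34.781  # Tel Aviv

# Spatial synonyms: nearby places rank high even without an exact text match.
ranked = sorted(docs, key=lambda d: haversine_km(query_lat, query_lon, d[1], d[2]))
print(ranked[0][0])  # the geographically closest hit comes first
```

Under this ordering the Haifa concert, roughly 80 km away, falls to the bottom, while the two nearby suburbs surface first.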
ThinkBIG - Understanding the Impact of Big Data on Science and Society
University of Bristol
Nello Cristianini is a Professor of Artificial Intelligence in the Intelligent Systems Laboratory of the University of Bristol. He is the co-author of three popular books in machine learning and bioinformatics, and a recipient of the Royal Society Research Merit Award and of an ERC Advanced Grant. His current research involves technical and philosophical questions arising from the Big Data revolution, and the automated analysis of vast quantities of newspaper articles.
Computers can now do things that their programmers cannot explain or understand in detail: today's Artificial Intelligence has found a way to bypass the need to understand a phenomenon before we can replicate it in a computer. The technology that made this possible is machine learning: a method to program computers by showing them examples of the desired behaviour. And the fuel that powers it all is DATA. For this reason, data has been called the new oil: a new natural resource that businesses and scientists alike can leverage by feeding it to massive learning computers to do things that we do not understand well enough to implement with a traditional program. This new way of working - often called Big Data - is all about predicting, not explaining. It is about knowing what a new drug will do to a patient, not why.
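"Programming by examples" can be seen in miniature in a nearest-neighbour classifier, one of the simplest learning methods; the data and labels below are invented for illustration:

```python
# No classification rule is written down by the programmer:
# behaviour comes entirely from labelled examples of the desired output.
examples = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
    ((8.0, 9.0), "large"),
    ((9.1, 8.5), "large"),
]

def predict(point):
    """Label a new point with the label of its closest training example."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    _, label = min(examples, key=lambda ex: sq_dist(ex[0], point))
    return label

print(predict((1.1, 0.9)))  # → small
print(predict((8.5, 9.2)))  # → large
```

The program predicts correctly without containing any explanation of *why* the labels are what they are, which is exactly the trade-off the talk examines.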
While the opportunities created by this approach are becoming clear, and the technological challenges are gradually being met, we still struggle to come to terms with the implications of this transition. Was science not meant to help us make sense of the world? Or is it just meant to deliver good predictions? Are patterns in data capable of replacing theoretical understanding? It is also important to remember that the fuel that powers this revolution is very often our own personal data, and that we still do not have a clear cultural framework to think about all this.
Towards a Holistic Theory of Pattern Recognition - A Game-theoretic Perspective
University of Venice
Marcello Pelillo is a Full Professor of Computer Science at Ca’ Foscari University in Venice, Italy, where he leads the Computer Vision and Pattern Recognition group. He is the Director of the Center for Knowledge, Interaction, and Intelligent Systems (KIIS). He has held visiting research positions at Yale University, McGill University, the University of Vienna, York University (UK), University College London, and National ICT Australia (NICTA). He has published more than 150 technical papers in refereed journals, handbooks, and conference proceedings in the areas of pattern recognition, computer vision, and neural computation. He has initiated several conference series, including EMMCVPR in 1997 (Energy Minimization Methods in Computer Vision and Pattern Recognition), IWCV in 2008 (International Workshop on Computer Vision), and SIMBAD in 2011 (Similarity-Based Pattern Analysis and Recognition), and he chairs the EMMCVPR and SIMBAD steering committees. He has organized several workshops as Program Chair, including workshops at NIPS (1999, 2011) and ICML (2010). He is (or has been) General Chair for ICCV 2017, Area Chair for ICPR 2014 and ICIAP 2015, Program Chair for S+SSPR 2014, and Publicity Chair for ECCV 2012. He has been a tutorial lecturer at CVPR (2011), ECCV (2012), ICPR (2010 and 2014), and ICIAP (2011). He serves (or has served) on the Editorial Boards of the journals IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), IET Computer Vision, Pattern Recognition, and Brain Informatics, and he serves on the Advisory Board of the International Journal of Machine Learning and Cybernetics. He has served (or serves) as Guest Editor for various special issues of IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Pattern Recognition, and Pattern Recognition Letters, and is regularly on the program committees of the major international conferences and workshops in his area.
He is (or has been) scientific coordinator of several research projects, including SIMBAD, an EU-FP7 project devoted to similarity-based pattern analysis and recognition whose activity is described in a new Springer book. Prof. Pelillo is a Fellow of the IEEE and a Fellow of the IAPR.
Traditional pattern recognition and machine learning techniques typically take a reductionist view, in the sense that they tend to view each object in isolation, and are inherently essentialist, in the sense that they are deeply grounded in the assumption that the objects populating the world are endowed with a number of “essential” (intrinsic) features which determine the category to which they belong. Starting from the common-sense observation that in the real world objects do not live in a vacuum, and that contextual, or relational, constraints provide a rich and often unexplored source of information, I’ll argue for a holistic perspective on pattern recognition, with a view to overcoming the limitations of today’s approaches. I’ll maintain that game theory offers an elegant and general conceptual framework that serves this purpose well, and I’ll provide game-theoretic formulations of some specific pattern recognition and machine learning problems.
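One concrete way relational information enters a game-theoretic formulation is clustering via replicator dynamics on a pairwise-similarity "payoff" matrix, in the spirit of dominant-set clustering; going beyond the abstract's text, here is a minimal sketch with an invented similarity matrix containing two groups, {0, 1, 2} and {3, 4}:

```python
# Payoff matrix: entry A[i][j] is the pairwise similarity between
# objects i and j (illustrative values; zero diagonal).
A = [
    [0.0,  0.9,  0.8,  0.1, 0.1],
    [0.9,  0.0,  0.85, 0.1, 0.1],
    [0.8,  0.85, 0.0,  0.1, 0.1],
    [0.1,  0.1,  0.1,  0.0, 0.9],
    [0.1,  0.1,  0.1,  0.9, 0.0],
]
n = len(A)
x = [1.0 / n] * n  # start at the barycentre of the simplex

# Replicator dynamics: strategies (objects) earning above-average
# payoff against the current mix gain weight, the rest die out.
for _ in range(300):
    payoff = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(x[i] * payoff[i] for i in range(n))
    x = [x[i] * payoff[i] / avg for i in range(n)]

support = [i for i in range(n) if x[i] > 1e-4]
print(support)  # the surviving objects form one cohesive cluster
```

Note that no object is classified in isolation: membership emerges purely from the relational structure encoded in the payoff matrix, which is exactly the holistic stance the talk advocates.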
3D Computer Vision - From Points to Concepts
University of Beira Interior
Luís A. Alexandre received his BSc, MSc, and PhD, all from the University of Porto, Portugal, in Physics/Applied Mathematics (1994), Industrial Informatics (1997), and Electrical Engineering and Computers (2002), respectively.
His current research interests are pattern recognition, deep neural networks, and 3D computer vision.
He is the author of more than 80 research papers in international journals and conferences and is the leader of a research lab at the University of Beira Interior, Portugal, where he is currently an Associate Professor and the head of the Department of Informatics.
He is a member of the Portuguese Association for Pattern Recognition and the International Neural Network Society, and has served as a member of the Executive Committee of the European Neural Network Society.
The emergence of cheap structured-light sensors, such as the Kinect, has opened the door to an increased interest in all matters related to the processing of 3D visual data. Applications for these technologies are abundant, from robot vision to 3D scanning.
In this presentation we will go through the main steps of a typical 3D vision system, from sensors and point clouds up to understanding the scene contents, including keypoint detectors, descriptors, set distances, object segmentation, recognition, and tracking, as well as the biological motivation for some of these methods. We will present several approaches developed at our lab and discuss some current challenges.
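The descriptor-matching step of such a pipeline can be sketched in a few lines; the "descriptor" below (sorted distances to a point's nearest neighbours) is a toy stand-in for real 3D descriptors, and the clouds are synthetic:

```python
import math
import random

random.seed(0)

def descriptor(cloud, idx, k=3):
    """Toy local descriptor: sorted distances from point idx to its
    k nearest neighbours (invariant to rigid motion of the cloud)."""
    p = cloud[idx]
    dists = sorted(math.dist(p, q) for j, q in enumerate(cloud) if j != idx)
    return tuple(dists[:k])

# A small synthetic point cloud and a rigidly translated copy of it.
cloud_a = [(random.random(), random.random(), random.random()) for _ in range(10)]
cloud_b = [(x + 5.0, y, z) for (x, y, z) in cloud_a]

def match(i):
    """Match point i of cloud A to the cloud-B point with the most
    similar descriptor (squared difference of descriptor entries)."""
    da = descriptor(cloud_a, i)
    def desc_gap(j):
        db = descriptor(cloud_b, j)
        return sum((a - b) ** 2 for a, b in zip(da, db))
    return min(range(len(cloud_b)), key=desc_gap)

print(match(0))  # → 0: the translated copy of point 0 matches itself
```

Because the descriptor depends only on inter-point distances, it survives the translation unchanged, which is why each point finds its own copy; real systems face noise, partial overlap, and viewpoint change on top of this.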