Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC 2009)

30 August - 2 September 2009, Como (Italy)



Keynotes

Monday, August 31, 2009 - (09:10 - 10:00)

Urban Surveillance Networks: a challenge for video analytics technologies

Dr. Arun Hampapur, IBM

Abstract

The most visible and pervasive camera networks today are evolving in metropolitan cities, where cameras cover hundreds of square miles of densely populated urban areas. These networks have grown from a heterogeneous technology base, from analog cameras and encoders to IP cameras and fiber optic networks. While the challenges of building large-scale networked surveillance systems are enormous, they are dwarfed by the challenge of making sense of the thousands of video feeds being captured and stored. The challenge of applying automatic video analysis and pattern recognition technologies to surveillance video is made many orders of magnitude more complex by the high activity levels that occur within the field of view of urban surveillance cameras. In this talk, we will describe the real-world implementation of one of the most advanced video analytics systems applied to urban surveillance. The talk will begin by providing the background of a complex urban surveillance network and discussing the various use cases for analytics in surveillance networks. The second part of the talk highlights the various technical challenges involved in video analytics in urban environments. The talk will conclude with demonstrations of customer implementations of video analytics technology and a discussion of the key areas of research needed in computer vision, video indexing, and data management to take urban surveillance networks to the next level.

Bio.


Dr. Arun Hampapur is an IBM Distinguished Engineer. He is currently leading work in Security and Information Analytics Research at IBM Watson Research. Additionally, he works with IBM's Global Technology Services Division on Physical Security Technologies and Services. Dr. Hampapur led the research team that invented the IBM Smart Surveillance System (S3) at the IBM T.J. Watson Research Center. He has led the S3 effort from its inception as an exploratory research effort, through the building of the first lab prototype and the first customer pilot engagement, to its commercialization as a services offering. He has developed several algorithms for video analytics and video indexing. He has published more than 80 papers on topics related to media indexing, video analysis, and video surveillance, and holds 9 US patents and more than 70 patent applications. Dr. Hampapur is an IEEE Senior Member. He obtained his PhD from the University of Michigan in 1995.

Tuesday, September 1, 2009 - (09:10 - 10:00)

Multi-sensor coordination and control

Prof. Mohan Kankanhalli, National University of Singapore

Abstract

There has been increasing research interest in a number of multi-sensor applications such as surveillance, video ethnography, tele-presence, assisted living, and life logging. Unfortunately, in many of these applications the sensors operate in isolation, and only the central processing unit fuses the data obtained from the various sensors to accomplish its task. However, if we can coordinate and control these sensors, the system's tasks can be achieved more efficiently and with higher accuracy. To demonstrate this, we will discuss one control and coordination strategy from a multimedia observation system perspective. This coopetitive approach combines the salient features of cooperation and competition, with the aim of optimizing the overall cooperation among sensors to achieve the best results at the system level. We will show the use of a model predictive control based forward state estimation method for counteracting the various delays faced in such multi-sensor environments. We will then briefly present a design methodology for building systems that can explicitly take performance into account, which can aid in the optimal selection and placement of multimedia sensors. Finally, we will introduce novel open problems in the area of multi-sensor coordination and control. One such problem is "best-view" selection in the emerging area of cyber-physical systems, which involve sensing, communication, control, and interaction with physical environments. Real-time selection of the best viewpoints in a cyber-physical environment is very useful in many applications, such as conferencing systems, surveillance systems, and interactive TV. To address this problem, we introduce a new image-based measure, Viewpoint Saliency, for evaluating the view quality of captured cyber-physical environments. Based on this measure, we then develop a feedback control based method for generating the best view through guided control of cameras.
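As a minimal illustration of the delay compensation idea mentioned in the abstract, the sketch below propagates a delayed target state forward in time with a constant-velocity model before it is used to steer cameras. This is a toy example written for this page, not the authors' implementation: the state layout [x, y, vx, vy], the sampling interval, and the delay length are all hypothetical choices made for illustration.

import numpy as np

def forward_estimate(state, dt, delay_steps):
    """Propagate a delayed target state estimate to the present time."""
    # Constant-velocity transition matrix for the assumed state [x, y, vx, vy].
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    for _ in range(delay_steps):
        state = F @ state
    return state

# Example: a target estimate that arrived 3 samples (0.3 s) late.
delayed = np.array([2.0, 1.0, 0.5, -0.2])  # x, y in metres; vx, vy in m/s
current = forward_estimate(delayed, dt=0.1, delay_steps=3)
print(current)  # cameras would be steered towards this predicted position

Predicting forward in this way lets the controller act on where the target is now, rather than where it was when the delayed measurement was taken.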

Bio.


Mohan Kankanhalli is a Professor in the Department of Computer Science at the National University of Singapore. He is also the Vice-Dean for Academic Affairs and Graduate Studies at the NUS School of Computing. He obtained his BTech (Electrical Engineering) from the Indian Institute of Technology, Kharagpur, and his MS/PhD (Computer and Systems Engineering) from the Rensselaer Polytechnic Institute. He is actively involved in the organization of many major conferences in the area of multimedia. He is on the editorial boards of several journals, including the ACM Transactions on Multimedia Computing, Communications, and Applications, IEEE Transactions on Multimedia, Springer Multimedia Systems Journal, Multimedia Tools and Applications, and the Pattern Recognition Journal. His current research interests are in multimedia systems (content processing, retrieval) and multimedia security (surveillance, digital rights management, and forensics).

Wednesday, September 2, 2009 - (09:10 - 10:00)

PANOPTIC: An Omnidirectional Multi-Aperture Visual Sensor

Prof. Pierre Vandergheynst, Swiss Federal Institute of Technology

Abstract

A 2007 digital photography market study by IDC showed two interesting trends. First, global digital camera shipments grew by about 15 percent in 2007, doubling the previous forecast of 7.5 percent and reversing a trend of declining growth seen over the previous four years. Moreover, the imaging sensor market is booming, mostly under the influence of camera phone sales. All of this shows that imaging devices have become an integral part of our daily lives. But with high-resolution sensors becoming cheaper (Nokia's current offering includes a 5-megapixel camera phone), what future advances in imaging sensor technology could help keep up the pace of innovation?

We claim that integrating innovative imaging sensor designs with image processing technologies will enable radically new applications and unleash the full potential of vision-based systems. We propose and study a breakthrough visual sensor we call the panoptic camera. It is realized by layering CMOS sensors on the facets of an icosahedron-like surface: it is thus an array of micro-cameras with a particular geometry. As an optical system, the panoptic camera has two distinguishing features. First, it is an omnidirectional camera, in the sense that it can record light information coming from any direction around its centre. Second, it is a polydioptric system: each CMOS facet is a tiny camera with a distinct focal plane, so the whole system is a multiple-aperture camera. The layering is designed so that the field of view of each facet overlaps with those of its neighbours. We will review why such an omnidirectional polydioptric camera is ideal for certain inverse vision problems such as ego-motion estimation and structure from motion. Moreover, because of the overlapping fields of view of the aperture facets, the panoptic system is also a plenoptic camera: light rays coming from the same scene point strike neighbouring sensors and carry information about the underlying plenoptic function, which can be used to infer fine information about the scene itself, for example its depth map. As an illustrative application, we will derive a correspondence-less algorithm for depth estimation that exploits the unique properties of our system. Finally, we will highlight some of the future milestones we intend to reach with the next prototypes.
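To make the depth argument concrete, here is a minimal sketch of why two neighbouring facets with overlapping fields of view encode depth. Note that this is plain two-view midpoint triangulation, a standard baseline, and not the correspondence-less algorithm announced in the talk; the camera centres, baseline, and ray directions below are hypothetical values chosen for illustration.

import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Recover a 3D point from two rays: camera centres c1, c2 and unit
    direction vectors d1, d2 that both point at the same scene point.
    Returns the midpoint of the shortest segment between the two rays."""
    # Least-squares solve for ray parameters s, t minimising
    # ||(c1 + s*d1) - (c2 + t*d2)||.
    A = np.column_stack((d1, -d2))
    b = c2 - c1
    (s, t), *_ = np.linalg.lstsq(A, b, rcond=None)
    p1, p2 = c1 + s * d1, c2 + t * d2
    return (p1 + p2) / 2.0

# Two neighbouring facets 2 cm apart, both observing a point about 1 m away.
c1 = np.array([0.00, 0.0, 0.0])
c2 = np.array([0.02, 0.0, 0.0])
point = np.array([0.1, 0.0, 1.0])
d1 = (point - c1) / np.linalg.norm(point - c1)
d2 = (point - c2) / np.linalg.norm(point - c2)
print(triangulate_midpoint(c1, d1, c2, d2))  # approx. [0.1, 0.0, 1.0]

The same scene point seen from two slightly offset facets produces two distinct rays, and their intersection fixes the point's depth; the correspondence-less method described in the talk exploits this redundancy without explicitly matching pixels across facets.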

Bio.


Pierre Vandergheynst received the M.S. degree in physics and the Ph.D. degree in mathematical physics from the Université catholique de Louvain, Louvain-la-Neuve, Belgium, in 1995 and 1998, respectively. From 1998 to 2001, he was a Postdoctoral Researcher with the Signal Processing Laboratory at the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland. He was an Assistant Professor at EPFL from 2002 to 2007, where he is now an Associate Professor. His research focuses on harmonic analysis, sparse approximations, and mathematical image processing, with applications to higher-dimensional, complex data processing. He was co-Editor-in-Chief of Signal Processing (2002-2006) and has been an Associate Editor of the IEEE Transactions on Signal Processing since 2007. He has served on the technical committees of various conferences and was Co-General Chairman of the EUSIPCO 2008 conference. Pierre Vandergheynst is the author or co-author of more than 50 journal papers, one monograph, and several book chapters. He is a Senior Member of the IEEE, a laureate of the Apple ARTS award, and holds seven patents.