Special sessions instructions
The ICDSC Conference offers technical presentations by the world’s leading scientists and engineers on diverse topics related to distributed smart cameras.
Beyond the core emphasis areas of the conference, ICDSC will host dedicated special sessions. These sessions introduce conference attendees to relevant 'hot' topics that may not be covered in other sessions, as well as topics likely to generate instructive debate on smart camera technology and its applications.
Manuscripts may be submitted electronically to the special sessions and should conform to the formatting and electronic submission guidelines of regular ICDSC papers. Papers will undergo the same review process as regular papers. It is the responsibility of the organizers to ensure that their Special Session papers meet ICDSC quality standards.
Special session 1: Architecture for Machine Vision
Depending on their applications, machine vision systems may require strong processing capabilities, high data rates, low energy consumption, strict real-time processing, or any combination of the above.
These constraints make the design and exploitation of hardware architectures particularly challenging in the context of machine vision. Topics of interest for this special session include (but are not limited to):
- Dedicated HW design for machine vision
- Low energy machine vision
- Machine vision design space exploration
- Model-based design of a machine vision system
Dr. Maxime Pelcat is an Associate Professor at IETR/INSA Rennes and at the Institut Pascal in Clermont-Ferrand. Since 2009, he has authored more than 30 peer-reviewed publications in the domains of models of computation, energy efficiency, multimedia and telecommunication processing, and the programming of parallel embedded systems.
Dr. Jonathan Piat is an Associate Professor in the department of Electrical and Computer Engineering at Paul Sabatier University of Toulouse (France). He holds a joint appointment at the Laboratory for Analysis and Architecture of Systems, a CNRS research unit. His work deals with the integration of robotics algorithms on FPGA-based dedicated hardware. His main research interests include dataflow models, hardware/software co-design, embedded vision processing, and robotics.
Special session 2: High-Quality Video Acquisition and Its Impact on Embedded Image Processing
Constant improvements in image-processing techniques are enabling new applications of this technology. Meanwhile, acquisition remains a crucial stage that directly impacts the quality of subsequent image processing: high-quality images are clearly required if reliable information is to be extracted, and obtaining them demands purpose-built acquisition systems. The processing can also interact with the acquisition to control the sensor. High quality is commonly obtained by increasing the frame rate, the spatial resolution, or the dynamic range of the image, or by reducing acquisition noise. Non-conventional acquisition modes (color, multi-spectral, …) are also good candidates for providing this extra information. Generating high-quality video therefore generally implies a significant increase in the data rate of the video stream. Processing the information delivered by a standard sensor in real time is already a challenge, and the challenge only grows as the amount of information increases.
Smart sensors and cameras should be in charge of generating such high-quality video streams, of processing them, or both. This special session therefore focuses on advances in high-quality video cameras/sensors and on the real-time processing of the resulting video flows. The session will consider, for instance, different kinds of acquisition, such as:
- High-speed image acquisition or large spatial resolution (e.g. 4K or 8K video) generation
- Low-noise image generation
- High Dynamic Range imaging (HDR)
- Non-conventional acquisition for high quality video generation
Dr. Julien Dubois has been an Associate Professor at the Univ. Bourgogne Franche-Comté since 2003. He is a member of the Le2i Laboratory (UMR CNRS 6306), where he currently leads the smart-camera research field. His research interests include real-time implementation, smart cameras, high-quality video generation and processing, image compression, and more recently real-time remote plethysmography. He received his PhD in Electronics in 2001 from the University Jean Monnet of Saint-Etienne (France) and then joined EPFL in Lausanne (Switzerland) as a project leader to develop an FPGA-based co-processor for a new CMOS camera.
Prof. Dom Ginhac is a Full Professor at the Univ. Bourgogne Franche-Comté and director of the Le2i Laboratory (UMR CNRS 6306). His research activities initially focused on the rapid prototyping of real-time image processing on dedicated parallel architectures. More recently, he has developed expertise in image acquisition, the hardware design of smart vision systems, and the implementation of real-time image-processing applications.
Special session 3: Neuromorphic chips and algorithms
Many living beings benefit from extended visual and attentional capabilities, demonstrating the ability to quickly detect and track objects of interest in complex environments. They nevertheless rely on principles that differ greatly from classical computer architectures, for instance exploiting huge assemblies of neurons that process information in parallel.
Relying on dedicated or generic VLSI technologies, we can mimic or take inspiration from such living systems to embed their functional and organizational principles in vision chips and smart cameras. Topics of interest for this special session include (but are not limited to):
- Neuromorphic engineering (chips and algorithms)
- Hardware and algorithm co-design of neural principles
- Neuro-inspired models of visual perception and visual attention
Dr. Jean-Charles Quinton is an Associate Professor at LJK / Université Grenoble Alpes and at Institut Pascal in Clermont-Ferrand. He works at the interface between computer vision, computational neuroscience and cognitive psychology, studying principles found in living beings to apply them on computational artifacts.
Benoît Chappet de Vangel is a PhD student at the Université de Lorraine/LORIA in Nancy. His main topic is the study of neuro-inspired computation models and their hardware implementation for robust embedded applications using different approaches like neuromorphic engineering or cellular computing.
Special session 4: Low Power CMOS Imagers for IoT Vision Systems
From an electronic-system point of view, the IoT context pushes power-consumption minimization to levels unprecedented today, while still demanding the highest possible quality of service. Vision systems for IoT applications must satisfy this severe constraint: provide the most information (entropy) for the lowest power budget.
Current CMOS image sensors cannot meet these requirements, and new imager architectures have recently been proposed in the state of the art. These architectures span a power range from 10 pJ/pixel/frame (ultra-low-power imagers) to 100 pJ/pixel/frame (low-power imagers), whereas current imagers consume more than 1 nJ/pixel/frame.
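To give a rough sense of what these energy-per-pixel-per-frame figures imply at the system level, the following back-of-the-envelope sketch converts them into average imager power. The VGA resolution and 30 fps frame rate are illustrative assumptions, not figures from the session description:

```python
def sensor_power_watts(width, height, fps, energy_per_pixel_frame_j):
    """Average imager power (W) given an energy cost per pixel per frame (J)."""
    return width * height * fps * energy_per_pixel_frame_j

PJ = 1e-12  # one picojoule in joules

# Illustrative 640x480 (VGA) stream at 30 fps (assumed parameters):
ultra_low    = sensor_power_watts(640, 480, 30, 10 * PJ)    # ~0.09 mW
low_power    = sensor_power_watts(640, 480, 30, 100 * PJ)   # ~0.92 mW
conventional = sensor_power_watts(640, 480, 30, 1000 * PJ)  # ~9.2 mW
```

Even under these modest assumptions, the gap between an ultra-low-power imager and a conventional one is two orders of magnitude, which is what makes the new architectures relevant for battery-constrained IoT nodes.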
However, a low-power imager does not make a low-power camera. To minimize the power consumption of the overall vision system, i.e. the camera, an energy-efficient approach is to perform image processing within the sensor itself.
Through oral presentations, this special session will present recent results on this topic and discuss future trends as well as design and industrial constraints.
Gilles SICARD (PhD, Univ. Grenoble Alpes – CEA LETI, Grenoble, France): After 15 years as an Associate Professor with Joseph Fourier University and the TIMA Laboratory (Grenoble, France), G. Sicard joined CEA-LETI in 2014 as a senior expert in the Image Sensor and Display laboratory (L3I). His current research interests are in smart vision circuits, mainly HDR, light-adaptive CMOS sensors, and low-power architectures for vision systems. He has authored or co-authored more than 90 papers in international conferences and journals.
Jérome CHOSSAT (STMicroelectronics, Grenoble, France): J. Chossat has worked for STMicroelectronics since 1996. He pioneered the development of low-footprint/low-power integrated image signal processors for the mobile-phone market, made a key contribution to defining the architecture of a family of optimized ISPs, and contributed directly to the very first generation of high-volume mobile-phone camera systems (more than 750 million units produced). Jerome currently heads the department responsible for digital architecture, embedded software, image signal processing, and computer-vision algorithms in the Imaging Division of STMicroelectronics. His current research topics include low-power ISP solutions, camera systems for the IoT market, and HDR camera solutions for the automotive market.