From VCIP 2017

(Publication/Update date Aug 01, 2017)

The IEEE Visual Communications and Image Processing (VCIP) Conference, sponsored by the IEEE Circuits and Systems Society and the IEEE Kansas City Section, will be held in the beautiful city of St. Petersburg, FL, USA, during December 10-13, 2017.

Since 1986, VCIP has served as a premier forum for the exchange of fundamental research results and technological advances in visual communications and image processing. VCIP has a long tradition of showcasing pioneering technology and envisioning the future of visual communication technology and applications. IEEE VCIP 2017 will carry on this tradition, providing fertile ground for leading engineers and scientists from around the world to collaboratively advance the research frontiers in these and related areas. The main theme will be new media, including Virtual Reality (VR) and point cloud capture and communication, as well as new visual processing tools such as deep learning for visual information pre- and post-processing, e.g., de-blurring, super resolution, and post-capture image enhancement. Original and unpublished work relevant to, but not limited to, the following topics is hereby solicited:

• Next Generation Video/Image Coding

• New Media (Point Cloud Capture and Compression, VR, AR, etc.)

• Stereo and Multi-view Videos

• Low Delay and Multipath Video Communication

• Machine Learning for Visual Information Processing

• Image/Video Privacy

• Cloud Multimedia Systems, Applications and Services

• Mobile Multimedia, Wireless Multimedia Communications

• Multimedia Content Analysis, Representation, and Understanding

• Multimedia and Multimodal User Interfaces and Interaction Models

• Multimedia Networking, Multimedia Traffic Management

• Massively Scalable Multimedia Service Distribution, Multimedia Broadcasting

• Visual Processing Embedded Systems and Architecture Implementations


Submission of Regular and Demo Papers:

Prospective authors are invited to submit four-page, full-length papers, including results, figures, and references. In addition, VCIP 2017 welcomes one-page demo papers describing proposed working systems.


Tutorial and Special Session Proposals:

Researchers, developers, and practitioners from related communities are invited to submit proposals for tutorials and special sessions on the emerging topics described above. Tutorials will be held on Sunday, December 10, 2017. Tutorial proposals must include the title, an outline, contact and biography information for the presenter, and a short description of the material to be distributed. Special session proposals must provide the title, a rationale, contact and biography information for the organizers, and a list of possible papers with tentative titles and abstracts. Special session papers will undergo the regular review process.


Papers accepted for regular, special, and demonstration sessions must be registered and presented to assure their inclusion in the IEEE Xplore Digital Library. Awards will be given to the best papers. Please visit the VCIP 2017 website for additional information on paper and proposal formatting instructions and submission details.


Important Dates:

March 31, 2017           Submission of Proposals for Special Sessions

March 31, 2017           Submission of Proposals for Tutorials

April 14, 2017           Acceptance Notification for Tutorials and Special Session Proposals

June 5, 2017             Submission of Papers for Regular, Demo and Special Sessions

August 25, 2017          Acceptance Notification

September 15, 2017       Submission of Camera-Ready Papers and Registration

Mailing Alert on September 30, 2016

(Publication/Update date Nov 11, 2016)

Special Issue on Deep Learning for Visual Surveillance

Visual surveillance has long been researched in the computer vision community. Interest in this topic stems not only from the theoretical challenges of the underlying technical problems, but also from their practical effectiveness in real-world applications. In particular, with the popularity of large-scale visual surveillance and intelligent traffic monitoring systems, videos captured by static and dynamic cameras must be analyzed automatically. Recently, with the surge of deep learning, research on visual surveillance under the data-driven learning paradigm has reached new heights.

Although there has been significant progress in this exciting field in recent years, many problems remain unsolved. For instance, how can training samples be gathered for data-intensive deep learning methods? How can generic deep learning prototypes be adapted to specific deployments? How should the trade-off between online training computational load and classification accuracy be balanced?

In order to pursue first-class research outputs in this direction, this special issue aims to invite original submissions on recent advances in deep learning based visual surveillance and to foster increased attention to this field. It will provide the image/video community with a forum to present new academic research and industrial developments in deep learning applications. The special issue will emphasize the incorporation of state-of-the-art deep learning methods such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), Restricted Boltzmann Machines (RBM), Long Short-Term Memory (LSTM), autoencoders, and their graphical model, sparse coding, and kernel machine based variants.


Initial Paper Submission:      January 31, 2017
Initial Paper Decision:        April 30, 2017
Revised Paper Submission:      June 15, 2017
Revised Paper Decision:        July 30, 2017
Publication Date:              January 2018

Guest Editors:

Fatih Porikli                  Australian National University & CSIRO, Australia
Larry Davis                    University of Maryland, USA
Qi Wang                        Northwestern Polytechnical University, China
Yi Li                          Toyota Research Institute North America, USA
Carlo Regazzoni                University of Genova, Italy

Mailing Alert on May 24, 2016

(Publication/Update date Nov 11, 2016)

Special Issue on Large Scale and Nonlinear Similarity Learning for Intelligent Video Analysis

Learning similarity and distance measures has become increasingly important for the analysis, matching, retrieval, recognition, and categorization of video and multimedia data. With the ubiquitous use of digital imaging devices, mobile terminals, and social networks, massive volumes of heterogeneous and homogeneous video and multimedia data are produced by multiple sources, e.g., news websites, microblogs, mobile phones, and social networks. Moreover, the spatio-temporal coherence of video data can be exploited for self-supervised learning of similarity and distance metrics. This trend raises challenging issues for developing similarity and metric learning methods on large-scale and weakly annotated data, where outliers and incorrectly annotated samples are inevitable. Recently, scalability has been investigated to cope with large-scale metric learning, while nonlinear similarity models have shown great potential for learning invariant representations and nonlinear measures of video and multimedia data.

Although studies on large-scale and nonlinear similarity learning for intelligent video analysis are valuable for both research and industry, many fundamental problems remain unsolved. In order to pursue first-class research outputs in this direction, this special issue aims to promote the scaling-up of metric learning to massive video and multimedia data; the development of nonlinear and deep similarity learning models; extensions to semi-supervised, multi-task, and robust metric learning; and applications to video-based face recognition, person re-identification, human activity recognition, multimedia retrieval, cross-media matching, and video surveillance. This special issue will provide the image/video community with a forum to present new academic research and industrial developments in intelligent video analysis.


Initial Paper Submission:      December 31, 2016
Initial Paper Decision:        March 31, 2017
Revised Paper Submission:      May 15, 2017
Revised Paper Decision:        June 30, 2017
Publication Date:              December 2017

Guest Editors:

Wangmeng Zuo                   Harbin Institute of Technology, China
Liang Lin                      Sun Yat-Sen University, China
Alan L. Yuille                 The Johns Hopkins University, USA
Horst Bischof                  Graz University of Technology, Austria
Lei Zhang                      Microsoft Research, USA
Fatih Porikli                  Australian National University, Australia
