Journal Information
Computer Vision and Image Understanding (CVIU)
http://www.journals.elsevier.com/computer-vision-and-image-understanding/
Impact Factor: 3.876
Publisher: Elsevier
ISSN: 1077-3142
Call For Papers
The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.

Research Areas Include:

• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems
Last updated by Dou Sun in 2022-01-29
Special Issues
Special Issue on Advances in Deep Learning for Human-Centric Visual Understanding
Submission Date: 2024-04-30

Our daily lives revolve around people. One of artificial intelligence's primary goals is to create intelligent machines that enable humans to accomplish more and to live better lives. This requires machines to comprehend people's emotional and physical characteristics, behaviors, and daily activities, among other things. As a result, human-centric visual understanding is a critical and long-standing area of research in computer vision and artificial intelligence. It has a plethora of critical applications in our society, including security and safety, health care, and human-machine interfaces. Recent advances in deep learning have led to efficient and effective tools for dealing with the variability and complexity inherent in real-world environments. While significant progress has been made, a significant gap remains in addressing complex human-centric visual reasoning tasks (e.g., understanding human-object interaction, analyzing human body language) and new challenges (e.g., face forgery detection). Thus, now is an excellent time to refocus research efforts on more comprehensive and in-depth human-centric visual understanding, and ultimately on socially intelligent machines. We welcome submissions of high-quality papers that introduce significant new theories, methods, applications, and insights into a variety of human-centric perception, reasoning, and analysis tasks.

Possible subjects include, but are not limited to:

• Human semantic parsing / fashion recognition
• Human pose/shape estimation
• Human activity recognition and trajectory prediction
• Face detection / facial landmark detection / deepfake detection
• Pedestrian detection/tracking/recognition/retrieval/re-identification
• Human-object / human-human interaction understanding
• Human gaze/facial/body behavior analysis
• Human visual attention mechanisms
• Human-centric image/video synthesis
• New benchmark datasets and survey papers related to the aforementioned topics

Guest editors:

• Wenguan Wang, PhD, Zhejiang University, Hangzhou, China
• Si Liu, PhD, Beihang University, Beijing, China
• Xiaojun Chang, PhD, University of Technology Sydney, Sydney, Australia
• David Crandall, PhD, Indiana University, Bloomington, United States of America
• Haibin Ling, PhD, Stony Brook University, Stony Brook, United States of America
Last updated by Dou Sun in 2023-10-03
Special Issue on Trustworthy Cross-Modal Reasoning for Video-Language Understanding
Submission Date: 2024-04-30

Video-language understanding and reasoning are long-standing problems for the computer vision and multimedia communities. By endowing an AI machine with cross-modal reasoning ability for video-language understanding, AI researchers expect the machine to "think" like a human and then make trustworthy decisions. However, most existing efforts primarily aim to improve in-domain performance while overlooking how to truly capture the essence of cross-modal reasoning. In particular, a fundamental question in video-language understanding (does the model simply learn multimodal correlations hidden in the datasets, and does it merely yield reliable in-domain results?) is usually overlooked by researchers and has yet to be answered well. Therefore, this special issue covers the continual growth of research primarily related to the robustness, fairness, explainability, and security of video-oriented cross-modal reasoning. The purpose of this special issue is to solicit high-quality, high-impact, and original papers on current developments in cross-modal reasoning for video-language understanding. We are interested in submissions covering topics of particular interest that include, but are not limited to, the following:

• New datasets for trustworthy video-language understanding
• Adversarial learning for robust multimodal representation
• New methods for robust video summarization
• Cross-modal semantics-consistent representation learning
• Domain generalization in video-language understanding
• Causal learning for trustworthy multimodal reasoning
• Unfair bias measurement and mitigation in video-language understanding
• Explainable multimodal data fusion and interaction
• Brain-inspired networks for explainable cross-modal reasoning
• Trustworthy reasoning algorithms in video dialog
• Knowledge-driven explainable cross-modal reasoning
• Text-guided visual-textual reasoning and generation
• Privacy protection and security control in cross-modal AIGC
• Applications of trustworthy video-language understanding

Guest editors:

• Dan Guo, PhD, Hefei University of Technology, Hefei, China
• Zhun Zhong, PhD, University of Nottingham, Nottingham, United Kingdom
• Subhankar Roy, PhD, Télécom Paris, Paris, France
• Linchao Zhu, PhD, Zhejiang University, Hangzhou, China
• Chuang Gan, PhD, UMass Amherst, Amherst, United States of America; MIT-IBM Watson AI Lab, Cambridge, United States of America
• Meng Wang, PhD, Hefei University of Technology, Hefei, China
Last updated by Dou Sun in 2023-10-03
Special Issue on Advanced Computational Imaging and Photography Measurement
Submission Date: 2024-09-15

Computational photography revolutionizes digital image capture and processing by harnessing artificial intelligence to enhance traditional optics-based imaging. This approach opens up a diverse array of possibilities for augmenting camera capabilities, enabling functionality previously unattainable in film-based photography while also reducing the cost and size of camera components. Powered by recent artificial intelligence techniques, cameras have become ubiquitous in the modern world, incorporating features such as 3D imaging, HDR imaging, depth-of-field control, extended field of view, noise removal, and super-resolution. Despite the growing number of related research papers, many issues remain and new problems continue to emerge. Directly migrating recent methodologies and technologies, such as diffusion models, 3D Gaussian splatting, and large language models (LLMs), may not achieve reasonable performance because of the unique characteristics of new image sensors and camera systems. There is ample room for improvement in contemporary theories, methodologies, and applications for computational photography and intelligent imaging. This special issue aims to explore the latest advancements in photography measurement and imaging in the data-driven age. We invite submissions from authors exploring advances in artificial intelligence technologies that benefit applications ranging from sensing to image reconstruction. Our goal is to facilitate connections across broad and cross-disciplinary research areas, including photometric reconstruction, computational optical imaging, low-level computer vision, immersive media, and signal processing, as well as current and emerging techniques and technologies in these domains.

The list of possible topics includes, but is not limited to:

Guest editors:

• Yakun Ju, PhD, Nanyang Technological University, Singapore, Singapore
• Bihan Wen, PhD, Nanyang Technological University, Singapore, Singapore
• Wuyuan Xie, PhD, Shenzhen University, Shenzhen, China
• Shiqi Wang, PhD, City University of Hong Kong, Hong Kong, China
• Alex Chichung Kot, PhD, Nanyang Technological University, Singapore, Singapore
Last updated by Dou Sun in 2024-04-01