Journal Information
Pattern Recognition (PR)
https://www.sciencedirect.com/journal/pattern-recognition
Impact Factor: 7.6
Publisher: Elsevier
ISSN: 0031-3203
Views: 103715
Followers: 147
Call for Papers
Pattern Recognition is a mature but exciting and fast-developing field which underpins developments in cognate fields such as computer vision, image processing, text and document analysis, and neural networks. It is closely akin to machine learning and also finds applications in fast-emerging areas such as biometrics, bioinformatics, multimedia data analysis and, most recently, data science. The journal Pattern Recognition was established some 50 years ago, as the field emerged in the early years of computer science. Over the intervening years it has expanded considerably.

The journal accepts papers making original contributions to the theory, methodology and application of pattern recognition in any area, provided that the context of the work is both clearly explained and grounded in the pattern recognition literature. Papers whose primary concern falls outside the pattern recognition domain, and which report routine applications of it using existing or well-known methods, should be directed elsewhere. The publication policy is to publish (1) new original articles that have been appropriately reviewed by competent referees, (2) reviews of developments in the field, and (3) pedagogical papers covering specific areas of interest in pattern recognition. Various special issues will be organized from time to time on current topics of interest to Pattern Recognition. Submitted papers should be single-column, double-spaced, no fewer than 20 and no more than 35 pages long (40 for a review), with numbered pages.
Last updated by Dou Sun on 2025-12-18
Special Issues
Special Issue on Affective Computing in the Large-scale Pre-trained Model Era
Submission deadline: 2025-12-31

Aim and Scope
With the urgent demand for artificial general intelligence (AGI), many large-scale pre-trained models, such as generative pre-trained transformers (GPT), are being developed. Meanwhile, research on emotion representation and understanding in artificial intelligence is attracting increasing attention from both academia and industry. However, directly applying large-scale pre-trained models to affective computing tasks usually does not outperform existing specialized or fine-tuned models. Additionally, compared with large language models (LLMs), the application of large audio and video models (LAMs and LVMs) in affective computing is still at an early stage. Affective computing with large-scale pre-trained models is challenging for several reasons:
- Domain knowledge learned from general data differs from that needed for affective computing, so adequate approaches are required to extract the desired affective knowledge, such as prompt engineering, efficient fine-tuning, and low-shot learning for downstream task adaptation (a minimal zero-shot sketch follows this call).
- Emotions are often jointly expressed and perceived through different modalities, such as vision, speech, and language, so effective pre-training frameworks that can collaboratively extract information from multiple modalities need to be explored.
- As emotion is a subjective concept, affective computing involves a multi-disciplinary understanding of human perceptions and behaviors.

This special issue seeks original contributions reporting the most recent progress on affective computing with large-scale pre-trained models. The topics of interest include, but are not limited to:
- Affective content understanding with models pre-trained using large-scale unimodal text, image, speech, or EEG data
- Affective content understanding with models pre-trained using large-scale multimodal data
- Foundation models trained on large-scale affective data
- Frameworks for model training with large-scale data for affective computing problems
- Fusion strategies for pre-training and fine-tuning affective foundation models
- Prompt engineering for using large-scale pre-trained models for affective computing problems
- Zero-shot and few-shot learning with large-scale pre-trained models for affective computing problems
- Emotion-aware artificial intelligence-generated content
- Large-scale affective data collection and annotation
- Evaluation metric design for affective computing
- Affective computing-based applications in entertainment, digital avatars, robotics, education, health care, biometrics, etc.

Important Dates
Submission deadline: December 31, 2025
First notification: March 31, 2026
Revision submission: May 31, 2026
Notification of acceptance: August 31, 2026
Anticipated publication: October 2026

Guest editors:
Dr. Sicheng Zhao, PhD, Tsinghua University, Beijing, China. E-mail: schzhao@tsinghua.edu.cn
Prof. Shiqing Zhang, PhD, Taizhou University, Taizhou, China. E-mail: zhangshiqing@tzc.edu.cn
Prof. Qi Tian, PhD, Huawei Cloud, Guiyang, China. E-mail: tian.qi1@huawei.com
Prof. Björn W. Schuller, PhD, Imperial College London, London, UK. E-mail: schuller@ieee.org
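As an illustration of the kind of downstream adaptation mentioned above, the following is a minimal sketch of zero-shot emotion recognition with a general-purpose pre-trained model. It assumes the Hugging Face transformers library; the checkpoint name, the example utterance, and the candidate emotion labels are illustrative choices, not requirements of this call.

```python
# Minimal sketch: zero-shot emotion classification with a general-purpose
# pre-trained NLI model (illustrative; not a prescribed baseline).
from transformers import pipeline

# Any NLI-based checkpoint works here; "facebook/bart-large-mnli" is a common choice.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

utterance = "I can't believe they cancelled the concert again."
emotions = ["anger", "sadness", "joy", "fear", "surprise", "neutral"]

# The hypothesis template is a simple form of prompt engineering:
# each label is scored against "This text expresses {label}."
result = classifier(
    utterance,
    candidate_labels=emotions,
    hypothesis_template="This text expresses {}.",
)

for label, score in zip(result["labels"], result["scores"]):
    print(f"{label:>8s}: {score:.3f}")
```

Fine-tuning or few-shot adaptation of the same backbone would be the natural next step when such zero-shot scores fall short of specialized affective models.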
Last updated by Dou Sun on 2025-12-18
Special Issue on Foundation Models and Prompting for Visual Tasks in Harsh Conditions
Submission deadline: 2026-03-01

The emergence of foundation models (e.g., CLIP, DINO, SAM, BLIP, and diffusion models) has revolutionized visual representation learning, enabling zero-shot transfer and unified modeling across diverse tasks. However, their robustness and adaptability under harsh visual conditions, such as extreme low light, fog, underwater imaging, motion blur, and low-resolution domains, remain under-explored and highly application-dependent. This special issue invites original research articles that explore how foundation models and prompt-based learning paradigms can be adapted, enhanced, or fine-tuned to perform robustly in degraded visual environments. We encourage both theoretical and practical contributions that:
- Extend vision-language or vision-only foundation models for image enhancement, detection, segmentation, or captioning under adverse conditions.
- Propose novel prompting, tuning, or adapter strategies that improve robustness to degradation (see the illustrative sketch after this call).
- Benchmark foundation models on real-world datasets with harsh imaging artifacts.
- Leverage multimodal priors (e.g., audio, LiDAR, thermal, text) to enhance foundation model performance in challenging settings.
- Develop trustworthy, explainable, and uncertainty-aware pipelines for safety-critical visual tasks.

Applications of interest include, but are not limited to: autonomous driving at night, medical image reconstruction from low-quality scans, underwater perception, surveillance in rain or fog, and cultural heritage restoration.

Guest editors:
Min Gan, PhD, Qingdao University, Qingdao, China. Email: aganmin@gmail.com (visual perception in adverse conditions, image processing, optimization and learning theory)
Mingde Yao, PhD, Chinese University of Hong Kong, Hong Kong, China. Email: mingdeyao@foxmail.com (computational photography, image processing)
Long Chen, PhD, University of Macau, Macau, China. Email: longchen@umac.mo (computational intelligence, image processing, modeling and control, AI for chemo/biosensing)
C. L. Philip Chen, PhD, South China University of Technology, Guangzhou, China. Email: Philip.Chen@ieee.org (computational intelligence, intelligent control cybernetics, intelligent transportation systems, data science/engineering)
Shuzhi Sam Ge, PhD, National University of Singapore, Singapore, Singapore. Email: samge@nus.edu.sg (autonomous robotics intelligence, control, intelligent interactive media fusion, education software development)

Manuscript submission information:
The journal submission system (Editorial Manager®) will be open for submissions to our Special Issue from October 1st, 2025. When submitting your manuscript, please select the article type "VSI: FM & Prompting". Both the Guide for Authors and the submission portal can be found on the Journal Homepage: Guide for authors - Pattern Recognition - ISSN 0031-3203 | ScienceDirect.com by Elsevier.

Important dates
Submission Portal Open: October 01, 2025
Submission Deadline: March 01, 2026
Acceptance Deadline: May 01, 2026

Keywords: Harsh Conditions, Foundation Models, Multimodal
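To make the prompting idea above concrete, here is a minimal sketch of degradation-aware prompt ensembling for zero-shot recognition with CLIP. It assumes the Hugging Face transformers CLIP implementation; the checkpoint name, class list, prompt templates, and image path are illustrative assumptions rather than anything prescribed by the call.

```python
# Minimal sketch: zero-shot classification under harsh conditions by
# ensembling condition-aware text prompts with CLIP (illustrative only).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

classes = ["car", "pedestrian", "bicycle"]   # hypothetical label set
templates = [                                # condition-aware prompts
    "a photo of a {} at night",
    "a photo of a {} in dense fog",
    "a blurry low-light photo of a {}",
]

with torch.no_grad():
    # Average the text embeddings over templates to get one prototype per class.
    prototypes = []
    for c in classes:
        prompts = [t.format(c) for t in templates]
        inputs = processor(text=prompts, return_tensors="pt", padding=True)
        emb = model.get_text_features(**inputs)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        prototypes.append(emb.mean(dim=0))
    prototypes = torch.stack(prototypes)
    prototypes = prototypes / prototypes.norm(dim=-1, keepdim=True)

    image = Image.open("foggy_street.jpg")   # hypothetical degraded image
    pixel = processor(images=image, return_tensors="pt")
    img_emb = model.get_image_features(**pixel)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)

    probs = (100.0 * img_emb @ prototypes.T).softmax(dim=-1)
    print(dict(zip(classes, probs.squeeze(0).tolist())))
```

Adapter- or prompt-tuning approaches in this space typically start from exactly this frozen-backbone setup and learn only the prompt or a small side module.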
Last updated by Dou Sun on 2025-12-18
Special Issue on Spatial Embodied Intelligence of Unmanned Systems in Open Urban Environments
Submission deadline: 2026-03-20

Spatial embodied intelligence has become an emerging and highly popular topic in recent years. It enables embodied agents to perceive, reason about, and interact with the surrounding environment, playing a crucial role in human society and the physical world. Spatial embodied intelligence is a critical piece of the AI puzzle, and related studies have attracted much attention in the pattern recognition (PR) community, including but not limited to multi-view clustering, multimodal information fusion, spatial feature extraction, landmark recognition, semantic clustering, vision-language alignment, simultaneous localization and mapping (SLAM), visual reasoning, and generative models. Most of these models and approaches are fundamentally dependent on core techniques in pattern recognition. This evolving paradigm now drives innovations across applied domains, such as vision-language navigation, active object search, embodied question answering, and social interaction.

Though modern LLMs have made notable progress with world knowledge, planning and reasoning capabilities, and powerful generalization across diverse embodied tasks in static and indoor environments, persistent challenges remain in deploying these LLM-based embodied agents in large-scale, dynamic outdoor scenarios. Therefore, this Special Issue focuses on spatial embodied intelligence of unmanned systems (UAVs, UGVs, robots) in open urban environments and calls for papers that study several attractive, natural, and urgent questions: (1) What are the theoretical foundations and recent technological advancements of spatial embodied intelligence? (2) To what extent can existing multimodal large models match human-level performance in executing embodied tasks in outdoor environments? (3) How can the task performance of embodied agents be enhanced through approaches such as the integration of large and small models, combining fast and slow thinking, fine-tuning, and post-training? (A minimal agent-loop sketch follows this call.)

Topics of interest include:
- Simulators/testbeds, high-quality task datasets, and metrics for evaluating LLM-based embodied agents in open urban environments;
- Pre-training multimodal perception large models with multi-source data (e.g., text, vision, depth, point clouds) for unmanned systems in open urban environments;
- Post-training large reasoning models with human spatial cognition for unmanned systems in open urban environments;
- Unified vision-language-action models for embodied tasks in open urban environments;
- Embodied perception and multimodal spatial representation methods;
- Multi-view clustering and fusion methods for coordinated urban spatial understanding;
- Multimodal chain-of-thought methods for reasoning and planning;
- Combination of LLMs and CV/PR models for decision-making in open urban environments;
- Human-agent/multi-agent collaborative methods for urban embodied task execution;
- Downstream applications based on LLM-based embodied agents, such as navigation, search & rescue, question & answering, logistics, and surveillance;
- Bridging the sim-to-real gap: parallel benchmarks of unmanned systems powered by LLMs.

Guest editors:
Dr. Hongyuan Zhang, The University of Hong Kong, Hong Kong, Hong Kong
Prof. Jian Zhao, Northwestern Polytechnical University, Xi'an, China
Dr. Fanglong Yao, Aerospace Information Research Institute, Beijing, China
Dr. Mingyu Ding, The University of North Carolina at Chapel Hill, Chapel Hill, United States
Dr. Zhengqiu Zhu, National University of Defense Technology, Changsha, China

Manuscript submission information:
Open for Submission: from 01-Oct-2025 to 20-Mar-2026
Submission Site: Editorial Manager®
Article Type Name: "VSI: PR_ Spatial Embodied Intelligence" - please select this item when you submit manuscripts online

All manuscripts will be peer-reviewed. Submissions will be evaluated based on originality, significance, technical quality, and clarity. Once accepted, articles will be posted online immediately and published in a regular journal issue within weeks. Articles will also be simultaneously collected in the online special issue. For any inquiries about the appropriateness of contribution topics, please contact the Leading Guest Editor (Dr. Hongyuan Zhang).

The Guide for Authors will be helpful for your future contributions; read more: Guide for authors - Pattern Recognition - ISSN 0031-3203 | ScienceDirect.com by Elsevier. For more information about our Journal, please visit our ScienceDirect page: Pattern Recognition | Journal | ScienceDirect.com by Elsevier.

Keywords: Spatial Embodied Intelligence; Embodied AI
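The following is a minimal, self-contained sketch of the kind of large/small-model integration raised in question (3): a large planner proposes a high-level action, a lightweight perception module grounds it, and a controller executes it. Every class and function name here is a hypothetical illustration, not an interface proposed by the guest editors.

```python
# Minimal sketch of an embodied agent loop coupling a large planning model
# with small perception/control modules (all names are hypothetical).
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    image_desc: str          # stand-in for camera input
    position: tuple          # (x, y) in a city-scale map frame

@dataclass
class Action:
    kind: str                # e.g. "move", "search", "report"
    target: str

def small_perception_model(obs: Observation) -> List[str]:
    # Placeholder for a lightweight on-board detector/recognizer.
    return ["crosswalk", "pedestrian"] if "street" in obs.image_desc else []

def large_planner(goal: str, landmarks: List[str]) -> Action:
    # Placeholder for an LLM-based planner ("slow thinking"); a real system
    # would build a prompt from the goal and the grounded landmarks.
    if "pedestrian" in landmarks:
        return Action(kind="move", target="wait_at_crosswalk")
    return Action(kind="search", target=goal)

def controller(action: Action, obs: Observation) -> Observation:
    # Placeholder low-level controller ("fast thinking") updating the state.
    x, y = obs.position
    return Observation(image_desc="open street", position=(x + 1, y))

def run_episode(goal: str, steps: int = 3) -> None:
    obs = Observation(image_desc="busy street with crosswalk", position=(0, 0))
    for t in range(steps):
        landmarks = small_perception_model(obs)
        action = large_planner(goal, landmarks)
        print(f"step {t}: landmarks={landmarks} -> action={action}")
        obs = controller(action, obs)

run_episode("find the blue delivery van")
```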
Last updated by Dou Sun on 2025-12-18
Special Issue on Foundation Models for Anomaly Detection, Reasoning, and Recovery
Submission deadline: 2026-03-30

Foundation models such as large multimodal language models (MLLMs), diffusion models, and pre-trained visual transformers are reshaping the landscape of machine learning, computer vision, and AI-driven analysis. This Special Issue of Pattern Recognition explores the next frontier: integrating anomaly detection with reasoning and recovery to create proactive, explainable, and resilient systems. We invite contributions that move beyond anomaly identification to tackle causal reasoning, counterfactual analysis, and recovery strategies. By uniting these elements, this Special Issue aims to inspire breakthroughs that enable real-world, safety-critical systems, such as those in industrial inspection, medical imaging, robotics, and autonomous vehicles, to detect, understand, and recover from anomalies in a robust and adaptive manner.

This Special Issue seeks original research, surveys, and application papers that address the full lifecycle of anomaly management (detection, reasoning, and recovery) powered by foundation models. We encourage submissions that advance the theoretical foundations, algorithms, and practical implementations of anomaly detection and its integration with reasoning and recovery. The scope includes, but is not limited to, the following themes:
- Anomaly Detection with Foundation Models: leveraging MLLMs, diffusion models, and pre-trained visual transformers (e.g., CLIP, DINO, SAM) for robust anomaly detection (a minimal zero-shot scoring sketch follows this call).
- Multimodal Anomaly Perception and Understanding: combining visual, textual, and sensor data for comprehensive anomaly interpretation.
- Anomaly Reasoning: causal inference, explainability, counterfactual reasoning, and root-cause analysis for actionable insights.
- Anomaly Recovery and Resolution: repair strategies, adaptive control, replanning, and human-in-the-loop solutions for resilient system operation.
- Learning Paradigms for Anomaly Management: few-shot/zero-shot anomaly generalization, continual and online learning, and prompt engineering for anomaly tasks.
- Benchmarks and Evaluation: metrics and datasets for integrated anomaly detection, reasoning, and recovery evaluation.
- Application Case Studies: industrial vision inspection, medical imaging diagnostics, robotics, autonomous systems, and other safety-critical domains.
- Frontier Topics: out-of-distribution detection, contextual anomaly analysis, and foundation model applications beyond anomaly scenarios.

This Special Issue will bring together contributions that are methodological, theoretical, and applied, with the common goal of pushing the state of the art toward interpretable, adaptive, and resilient anomaly-aware systems.

Guest editors:
Dr. Yunkang Cao, Hunan University, Changsha, China
Prof. Chao Huang, Sun Yat-Sen University, Shenzhen, China
Dr. Giacomo Boracchi, Politecnico di Milano, Milan, Italy
Assist. Prof. Guansong Pang, Singapore Management University, Singapore City, Singapore
Dr. Jie Wen, Harbin Institute of Technology Shenzhen, Shenzhen, China

Manuscript submission information:
Open for Submission: from 24-Sep-2025 to 30-Mar-2026
Submission Site: Editorial Manager®
Article Type Name: "VSI: Anomaly Detection, Reasoning, and Recovery" - please select this item when you submit manuscripts online

All manuscripts will be peer-reviewed. Submissions will be evaluated based on originality, significance, technical quality, and clarity. Once accepted, articles will be posted online immediately and published in a regular journal issue within weeks. Articles will also be simultaneously collected in the online special issue. For any inquiries about the appropriateness of contribution topics, please contact the Leading Guest Editor (Dr. Yunkang Cao).

The Guide for Authors will be helpful for your future contributions; read more: Guide for authors - Pattern Recognition - ISSN 0031-3203 | ScienceDirect.com by Elsevier. For more information about our Journal, please visit our ScienceDirect page: Pattern Recognition | Journal | ScienceDirect.com by Elsevier.

Keywords: Foundation Models; Anomaly Detection; Anomaly Reasoning; Anomaly Recovery; Causal Inference; Explainability; Counterfactual Analysis; Multimodal Learning; Zero-Shot Learning; Few-Shot Learning; Out-of-Distribution Detection; Industrial Vision
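As one concrete instance of the first theme, the snippet below sketches zero-shot anomaly scoring by contrasting "normal" and "anomalous" text prompts in a CLIP embedding space, in the spirit of recent prompt-based anomaly detectors. It assumes the Hugging Face transformers CLIP implementation; the object name, prompts, checkpoint, and image path are hypothetical.

```python
# Minimal sketch: zero-shot anomaly scoring with CLIP by comparing an image
# against "normal" vs. "anomalous" text prompts (illustrative only).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

object_name = "metal bolt"                            # hypothetical inspection target
prompts = [
    f"a photo of a flawless {object_name}",           # normal state
    f"a photo of a damaged {object_name} with defects",  # anomalous state
]

image = Image.open("bolt_under_test.jpg")             # hypothetical query image
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits_per_image         # image-to-prompt similarities
    normal_p, anomaly_p = logits.softmax(dim=-1).squeeze(0).tolist()

print(f"anomaly score: {anomaly_p:.3f}")              # higher -> more likely defective
```

Reasoning and recovery, the other two stages in this call's lifecycle, would consume such scores downstream, e.g. to trigger root-cause analysis or replanning.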
Last updated by Dou Sun on 2025-12-18
Special Issue on Evaluate4Science: Scientific Evaluation as a Pathway to Understanding and Discovery
Submission deadline: 2026-04-15

With the accelerated adoption of large language models and multimodal systems such as GPT-4 and Claude in scientific research, the systematic evaluation of AI capabilities for scientific tasks has become a cutting-edge interdisciplinary challenge. Unlike traditional "AI for Science" approaches, this Special Issue advocates framing evaluation itself as a pattern recognition problem: leveraging systematic evaluation design to probe model capability boundaries, guide scientific modeling strategies, and optimize research workflows. Scientific tasks naturally exhibit strong structural and physical consistency, making them ideal for developing fine-grained, interpretable capability assessments. This Special Issue aims to establish a platform for deep integration between artificial intelligence and the natural sciences, promoting the development of trustworthy, controllable, and generalizable scientific AI systems.

We cordially invite high-quality paper submissions in (but not limited to) the following research areas:
- Evaluation Methods for Large Models in Scientific Tasks: designing representative evaluation tasks for complex domains (e.g., physical modeling, biomolecular prediction, materials generation, causal discovery); benchmarking and testing generalization across diverse scientific domains.
- Multidimensional Evaluation Metrics and Capability Profiling: constructing comprehensive metric systems tailored to scientific reasoning; systematic attribution of model trustworthiness, uncertainty, and failure patterns.
- Structured Task Design and Reasoning Ability Assessment: developing structured scientific tasks based on causal chains, differential equations, functional expressions, and other domain-specific constraints; evaluating model capabilities through symbolic regression, formula generation, and graph-structured representations (see the illustrative sketch after this call).
- Multimodal Scientific Reasoning and Collaborative Evaluation: analyzing reasoning capabilities in tasks involving tables, images, text, and mathematical expressions as input modalities; evaluating multimodal understanding in scenarios such as scientific knowledge graph construction, text-image consistency, and data-driven concept alignment.
- Human-AI Comparison and Cognitive Behavior Modeling: investigating differences in strategy, error patterns, and revision behaviors between models and human experts; building human-AI comparative experiments using eye-tracking, multi-turn interaction logs, and other cognitive signals.
- Scientific Workflow Feedback and Model-Science Coupling Mechanisms: leveraging evaluation insights to guide experimental design, task refinement, and optimization of scientific knowledge structures; developing model-informed scientific assistance systems and generative design frameworks.
- Coupling Evaluation with Model Training Strategies: using evaluation signals to optimize training pipelines (e.g., prompt engineering, data selection, fine-tuning); analyzing the controllability and capability scheduling of architectures such as MoE and LoRA in scientific modeling contexts.

Guest editors:
Prof. Xin Zhao, University of Science and Technology Beijing, Beijing, China. Areas of expertise: computer vision, video analysis, object tracking, performance evaluation, visual benchmarks.
Prof. Yin Li, University of Wisconsin–Madison, Madison, USA. Areas of expertise: computer vision, medical artificial intelligence, machine learning.
Prof. Xiaoxu Zhao, Peking University, Beijing, China. Areas of expertise: atomic and electronic structure of 2D materials, STEM/EELS characterization, materials data analytics, computer vision and machine learning for materials science.
Prof. Hongteng Xu, Renmin University of China, Beijing, China. Areas of expertise: machine learning, optimal transport theory, geometric algebra, neural network design, AI4Science.
Prof. Wanli Ouyang, The Chinese University of Hong Kong, Hong Kong, China. Areas of expertise: AI for Science, computer vision, pattern recognition, machine learning.

Manuscript submission information:
Open for Submission: from 15-Nov-2025 to 15-Apr-2026
Submission Site: Editorial Manager®
Article Type Name: "VSI: PR_Evaluate4Science" - please select this item when you submit manuscripts online

All manuscripts will be peer-reviewed. Submissions will be evaluated based on originality, significance, technical quality, and clarity. Once accepted, articles will be posted online immediately and published in a regular journal issue within weeks. Articles will also be simultaneously collected in the online special issue. For any inquiries about the appropriateness of contribution topics, please contact the Leading Guest Editor (Prof. Xin Zhao).

The Guide for Authors will be helpful for your future contributions; read more: Guide for authors - Pattern Recognition - ISSN 0031-3203 | ScienceDirect.com by Elsevier. For more information about our Journal, please visit our ScienceDirect page: Pattern Recognition | Journal | ScienceDirect.com by Elsevier.

Keywords: Scientific Evaluation; AI4Science; Model Assessment; AI System
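To illustrate the kind of structured, formula-level evaluation mentioned above, the following is a minimal sketch that scores a model-generated symbolic expression against reference data using sympy and numpy. The ground-truth law, the candidate expression, and the RMSE metric are all illustrative assumptions, not a benchmark defined by this call.

```python
# Minimal sketch: scoring a model-generated formula against reference data
# (a toy stand-in for symbolic-regression-style evaluation).
import numpy as np
import sympy as sp

x = sp.symbols("x")

# Hypothetical ground-truth relationship and synthetic measurements.
true_fn = lambda v: 0.5 * v**2 + 1.0
xs = np.linspace(0.0, 5.0, 50)
ys = true_fn(xs)

# A candidate formula as it might be emitted by a model (string form).
candidate = "0.48*x**2 + 1.1"

def formula_rmse(expr_str: str, xs: np.ndarray, ys: np.ndarray) -> float:
    """Parse the expression, evaluate it on xs, and return RMSE against ys."""
    expr = sp.sympify(expr_str)                 # raises if the formula is malformed
    f = sp.lambdify(x, expr, modules="numpy")   # compile to a NumPy-callable
    preds = np.asarray(f(xs), dtype=float)
    return float(np.sqrt(np.mean((preds - ys) ** 2)))

print(f"RMSE of candidate formula: {formula_rmse(candidate, xs, ys):.4f}")
```

Real evaluation suites in this area would add structural checks (dimensional consistency, symbolic equivalence) on top of such numeric fit scores.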
Last updated by Dou Sun on 2025-12-18
Special Issue on Security and Trustworthiness in Pattern Recognition: Attacks, Evaluations, and Defenses
Submission deadline: 2026-04-30

Pattern recognition lies at the core of artificial intelligence (AI) and machine learning, supporting a wide range of applications including computer vision, speech recognition, natural language processing, medical diagnosis, and financial risk management. However, as deep learning models become increasingly complex and widely deployed, the security and trustworthiness of pattern recognition systems have emerged as critical concerns. Research has shown that these systems are highly vulnerable to adversarial attacks, backdoor insertion, data poisoning, model extraction, and privacy leakage, all of which can severely compromise recognition accuracy and system reliability. This Special Issue seeks to bring together contributions from academia and industry to advance the understanding of security issues in pattern recognition models, providing a comprehensive forum for innovative research, systematic surveys, and application-driven studies that address threats, defenses, and trustworthy deployment of AI systems.

This Special Issue focuses on emerging challenges in securing AI models against adversarial attacks, backdoors, data poisoning, model extraction, and privacy leakage (a minimal adversarial-attack sketch follows this call). Topics of interest include:
- Adversarial attacks and defenses in pattern recognition (image classification, object detection, face recognition, speech recognition, etc.)
- Data-level threats and protections: data poisoning, privacy leakage, and countermeasures
- Model-level threats and defenses: backdoor detection and mitigation, model extraction, and intellectual property protection
- Privacy-preserving pattern recognition: differential privacy, federated learning, and secure multi-party computation
- Security-aware model compression and deployment in resource-constrained environments
- Explainability and robustness: leveraging interpretability to enhance security and trustworthiness
- Emerging trustworthiness challenges in cross-modal pattern recognition
- Evaluation frameworks and benchmark construction for model security and trustworthiness in pattern recognition
- Case studies and real-world applications: medical diagnosis, autonomous driving, financial fraud detection, public safety, and beyond
- Agent security: security issues in large language model-driven intelligent agents operating in open environments

Guest editors:
Dr. Zhongliang Guo, University of St Andrews, St Andrews, United Kingdom
Dr. Ognjen Arandjelović, University of St Andrews, St Andrews, United Kingdom
Dr. Shuai Zhao, Nanyang Technological University, Singapore City, Singapore
Dr. Piotr Koniusz, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Canberra, Australia
Dr. Dong Yuan, The University of Sydney, Sydney, Australia

Manuscript submission information:
Open for Submission: from 01-Nov-2025 to 30-Apr-2026
Submission Site: Editorial Manager®
Article Type Name: "VSI: PR_Security and Trustworthiness" - please select this item when you submit manuscripts online

All manuscripts will be peer-reviewed. Submissions will be evaluated based on originality, significance, technical quality, and clarity. Once accepted, articles will be posted online immediately and published in a regular journal issue within weeks. Articles will also be simultaneously collected in the online special issue. For any inquiries about the appropriateness of contribution topics, please contact the Leading Guest Editor (Dr. Zhongliang Guo).

The Guide for Authors will be helpful for your future contributions; read more: Guide for authors - Pattern Recognition - ISSN 0031-3203 | ScienceDirect.com by Elsevier. For more information about our Journal, please visit our ScienceDirect page: Pattern Recognition | Journal | ScienceDirect.com by Elsevier.

Keywords: Pattern Recognition; AI Security; Trustworthy AI; Adversarial Attacks; Backdoor Defense; Privacy-Preserving Learning
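For readers new to the area, the snippet below sketches the classic fast gradient sign method (FGSM), one of the simplest adversarial attacks covered by the topics above, in PyTorch. The placeholder classifier, random input, and epsilon value are illustrative; the code is meant only to convey the threat model, not to serve as a benchmark attack.

```python
# Minimal sketch: fast gradient sign method (FGSM) adversarial perturbation.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb x in the direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()       # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()     # keep pixels in a valid range

# Toy usage with a placeholder classifier and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])
x_adv = fgsm_attack(model, x, label)
print("max perturbation:", (x_adv - x).abs().max().item())
```

Defense papers in this space typically evaluate against far stronger iterative attacks, but FGSM remains the standard starting point for exposition.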
Last updated by Dou Sun on 2025-12-18
Special Issue on Advancements in Multi-Source Heterogeneous Data Fusion for Pattern Recognition
Submission deadline: 2026-09-30

The exponential growth of heterogeneous data sources (images, video, audio, sensor signals, time series, graphs, and IoT streams) has created an urgent demand for effective multi-source data fusion. Researchers and practitioners now face the challenge of integrating disparate modalities that are often asynchronous, noisy, and incomplete. Traditional approaches remain insufficient for such heterogeneity. More advanced, scalable, and trustworthy methods are needed to ensure robustness, adaptability, and interpretability in real-world systems, including healthcare, autonomous driving, robotics, smart cities, and environmental monitoring. At the same time, large foundation models (LLMs, VLMs, MLLMs) are opening unprecedented opportunities for multimodal representation learning and cross-domain transfer. Yet their application to heterogeneous fusion still requires solving issues of modality alignment, temporal/spatial synchronization, schema mismatches, and uncertainty management (a minimal fusion sketch follows this call). This special issue will highlight the latest advances in multi-source fusion, spanning theoretical foundations, algorithms, architectures, and practical deployments. It will provide a venue for showcasing robust, efficient, and explainable solutions that can transform pattern recognition research and industry.

Topics of Interest include, but are not limited to:
- Fusion algorithms for multi-source heterogeneous data
- Efficient and scalable fusion architectures
- Handling data heterogeneity, incompleteness, and misalignment
- Uncertainty, causality, and interpretability in data fusion
- Trustworthy, fair, and responsible fusion systems
- Multimodal learning with foundation models
- Evaluation metrics, datasets, and benchmarking protocols
- Applications in healthcare, robotics, transportation, and smart cities
- Emerging trends in cross-domain and multimodal data fusion

Guest editors:
Dr. Xiaohan Yu, Macquarie University, Sydney, Australia
Prof. Dr. Xiao Bai, Beihang University, Beijing, China
Dr. Imad Rida, BMBI - Biomecanique et Bioingenierie, Compiegne, France
Prof. Amir Hussain, Edinburgh Napier University, Edinburgh, United Kingdom

Manuscript submission information:
Open for Submission: from 1-Mar-2026 to 30-Sep-2026
Submission Site: Editorial Manager®
Article Type Name: "VSI: PR_Heterogeneous Data Fusion" - please select this item when you submit manuscripts online

All manuscripts will be peer-reviewed. Submissions will be evaluated based on originality, significance, technical quality, and clarity. Once accepted, articles will be posted online immediately and published in a regular journal issue within weeks. Articles will also be simultaneously collected in the online special issue. For any inquiries about the appropriateness of contribution topics, please contact the Leading Guest Editor (Dr. Xiaohan Yu).

The Guide for Authors will be helpful for your future contributions; read more: Guide for authors - Pattern Recognition - ISSN 0031-3203 | ScienceDirect.com by Elsevier. For more information about our Journal, please visit our ScienceDirect page: Pattern Recognition | Journal | ScienceDirect.com by Elsevier.

Keywords: multi-source heterogeneous data; cross-domain and multimodal; multimodal learning
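As a toy illustration of the fusion problem described above, the following PyTorch module blends two modality embeddings with a learned gate. The dimensions and modality names are hypothetical, and gating is only one of many possible fusion strategies within this call's scope.

```python
# Minimal sketch: gated fusion of two heterogeneous modality embeddings.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Projects two modalities to a shared space and blends them with a gate."""
    def __init__(self, dim_a: int, dim_b: int, dim_out: int):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, dim_out)
        self.proj_b = nn.Linear(dim_b, dim_out)
        self.gate = nn.Sequential(nn.Linear(2 * dim_out, dim_out), nn.Sigmoid())

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        a = torch.tanh(self.proj_a(feat_a))
        b = torch.tanh(self.proj_b(feat_b))
        g = self.gate(torch.cat([a, b], dim=-1))   # per-dimension mixing weights
        return g * a + (1.0 - g) * b

# Toy usage: e.g. a 512-d image embedding fused with a 128-d sensor embedding.
fusion = GatedFusion(dim_a=512, dim_b=128, dim_out=256)
img_feat = torch.randn(4, 512)
sensor_feat = torch.randn(4, 128)
print(fusion(img_feat, sensor_feat).shape)   # torch.Size([4, 256])
```

Handling missing or misaligned modalities, a core theme of this issue, would require extending such a gate to mask out absent inputs or to weight them by estimated reliability.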
Last updated by Dou Sun on 2025-12-18
Special Issue on Evolving Multi-View Learning: From Theory to High-Impact Applications
Submission deadline: 2026-10-31

With the proliferation of data from diverse sources and the rapid advancement of computational capabilities, the field of multi-view learning has become increasingly vital. This domain focuses on developing sophisticated models to harness the rich, complementary information present across different data modalities or "views", aiming to achieve more robust and comprehensive data representations. In recent years, deep learning and Large Language Model (LLM) techniques have unlocked new potential, enabling the capture of complex, non-linear relationships and leading to breakthroughs in a wide array of applications.

This special issue invites cutting-edge research and innovative contributions that advance the theoretical foundations, methodological developments, and real-world applications of multi-view learning. We welcome submissions that explore new frontiers, including but not limited to advanced representation alignment, novel deep architectures, and scalable algorithms in the multi-view learning area. We particularly encourage papers that demonstrate the practical utility of these models in high-impact areas such as bioinformatics, intelligent systems, medical diagnostics, and beyond. This special issue will focus on:
- Heterogeneous Data Fusion: effectively fusing and aligning information from different views (a minimal two-view alignment sketch follows this call).
- Low-Quality Multi-view Learning: incomplete multi-view learning, partially aligned multi-view learning, and noisy multi-view learning.
- Cross-modal Retrieval: enabling precise information retrieval across views.
- Scalable Multi-view Algorithms.
- Explainability and Robustness.
- Multimodal Generative Models: using models for cross-view content generation.
- Multimodal Representation Learning: learning unified or correlated representations for different views.
- Deep Multimodal Architectures: applying deep learning to capture complex, non-linear relationships in multi-view data.
- Few-shot and Zero-shot Learning: generalizing to new tasks with limited or no labeled multi-view data.
- Multimodal Applications: applying multimodal learning to tasks such as disease diagnosis, industrial vision, and others.
- Evaluation Metrics and Benchmarks: establishing new standards for evaluating multimodal/multi-view learning performance.

Guest editors:
Dr. Jie Wen, Harbin Institute of Technology, Shenzhen, China
Dr. Shizhe Hu, Zhengzhou University, Zhengzhou, China
Dr. Lusi Li, Old Dominion University, Norfolk, United States
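As a small illustration of cross-view representation alignment, the sketch below computes a symmetric InfoNCE-style contrastive loss between paired embeddings from two views. The encoders are omitted; the embedding dimension, batch size, and temperature are hypothetical placeholders.

```python
# Minimal sketch: contrastive alignment of paired embeddings from two views.
import torch
import torch.nn.functional as F

def two_view_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    """Pull matched pairs (row i of z1 and z2) together, push others apart."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature        # pairwise cosine similarities
    targets = torch.arange(z1.size(0))        # i-th sample of view 1 matches view 2
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random "view" embeddings standing in for two encoders' outputs.
z_view1 = torch.randn(8, 64)
z_view2 = torch.randn(8, 64)
print(two_view_contrastive_loss(z_view1, z_view2).item())
```

Incomplete or partially aligned multi-view settings, highlighted in the topics above, break the one-to-one pairing assumed here and are exactly where new methods are sought.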
Last updated by Dou Sun on 2025-12-18
Special Issue on Adaptive and Scalable Vision Models in Dynamic and Resource-Constrained Environments
Submission deadline: 2026-11-30

In today's rapidly evolving world, vision models are playing an increasingly crucial role in a variety of applications, including robotics, autonomous driving, healthcare, industrial automation, and environmental monitoring. However, these models often face challenges in dynamic, complex, and resource-constrained environments where data is noisy, incomplete, or continuously evolving. Traditional deep learning models typically rely on vast amounts of labeled data and significant computational resources, making them less feasible in real-world settings. As the demand for more adaptable and scalable systems grows, the integration of advanced learning techniques such as few-shot learning (FSL), zero-shot learning (ZSL), incremental learning (IL), and continual learning (CL) has emerged as a promising approach to overcoming these limitations. These techniques allow vision systems to operate effectively even with limited data and evolving tasks, making them well suited to dynamic and resource-constrained environments.

This special issue aims to gather cutting-edge research on adaptive and scalable vision models that can work efficiently in dynamic and resource-limited environments. We welcome theoretical and applied contributions that address major challenges such as data scarcity, computational limitations, and evolving tasks. Papers should explore new algorithms, models, and frameworks that push the limits of what vision systems can achieve while ensuring practical applicability in real-world scenarios (a minimal few-shot classification sketch follows this call). Topics of interest include:
- Few-Shot and Zero-Shot Learning for Vision Systems
- Incremental and Continual Learning for Vision
- Efficient Vision Systems for Real-World Applications
- Cross-Domain and Cross-Modal Vision Systems
- Adaptability in Dynamic and Unstructured Environments
- Few-Shot and Zero-Shot Learning in Robotics and Autonomous Systems
- Long-Term and Lifelong Vision Learning
- Scalable Vision Architectures for Large-Scale Data
- Vision-Based Healthcare Systems
- Robust Vision Systems in Adverse Conditions
- 3D Vision Systems and Applications
- Transfer Learning and Knowledge Sharing Across Tasks and Domains

Guest editors:
Prof. Xin Ning, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
Prof. Prayag Tiwari, Halmstad University, Halmstad, Sweden
Prof. Qiuhong Ke, Monash University, Melbourne, Australia
Prof. Sahraoui Dhelim, Dublin City University, Dublin, Ireland
Prof. Wenbin Zhang, Florida International University, Miami, USA

Manuscript submission information:
Open for Submission: from 01-Jan-2026 to 30-Nov-2026
Submission Site: Editorial Manager®
Article Type Name: "VSI: PR_Adaptive and Scalable Vision Models" - please select this item when you submit manuscripts online

All manuscripts will be peer-reviewed. Submissions will be evaluated based on originality, significance, technical quality, and clarity. Once accepted, articles will be posted online immediately and published in a regular journal issue within weeks. Articles will also be simultaneously collected in the online special issue. For any inquiries about the appropriateness of contribution topics, please contact the Leading Guest Editor, Prof. Xin Ning.

The Guide for Authors will be helpful for your future contributions; read more: Guide for authors - Pattern Recognition - ISSN 0031-3203 | ScienceDirect.com by Elsevier. For more information about our Journal, please visit our ScienceDirect page: Pattern Recognition | Journal | ScienceDirect.com by Elsevier.

Keywords: Vision Models; Resource-constrained Environments; Few-Shot Learning; Incremental and Continual Learning; 3D Vision Systems
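To ground the few-shot learning theme, here is a minimal prototype-based few-shot classification sketch in PyTorch: class prototypes are averaged from a handful of support embeddings and queries are assigned to the nearest prototype. The embedding dimension, shot count, and random features are illustrative placeholders for a real backbone's outputs.

```python
# Minimal sketch: prototype-based few-shot classification (ProtoNet-style).
import torch

def classify_by_prototypes(support: torch.Tensor, support_labels: torch.Tensor,
                           queries: torch.Tensor, n_classes: int) -> torch.Tensor:
    """Average support embeddings per class, then assign each query to the
    nearest prototype by Euclidean distance."""
    prototypes = torch.stack([
        support[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])
    dists = torch.cdist(queries, prototypes)   # (n_queries, n_classes)
    return dists.argmin(dim=1)

# Toy 3-way 5-shot episode with random embeddings standing in for a backbone.
n_classes, n_shot, dim = 3, 5, 64
support = torch.randn(n_classes * n_shot, dim)
support_labels = torch.arange(n_classes).repeat_interleave(n_shot)
queries = torch.randn(6, dim)
print(classify_by_prototypes(support, support_labels, queries, n_classes))
```

The same nearest-prototype idea extends naturally to incremental settings, where prototypes for new classes can be added without retraining the backbone.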
Last updated by Dou Sun on 2025-12-18
Related Conferences
CCF/CORE/QUALIS | Abbreviation | Full Name | Submission Deadline | Notification Date | Conference Date
c / b2 | ICT | International Conference on Telecommunications | 2025-03-14 | 2025-03-19 | 2025-04-28
c / b4 | LATINCOM | IEEE Latin-American Conference on Communications | 2024-08-05 | 2024-09-06 | 2024-11-06
c / a / a2 | ICDAR | International Conference on Document Analysis and Recognition | 2025-03-07 | 2025-05-24 | 2025-09-17
b | iTAP | International Conference on Internet Technology and Applications | 2013-06-19 | 2013-06-25 | 2013-08-14
c / c / a1 | FG | International Conference on Automatic Face and Gesture Recognition | 2026-01-09 | 2026-04-02 | 2026-05-25
c | PRCV | Chinese Conference on Pattern Recognition and Computer Vision | 2025-06-03 | 2025-08-10 | 2025-10-18
a / b1 | SSPR | International Workshop on Structural and Syntactic Pattern Recognition | 2012-08-15 | | 2012-11-07
c | ICIAR | International Conference on Image Analysis and Recognition | 2020-02-10 | 2020-03-16 | 2020-06-24
a / a* / a1 | CVPR | IEEE Conference on Computer Vision and Pattern Recognition | 2025-11-06 | 2026-02-20 | 2026-06-03
c / b / a1 | ICPR | International Conference on Pattern Recognition | 2024-03-20 | 2024-08-05 | 2024-12-01