Journal Information
Pattern Recognition Letters (PRL)

Call For Papers
Pattern Recognition Letters aims at rapid publication of concise articles of a broad interest in pattern recognition.
Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition. Examples include:

• Statistical, structural, syntactic pattern recognition;
• Neural networks, machine learning, data mining;
• Discrete geometry, algebraic, graph-based techniques for pattern recognition;
• Signal analysis, image coding and processing, shape and texture analysis;
• Computer vision, robotics, remote sensing;
• Document processing, text and graphics recognition, digital libraries;
• Speech recognition, music analysis, multimedia systems;
• Natural language analysis, information retrieval;
• Biometrics, biomedical pattern analysis and information systems;
• Scientific, engineering, social and economic applications of pattern recognition;
• Special hardware architectures, software packages for pattern recognition.

We invite contributions as research reports or commentaries.

Research reports should be concise summaries of methodological inventions and findings with strong potential for wide application.
Alternatively, they can describe significant and novel applications of an established technique that are of high reference value to the same and similar application areas.

Commentaries can be lecture notes, subject reviews, conference reports, or debates on critical issues of wide interest.

To serve the interests of a diverse readership, the introduction should provide a concise summary of the background of the work in accepted pattern recognition terminology, state the unique contributions, and discuss the broader impacts of the work outside the immediate subject area. All contributions are reviewed on the basis of scientific merit and breadth of potential interest.
Last updated by Dou Sun in 2019-11-24
Special Issues
Special Issue on Cross-Media Learning for Visual Question Answering (VQA)
Submission Date: 2020-06-05

Visual Question Answering (VQA) is a recent hot topic involving multimedia analysis, computer vision (CV), natural language processing (NLP), and, more broadly, artificial intelligence; it has attracted great interest from the deep learning, CV, and NLP communities. The task is defined as follows: a VQA system takes as input a picture and a free-form, open-ended natural-language question about that picture, and produces a natural-language answer as output. For a machine to answer a specific question about a picture in natural language, it must have some understanding of the picture's content, of the meaning and intention of the question, and of relevant background knowledge. VQA therefore touches multiple AI technologies: fine-grained recognition, object recognition, behavior recognition, and understanding of the text of the question (NLP). Because VQA draws on both CV and NLP, a natural solution is to integrate a CNN with an RNN, the models used successfully in CV and NLP respectively, into a composite model. In short, VQA is a learning task that links CV and NLP. The task is challenging because it requires comprehending the textual question, analyzing the image and its elements, and reasoning over both modalities; sometimes external or commonsense knowledge is also needed. Although progress has been made in VQA research, the overall accuracy achieved by current models remains low.
Because present VQA models are relatively simple in structure, and limited in the content and form of their answers, they struggle to produce correct answers to even slightly complex questions that require prior knowledge and simple reasoning. This Special Section in the Journal of Visual Communication and Image Representation therefore solicits original technical papers with novel contributions on the convergence of CV, NLP, and deep learning, as well as theoretical contributions relevant to the connection between natural language and CV.

Topics of interest include, but are not limited to:

• Deep learning methodology and its applications to VQA, e.g., human-computer interaction, intelligent cross-media query, etc.;
• Image captioning, indexing, and retrieval;
• Deep learning for big data discovery;
• Visual relationships in VQA;
• Question answering in images;
• Grounding language and VQA;
• Image target localization using VQA;
• Captioning events in videos;
• Attention mechanisms in VQA systems;
• Exploring novel models and datasets for VQA.
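As a toy illustration of the composite CNN-plus-RNN idea described above, the sketch below fuses a hypothetical image feature vector and question feature vector by element-wise product and scores a small answer vocabulary. The features and answer list are randomly generated stand-ins; in a real VQA system the vectors would come from a trained CNN and a trained RNN question encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical precomputed features: in a real system the image vector
# would come from a CNN and the question vector from an RNN encoder.
img_feat = rng.standard_normal(512)   # e.g. CNN pooling-layer output
q_feat = rng.standard_normal(512)     # e.g. final RNN hidden state

# Joint embedding by element-wise product (one common fusion choice),
# followed by a linear classifier over a fixed answer vocabulary.
answers = ["yes", "no", "red", "two", "dog"]
W = rng.standard_normal((len(answers), 512)) * 0.01
b = np.zeros(len(answers))

fused = img_feat * q_feat
logits = W @ fused + b

# Softmax over the answer vocabulary.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

predicted = answers[int(np.argmax(probs))]
print(predicted)
```

In practice the fusion step is where much of the VQA literature differs (concatenation, bilinear pooling, attention); element-wise product is only the simplest choice.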
Last updated by Dou Sun in 2019-07-06
Special Issue on Advances in Human Action, Activity and Gesture Recognition (AHAAGR)
Submission Date: 2020-06-30

The goal of this Special Issue on Advances in Human Action, Activity and Gesture Recognition (AHAAGR) is to gather the most contemporary achievements and breakthroughs in the fields of human action and activity recognition under one cover, helping the research community set future goals in these areas by evaluating current states and trends. In particular, advances in computational power, camera/sensor technology, and deep learning have produced a paradigm shift in video-based and sensor-based research over the last few years, so it is of utmost importance to compile these accomplishments and reflect upon them in order to reach further. This issue solicits original and technically sound research articles with theoretical and practical contributions from the computer vision, machine learning, imaging, robotics, and AI communities.

Topics of interest include (but are not limited to):

• Human action/activity/gesture recognition from video or other relevant sensor data;
• Large datasets for action/activity/gesture recognition;
• Multi-sensor action/activity/gesture recognition;
• Action/activity/gesture recognition from skeleton data and depth maps;
• Deep learning and action recognition;
• Action localization and detection; action sequence generation/completion;
• Anomaly detection from surveillance videos; action recognition in robotics;
• Hand gesture recognition for virtual reality and other applications;
• Crowd behavior analysis and prediction from video sequences;
• Human behavior analysis and recognizing social interactions;
• Behavior recognition based on bodily and facial expressions;
• Applications and future trends of action/activity/gesture recognition.
Last updated by Dou Sun in 2019-07-06
Special Issue on Implicit BIOmetric Authentication and Monitoring through Internet of Things
Submission Date: 2020-09-30

According to reliable forecasts, the number of connected IoT devices could exceed 25 billion by 2020. An important fraction of these are latest-generation mobile and wearable devices featuring an arsenal of advanced sensors (high-speed/depth/multi-focal cameras, finger imaging, accelerometers, gyros, etc.), up to 5G communication capability, and growing computing power. This collection of features makes them particularly suited to capturing both static and dynamic biometrics, continuously monitoring health signals, and providing information about the operating context. In summary, these capabilities will enable a new generation of Internet of Biometric Things (IoBT) approaches that will greatly extend the range and target of "mainstream" biometric applications. This Special Issue aims to gather the latest research findings and applications for transparent acquisition and processing of biometrics and health signals in the context of ubiquitous IoBT-based user authentication and monitoring, outlining new application scenarios for mobile biometrics.

Topics include, but are not limited to:

• IoBT-enabled biometrics;
• Ubiquitous user authentication/recognition;
• Ubiquitous biometric monitoring;
• Implicit IoBT-enabled authentication/recognition;
• Implicit IoBT-enabled activity recognition;
• Implicit IoBT-enabled context detection;
• Dynamic biometrics capture and processing;
• Implicit psychophysical assessment;
• Deep learning for IoBT applications;
• Health signal analysis via mobile devices;
• Monitoring of the elderly through IoBT devices and approaches;
• Privacy and IoBT.
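As a minimal sketch of the kind of implicit behavioral authentication this issue targets (using synthetic accelerometer traces and a deliberately naive nearest-centroid matcher, not any published method), one might extract simple statistical features from sliding windows and compare a probe trace against enrolled user templates:

```python
import numpy as np

rng = np.random.default_rng(1)

def window_features(signal, win=50):
    """Split a 1-D accelerometer trace into fixed-size windows and
    extract simple statistical features (mean, std, RMS) per window."""
    n = len(signal) // win
    wins = signal[: n * win].reshape(n, win)
    return np.stack([wins.mean(1), wins.std(1),
                     np.sqrt((wins ** 2).mean(1))], axis=1)

# Hypothetical enrollment traces for two users with different motion dynamics.
user_a = rng.normal(0.0, 1.0, 1000)
user_b = rng.normal(0.5, 2.0, 1000)

# Enrollment: one feature centroid (template) per user.
centroids = {name: window_features(sig).mean(0)
             for name, sig in [("a", user_a), ("b", user_b)]}

# Authentication: a fresh probe trace from user "b" is matched
# to the nearest enrolled centroid.
probe = window_features(rng.normal(0.5, 2.0, 500)).mean(0)
match = min(centroids, key=lambda k: np.linalg.norm(probe - centroids[k]))
print(match)
```

A deployable system would of course use richer features, a trained classifier, and a rejection threshold for unknown users; the point here is only the transparent, sensor-driven enrollment/probe structure.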
Last updated by Dou Sun in 2019-10-20
Special Issue on Biometric Presentation Attacks: handcrafted features versus deep learning approaches (BioPAth)
Submission Date: 2020-10-31

In the last decade, biometric technology has been rapidly adopted in a wide range of security applications. This approach to automatic verification of personal identity is beginning to play a fundamental role in personal, national, and international security. Despite this, there are well-founded fears that the technology is vulnerable to spoofing, also known as a presentation attack. For example, fingerprint verification systems can be violated using fingerprints made of a synthetic material, such as silicone, imprinted with the ridges and valleys of the fingerprints of an individual who has access to the system. Iris and face recognition systems can be violated using images or video sequences of the eyes or face of a registered user. Speech recognition systems can be violated through replayed, synthesized, or converted speech. In recent years there has been considerable effort to develop spoof countermeasures, or presentation attack detection (PAD) technology, to protect biometric systems from fraud. A PAD method can improve the security level of biometric recognition systems. Most proposed PAD methods are based on handcrafted features, designed using the in-depth domain knowledge of their designers; an alternative approach based on deep learning is also possible. This special issue is expected to present original papers describing the very latest developments in spoofing and countermeasures. What are the state-of-the-art approaches? What are the advantages and limits of handcrafted features versus deep learning approaches? Is an auto-adaptive approach possible? How well do these systems integrate with the corresponding matching systems?
The focus of the special issue includes, but is not limited to, the following topics related to spoofing and countermeasures:

• Adversarial biometric recognition;
• Spoof detection based on deep learning;
• Spoof detection based on handcrafted features;
• Attack transferability in biometric applications;
• Design of robust forgery detectors;
• Vulnerability analysis of previously unconsidered spoofing methods;
• Advanced methods for standalone countermeasures;
• New evaluation protocols, datasets, and performance metrics for the assessment of spoofing and countermeasures.
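To make the handcrafted-feature side concrete, here is a minimal sketch of the classic local binary pattern (LBP) texture descriptor, one of the handcrafted features commonly used in presentation attack detection. The two images are synthetic illustrations (a textured "live" sample versus a uniform "spoof" surface); a real PAD pipeline would feed such histograms to a trained classifier rather than compare them directly.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary pattern histogram: each pixel is
    encoded by which of its neighbours are >= the centre value."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neighbour >= center).astype(np.uint8) << bit)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(2)
# Illustrative data only: rich random texture vs. a flat replica surface.
live = rng.integers(0, 256, (64, 64)).astype(np.uint8)
spoof = np.full((64, 64), 128, dtype=np.uint8)

h_live, h_spoof = lbp_histogram(live), lbp_histogram(spoof)
# Chi-square-style distance between the two descriptors.
dist = 0.5 * np.sum((h_live - h_spoof) ** 2 / (h_live + h_spoof + 1e-12))
print(float(dist))
```

The deep learning alternative discussed above would replace `lbp_histogram` with features learned end-to-end by a CNN, trading the designer's domain knowledge for training data.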
Last updated by Dou Sun in 2020-01-04
Special Issue on Deep Learning for Precise and Efficient Object Detection
Submission Date: 2020-12-31

Object detection is one of the most challenging and important tasks in computer vision and is widely used in applications such as autonomous vehicles, biometrics, video surveillance, and human-machine interaction. In the past five years, significant success has been achieved through deep learning, especially deep convolutional neural networks. Advanced object detection methods typically fall into one-stage, two-stage, and anchor-free categories. Nevertheless, performance in both accuracy and efficiency is far from satisfying. On the one hand, the average precision of state-of-the-art object detectors is still low (merely about 40% on the COCO dataset), and performance is even worse for small and occluded objects. On the other hand, obtaining high precision comes at the cost of low detection speed, and a satisfying trade-off between precision and speed remains challenging. Much effort is therefore needed to markedly improve object detection in both precision and efficiency. This special issue will publish papers presenting state-of-the-art methods for the challenging problems of object detection within the deep learning framework. We invite authors to submit manuscripts that are highly relevant to the topics of this special issue and have not been published before.

Topics of interest include, but are not limited to:

• Anchor-based and anchor-free object detection;
• Detecting small or occluded objects;
• Context and attention mechanisms for object detection;
• Fast object detection algorithms;
• New backbones for object detection;
• Architecture search for object detection;
• 3D object detection;
• Object detection in challenging conditions;
• Handling scale problems in object detection;
• Improving localization accuracy;
• Fusion of point clouds and images for object detection;
• Relationships between object detection and other computer vision tasks;
• Large-scale datasets for object detection.
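As a concrete example of one building block behind the precision/speed trade-off mentioned above, here is a minimal sketch (not any specific detector's implementation) of intersection-over-union and greedy non-maximum suppression, the post-processing step shared by one-stage, two-stage, and anchor-free detectors:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop remaining boxes overlapping it above `thresh`, and repeat."""
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

# Two heavily overlapping detections of one object, plus a distant one.
boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # -> [0, 2]: the overlapping pair collapses to one
```

IoU also underlies the COCO average-precision figure quoted above, where detections are counted as correct only above a given IoU threshold with a ground-truth box.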
Last updated by Dou Sun in 2019-11-24
Special Issue on Multi-view Representation Learning and Multi-modal Information Representation
Submission Date: 2021-03-31

In recent decades, with the rapid development of information and computer technology, many research fields have transformed from data-poor to increasingly data-rich. Much of this data is collected from multiple information sources and observed from various views. For example, a person can be identified by fingerprint, face, signature, or iris, with information obtained from multiple sources; an object can be represented by multiple views, which can be seen as different feature subsets of the image; news on the internet is reported through a combination of text, images, and videos. More and more information is thus represented by multi-view or multi-modal data. To overcome the limitations of a single-view or single-modal representation, different views and modalities can provide complementary information to each other and comprehensively characterize the data. Multi-view representation learning and multi-modal information representation have therefore attracted widespread attention in diverse applications. The main challenge is how to effectively exploit the consistency and complementarity of different views and modalities to improve multi-view learning performance. The goal of this special issue in Pattern Recognition Letters is to collect high-quality articles on developments, trends, and research solutions in multi-view representation learning and multi-modal information representation across a range of applications.

The topics of interest include, but are not limited to:

• Feature learning techniques (feature selection/reduction/fusion, subspace learning, sparse coding, etc.) for multi-view data;
• Multi-view data based real-world applications, e.g., object detection/tracking, image segmentation, video understanding/categorization, scene understanding, action recognition, classification/clustering tasks, etc.;
• Advanced deep learning techniques for multi-view data learning and understanding;
• Structured/semi-structured multi-view data learning (e.g., one-shot learning, zero-shot learning, supervised learning, and semi-/unsupervised learning);
• Multi-view missing data completion;
• Multi-modal information retrieval and classification;
• Large-scale multi-view data learning and understanding;
• Multi-task/transfer learning for multi-view data understanding;
• Multi-modal data based medical applications (diagnosis, reconstruction, segmentation, registration, etc.);
• Multi-modal data based medical image analysis with advanced deep learning techniques;
• Multi-modal data based remote sensing image analysis;
• Survey papers on multi-view representation learning and understanding;
• New benchmark dataset collections for multi-view data learning.
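As a minimal illustration of the complementarity idea (using synthetic two-view data, not any specific method from the literature), early feature-level fusion by z-scoring each view and concatenating them can only increase class-centroid separation relative to either single view, since distances in the concatenated space combine both views' contributions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic two-view data: 6 samples, 2 classes; each view is noisy alone.
labels = np.array([0, 0, 0, 1, 1, 1])
view1 = labels[:, None] * 1.0 + rng.normal(0, 2.0, (6, 4))
view2 = labels[:, None] * 1.0 + rng.normal(0, 2.0, (6, 4))

def zscore(x):
    """Standardize each feature column to zero mean, unit variance."""
    return (x - x.mean(0)) / (x.std(0) + 1e-12)

def separation(feats, labels):
    """Euclidean distance between the two class centroids."""
    return np.linalg.norm(feats[labels == 0].mean(0) - feats[labels == 1].mean(0))

# Early fusion: normalize each view, then concatenate along the feature axis.
fused = np.hstack([zscore(view1), zscore(view2)])

sep1 = separation(zscore(view1), labels)
sep2 = separation(zscore(view2), labels)
sep_fused = separation(fused, labels)
print(sep_fused >= max(sep1, sep2))  # fused separation is sqrt(sep1^2 + sep2^2)
```

Simple concatenation ignores the consistency between views, which is exactly why the subspace-learning, fusion, and deep multi-view methods listed above go further.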
Last updated by Dou Sun in 2020-03-27
Related Journals
CCF | Full Name | Impact Factor | Publisher | ISSN
b | Pattern Recognition | 5.898 | Elsevier | 0031-3203
c | International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems | | World Scientific | 0218-4885
c | Intelligent Data Analysis | 0.691 | IOS Press | 1088-467X
 | Cognitive Psychology | 3.746 | Elsevier | 0010-0285
 | Network Security | | Elsevier | 1353-4858
 | Telecommunication Systems | 1.027 | Springer | 1018-4864
 | Journal of Artificial Societies and Social Simulation | | University of Surrey | 1460-7425
 | Journal of King Saud University - Engineering Sciences | | King Saud University | 1018-3639
 | International Journal of Reliability, Quality and Safety Engineering | | World Scientific | 0218-5393
c | IEEE Geoscience and Remote Sensing Letters | 1.56 | IEEE | 1545-598X
Related Conferences
CCF | CORE | QUALIS | Short | Full Name | Submission | Notification | Conference
c | a | b1 | ICONIP | International Conference on Neural Information Processing | 2020-06-01 | 2020-08-15 | 2020-11-18
 | | | SIGL | International Conference on Signal and Image Processing | 2019-11-23 | 2019-12-26 | 2020-01-25
 | | | ACC | American Control Conference | 2019-09-23 | 2020-01-31 | 2020-07-01
 | | | FPS | International Symposium on Foundations & Practice of Security | 2018-09-15 | 2018-10-15 | 2018-11-13
 | | | BioMED | International Conference on Biomedical Engineering | 2012-10-29 | 2012-11-15 | 2013-02-13
 | | | ICDMML | International Conference on Data Mining and Machine Learning | 2018-12-30 | 2019-01-15 | 2019-04-29
 | | | ICBDSDE | International Conference on Big Data and Smart Digital Environment | 2018-09-30 | 2018-10-20 | 2018-11-29
 | | | CCCIOT | International Conference on Cloud Computing and IOT | 2020-03-28 | 2020-04-10 | 2020-04-25
 | | | ICAECT | IEEE International Conference on Advances in Electrical, Computing, Communications and Sustainable Technologies | 2020-06-20 | 2020-08-01 | 2020-10-22
 | | | FCST' | IEEE International Symposium on Future Cyber Security Technologies | 2018-08-07 | 2018-08-25 | 2018-10-15