Journal Information
Pattern Recognition Letters (PRL)

Call For Papers
Pattern Recognition Letters aims at rapid publication of concise articles of broad interest in pattern recognition.
Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition. Examples include:

• Statistical, structural, syntactic pattern recognition;
• Neural networks, machine learning, data mining;
• Discrete geometry, algebraic, graph-based techniques for pattern recognition;
• Signal analysis, image coding and processing, shape and texture analysis;
• Computer vision, robotics, remote sensing;
• Document processing, text and graphics recognition, digital libraries;
• Speech recognition, music analysis, multimedia systems;
• Natural language analysis, information retrieval;
• Biometrics, biomedical pattern analysis and information systems;
• Scientific, engineering, social and economic applications of pattern recognition;
• Special hardware architectures, software packages for pattern recognition.

We invite contributions as research reports or commentaries.

Research reports should be concise summaries of methodological inventions and findings, with strong potential for wide application.
Alternatively, they can describe significant and novel applications of an established technique that are of high reference value to the same application area and other similar areas.

Commentaries can be lecture notes, subject reviews, reports on a conference, or debates on critical issues of wide interest.

To serve the interests of a diverse readership, the introduction should provide a concise summary of the background of the work in accepted pattern recognition terminology, state the unique contributions, and discuss the broader impact of the work outside the immediate subject area. All contributions are reviewed on the basis of scientific merit and breadth of potential interest.
Last updated by Dou Sun in 2019-11-24
Special Issues
Special Issue on Pattern Recognition-driven User Experiences (PRUE)
Submission Date: 2021-02-10

Games, search engines, e-commerce, infotainment, and many other services allow users a high degree of personalization; this evolution creates new needs, changes habits, and raises expectations. At the same time, the availability of new instruments is noticeably changing the kind of experience users expect. The strong immersivity and high degree of realism of VR, MR, and AR are freeing the UX from classic screen borders, with voice and gestures adding naturalness to the experience and keeping users’ sense of involvement and immersion high. IoT ecosystems, smartwatches, digital assistants, and other devices may provide precious hints about users and usage contexts when supported by the application of pattern recognition theories and techniques. Aiming to improve the efficiency, intelligence, and delight perceived by users, Pattern Recognition-driven User Experience leverages intelligent computing to dynamically adapt appearance and behaviour through automatic decision-making. Pattern recognition offers the instruments to detect and “understand” context, users’ signals, intents, and emotions, and provides a set of disruptive methodologies for effective personalization of the experience. The purpose of this Special Issue is to investigate how concepts and theories related to pattern recognition can be applied to improve or create a fully novel user experience, new opportunities, and open problems. The Special Issue aims at collecting and presenting new advances in the application of pattern recognition to (but not limited to):

• Emotion recognition and adaptive applications;
• Speech recognition;
• Pattern recognition for virtual, augmented, and mixed reality;
• Applications to mobile and embedded systems;
• Natural language applications;
• Design and evaluation of innovative interactive systems;
• Ambient intelligence;
• Personalization of user experience.
Last updated by Dou Sun in 2020-06-25
Special Issue on Pattern Recognition and Machine Learning in Solar Energy
Submission Date: 2021-02-20

The integration of machine learning intelligence and computer vision technologies has become a topic of increasing interest for researchers and developers in academia and industry worldwide. Pattern recognition is defined as the classification of data based on knowledge gained from statistical information extracted in the form of patterns. This special issue focuses on pattern recognition and machine learning in solar energy. It is predictable that machine learning intelligence will be the main approach of the next generation of computer vision research in power and energy system applications. The explosive number of machine learning algorithms and the increasing computational power of computers have significantly extended the number of potential applications for computer vision and energy systems, and have also brought new challenges to the vision community. Authors are requested to submit original, unpublished research manuscripts focused on the latest findings in machine learning intelligence and computer vision in power and energy. Topics include, but are not limited to:

• Content-based image retrieval in solar energy;
• Binocular light-field imaging system analysis;
• Analytical hierarchical process multi-criteria decision-making systems;
• Machine learning adoption in solar energy applications;
• Deep network architectures for multi-modal image super-resolution;
• Layer-wise optimization algorithms for solar energy;
• Solutions to single-image super-resolution;
• Smart vision-based robotic manipulation.
Last updated by Dou Sun in 2020-06-07
Special Issue on Multi-view Representation Learning and Multi-modal Information Representation
Submission Date: 2021-03-31

In recent decades, with the rapid development of information and computer technology, many fields have transformed from data-poor areas into increasingly data-rich fields of research. Huge amounts of data are often collected from multiple information sources and observed from various views. For example, a person can be identified by fingerprint, face, signature, or iris, with information obtained from multiple sources; an object can be represented from multiple views, which can be seen as different feature subsets of an image; news can be reported by a combination of texts, images, and videos on the internet. More and more information is represented by multi-view or multi-modal data. To overcome the limitations of single-view or single-modal data representation, different views and modalities can be leveraged to provide complementary information to each other and to comprehensively characterize the data. Thus, multi-view representation learning and multi-modal information representation have attracted widespread attention in diverse applications. The main challenge is how to effectively exploit the consistency and complementarity of different views and modalities to improve multi-view learning performance. The goal of this special issue in Pattern Recognition Letters is to collect high-quality articles focusing on developments, trends, and research solutions in multi-view representation learning and multi-modal information representation across a range of applications. Topics of interest include, but are not limited to:

• Feature learning techniques (feature selection/reduction/fusion, subspace learning, sparse coding, etc.) for multi-view data;
• Multi-view data based real-world applications, e.g., object detection/tracking, image segmentation, video understanding/categorization, scene understanding, action recognition, and classification/clustering tasks;
• Advanced deep learning techniques for multi-view data learning and understanding;
• Structured/semi-structured multi-view data learning (e.g., one-shot learning, zero-shot learning, supervised learning, and semi-/unsupervised learning);
• Multi-view missing data completion;
• Multi-modal information retrieval and classification;
• Large-scale multi-view data learning and understanding;
• Multi-task/transfer learning for multi-view data understanding;
• Multi-modal data based medical applications (diagnosis, reconstruction, segmentation, registration, etc.);
• Multi-modal data based medical image analysis with advanced deep learning techniques;
• Multi-modal data based remote sensing image analysis;
• Survey papers on multi-view representation learning and understanding;
• New benchmark dataset collection for multi-view data learning.
Last updated by Dou Sun in 2020-03-27
Submission Date: 2021-04-01

COVID-19, caused by the SARS-CoV-2 virus, was detected in December 2019 and declared a global pandemic on 11 March 2020 by the WHO. Artificial Intelligence (AI) is a highly effective method for fighting the COVID-19 pandemic. For present purposes, AI can be described as Machine Learning (ML), Natural Language Processing (NLP), and computer vision applications that teach computers to use large data-based models for pattern recognition, description, and prediction. Such functions can help identify (diagnose), forecast, and describe (treat) COVID-19 infections and aid in controlling socioeconomic impacts. Since the onset of the pandemic, there has been a rush to use and test AI and other data analytics techniques for these purposes. The risk of the epidemic in terms of lives and economic loss is terrible, and much confusion has engulfed predictions of how bad it would be and how effective non-pharmaceutical and pharmaceutical solutions would be. A worthy goal is to strengthen AI, one of the most popular data analytics tools developed in the past decade, to reduce these uncertainties, and data scientists have been willing to take up the opportunity. In AI, machine learning and its subset, deep learning, are employed in various applications to solve problems that arise from uncertainty, using data collected from the history of occurrences of an event. Most machine learning and deep learning algorithms are trained to address the supervised learning problem, where the algorithms know the prediction requirement. The potential of unsupervised learning methods, on the other hand, is quite high, as is their ability to explore new possibilities of outcome. In general, supervised learning methods are bounded by biases, in which the set of rules is fixed by dos and don’ts that prohibit the consideration of other possibilities. In addition, considerable effort, manual work, and time are required to label data for the supervised learning process when labels are not available. The primary objective of this special issue is to bring the ability of unsupervised learning into deep learning methodologies to find a solution to COVID-19, and to improve the behaviour of deep learning methods with the quality of clustering algorithms, so that unsupervised learning methodology can be implemented in deep learning algorithms for efficient data classification. The focus of this special issue is to provide a platform and opportunity for researchers to find solutions, using AI with self-learning methodologies, for the current pandemic and future hazards like it that humanity has to face. Topics may include, but are not limited to, the following:

• Intelligent signal computing based on deep embedded clustering;
• Evolutionary approaches to signal processing and their applications;
• Architectures for real-time sensing and intelligent processing;
• Auto-encoders and restricted Boltzmann machines for signal classification;
• Real-time signal processing based on deep embedded clustering;
• Parallel and distributed algorithm design and implementation in signal sensing;
• Analytics for multi-dimensional data;
• Intelligent computing on signals for data analysis;
• Real-time remote sensing signals, such as hyperspectral signal classification, content-based signal indexing and retrieval, and monitoring of natural …;
• Selection of suitable unsupervised learning methodologies;
• Selection of suitable and efficient deep learning methodologies;
• Selection of diverse datasets and problems to test and validate the research outcomes;
• Exploration of the optimal deep learning methodology for data classification.
Last updated by Dou Sun in 2020-07-30
Special Issue on Real-time Computer Vision for Accident Prevention and Detection
Submission Date: 2021-05-20

The rapid increase in population has predominantly increased the demand for and usage of motorized vehicles in all areas. This increase in motor vehicle usage has substantially increased the rate of road accidents in the recent decade. Furthermore, injuries, disabilities, and deaths due to fatal road accidents have been increasing every year despite the safety measures introduced for public and private transportation systems. Congestion of vehicles, drivers under alcohol or drug influence, distracted driving, street racing, faulty design of cars or traffic lights, tailgating, running red lights and stop signs, improper turns, and driving in the wrong direction are some of the real causes of accidents across the globe. Many advanced surveillance systems have been implemented for road safety, but the prevention of accidents remains an open problem. The existing sophisticated vehicle monitoring and traffic surveillance systems should be used to prevent accidents from occurring. However, real-time observation is difficult with an enormous amount of surveillance data running continuously. With emerging trends in the field of information and computer science, the use of innovative technologies in real time can be helpful for accident prevention and detection. Computer vision is the technology designed to imitate how the human visual system works. Digital image data from multiple surveillance systems are acquired in real time and analyzed, and any incidents such as speeding, reckless driving, or accidents are identified and reported by the system concurrently. Image classification, object detection, object tracking, semantic segmentation, and instance segmentation are some of the computer vision-based techniques with advanced deep learning approaches that can be used in real-time accident detection and prevention processes. 
Similarly, using neural networks, many anomalies can be detected in the movement of vehicles using historical data, which can also be used in the prevention of accidents. The recent developments in the use of deep learning approaches in visual recognition can be seen as a significant contribution to advanced computer vision research. Moreover, the assistance of computer vision in the surveillance of traffic for accident prevention and detection in real time would be even more significant. This special issue on “Real-time computer vision for accident prevention and detection” invites contributions on topics that include, but are not limited to, the following:

• Theoretical analysis of computer vision-based visual recognition for fatal accidents;
• Unsupervised, semi-supervised, and self-supervised feature learning of transportation accidents;
• Real-time applications of computer vision and image analysis in traffic congestion;
• Deep vision-based learning for accident and traffic collision reconstruction;
• The future of computer vision in road safety and intelligent traffic;
• Sensors and early vision for post-accident and injury phases;
• Computer vision for fatigue detection and management technologies;
• Applications of neural networks in transportation strategy planning and instinctive decision making;
• Advanced visual learning methods for risk-based accident prevention;
• Computer vision algorithms and methodologies for pre-crash analysis.
Last updated by Dou Sun in 2020-07-30
Special Issue on Application of Pattern Recognition in Digital world: Security, Privacy and Reliability (APRDW)
Submission Date: 2021-06-20

Digital technology plays a vital role in humans’ day-to-day activity. It has made systems simpler and more powerful, and plays a major role in social networks, communication, digital transactions, and more. The rapid development of digital technology also has downsides for the integrity of data, data privacy, and confidentiality, so there is a need for security, privacy, and reliability in digital technology. Pattern recognition is computerized recognition that regulates the data in digital technology and plays a vital role in the digital world. A pattern can either be seen physically or be observed mathematically by applying algorithms. Pattern recognition techniques have been categorized as statistical techniques, structural techniques, template matching, neural network approaches, fuzzy models, and hybrid models. A common platform is always needed to share the views of different researchers on the complicated facets of pattern recognition in the areas of security, privacy, and reliability in digital technology. This special issue explores novel concepts and practices with the long-term goal of a fully-automated lifestyle fostered by the technological advances of pattern recognition in a wide spectrum of applications. We invite authors from both industry and academia to submit original research and review articles that cover security, privacy, and reliability in digital technology using pattern recognition techniques. Topics include:

• Models, algorithms, and designs for reliability in digital media;
• Network-assisted rate adaptation for reliability in digital media;
• Reliability-based privacy in digital media;
• Reliability and security in digital transactions;
• Malware and virus detection for reliable digital media analytics;
• Development of software tools and techniques for integrity of data, data privacy, and confidentiality.
Last updated by Dou Sun in 2020-08-11
Special Issue on Few-shot Learning for Human-machine Interactions (FSL-HMI)
Submission Date: 2021-07-20

The widespread use of Web technologies, mobile technologies, and cloud computing has paved the way for a new surge of ubiquitous data available for business, human, and societal research. Nowadays, people interact with the world via various Information and Communications Technology (ICT) channels, generating a variety of data that contain valuable insights into business opportunities, personal decisions, and public policies. Machine learning has become a common component of applications in various scenarios, e.g., e-commerce, health, transport, security and forensics, sustainable resource management, and emergency and crisis management, supporting intelligent analytics, predictions, and decision-making. It has proven highly successful in data-intensive applications and has revolutionized human-machine interactions in many ways in modern society. Essential to machine learning is dealing with small datasets, or few-shot learning, which aims to develop learning models that can generalize rapidly from a few examples. Though challenging, few-shot learning has gained increasing popularity since its inception, and has mostly focused on general machine learning contexts. Meanwhile, traditional human-machine interaction research has primarily focused on interaction design and local adaptation for user-friendliness, ergonomics, or efficiency. Emerging topics such as brain-computer interfaces, multimodal user interfaces, and mobile personal assistants as new means of human-machine interaction are still in their infancy. Few-shot learning is especially important for such new types of human-machine interaction because of the difficulty of acquiring examples with supervised information due to privacy, safety, expense, or ethical concerns. Although the related research is relatively new, it promises a fertile ground for research and innovation. 
This special issue aims at gathering recent advances and novel contributions from academic researchers and industry practitioners in the vibrant topic of few-shot learning to achieve the full potential of human-machine interaction applications. It calls for innovative methodological, algorithmic, and computational methods that incorporate the most recent advances in data analytics, artificial intelligence, and interaction research to solve theoretical and practical problems. It also requires reexamining existing architectures, models, and techniques in machine learning and deep neural networks to address the challenges and advance state-of-the-art knowledge in this area. Topics of interest include, but are not limited to:

• Novel few-shot, one-shot, or zero-shot learning models and algorithms for sense-making of humans, systems, and their interactions;
• Conceptual frameworks and computational designs for few-shot learning or human-centric computing;
• Methods that improve the learnability, efficiency, or usability of systems that interact with humans;
• Techniques to address small datasets, e.g., data imputation/augmentation, generative models, reinforcement learning, and active learning;
• Novel recommender systems in HCI-related aspects;
• Trust, security/privacy, and performance evaluations for few-shot learning;
• Interface or interaction designs based on few-shot examples that enable humans to interact with computers in novel ways;
• Other technologies and applications that advocate a better understanding of, or exploit value from, human-machine interactions.
Last updated by Dou Sun in 2020-11-03
Special Issue on Mobile and Wearable Biometrics (VSI:MWB)
Submission Date: 2021-09-20

Mobile devices such as smartphones and tablets are nowadays employed daily by more than 3 billion people, with an expected further worldwide penetration of up to 5 billion users by 2025. Among the reasons for such astonishing growth, from the early years of mobile communications to the present day, is the fact that modern mobile devices offer the possibility of performing many tasks and accessing several services, such as taking pictures or performing on-line payments, with extreme ease of use. As a matter of fact, the share of internet users making mobile online payments is above 30% in most regions of the world. As the next step in this technological revolution, wearable devices such as smart glasses, chestbands, and wristbands are also rapidly becoming widespread. Thanks to their ability to capture physiological signals like those related to heart rate, a vast number of applications are being developed for wearable platforms, ranging from activity tracking and healthcare to social sharing in the context of the Internet of Things. It should be observed that most of the services accessed through mobile and wearable devices are typically used by providing sensitive and valuable data, such as passwords, credit card numbers, and so forth. Furthermore, the information commonly captured by the sensors with which these devices are equipped, and stored within them, is highly personal, with consequent security and privacy issues should unauthorized subjects try to access such content. It is therefore of paramount importance to design effective and secure mechanisms for accessing these devices. In this regard, resorting to biometric recognition systems seems a natural choice. Mobile and wearable devices are in fact commonly equipped with several sensors which could be exploited to acquire discriminative traits, thus allowing authorized users to be recognized. 
Furthermore, the possibility of performing biometric recognition within mobile and wearable devices may make them handy as authenticating tokens, providing the means to perform decentralized access control by combining their capabilities with biometric solutions. Such an approach would, for instance, allow the design of reliable systems performing continuous recognition, monitoring the identity of a subject over a period of indefinite temporal extension and hence providing robustness against session hijacking, in which an intruder seizes control of an ongoing session after a successful login by a legitimate user. It is worth remarking that systems implemented for such devices should be designed taking into account the specific peculiarities of the considered scenarios. For instance, with respect to solutions dedicated to desktop systems, where physical characteristics are commonly preferred, approaches based on either behavioural or cognitive traits might be more appropriate when dealing with mobile and wearable devices. The computational complexity of the required processing may also represent a concern for systems with limited resources. This special issue therefore seeks recent and innovative developments in pattern recognition fields applied to the design of biometric recognition systems for mobile and wearable devices. Topics of interest include, for example, the analysis and processing of the discriminative information (biosignals, images) which can be captured through mobile and wearable devices, the design of hardware architectures or software packages which could be effectively implemented in such environments, and the proposal of machine learning approaches requiring limited computational resources, among others. 
The topics of the Special Issue include, but are not limited to:

• Mobile biometrics in the wild;
• Continuous biometric recognition using wearable devices;
• Sensors for wearable technology (smartwatches, smart eyewear, smart t-shirts, etc.);
• Physical and behavioural biometrics in the mobile environment;
• Cognitive biometrics for wearable devices;
• Age and aging effects in mobile biometrics;
• Machine learning with limited computational resources;
• Biometric template protection: challenges and solutions in the mobile environment;
• Usability, interfaces, and human factors;
• Hardware architectures and software for biometric recognition on mobile and wearable devices;
• Affective computing in biometric recognition.
Last updated by Dou Sun in 2020-11-03
Related Conferences
Ratings (CCF/CORE/QUALIS) | Short | Full Name | Submission | Notification | Conference
— | DMA | International Conference on Data Mining and Applications | 2021-01-16 | 2021-02-22 | 2021-03-27
bb2 | IJCNLP | International Joint Conference on Natural Language Processing | 2021-02-01 | 2021-05-05 | 2021-08-01
— | IRCDL | Italian Research Conference on Digital Libraries | 2018-10-05 | 2018-10-31 | 2019-01-31
— | ICIN | International ICIN Conference on Innovations in Clouds, Internet and Networks | 2020-11-01 | 2020-12-15 | 2021-03-01
— | ICBIP | International Conference on Biomedical Signal and Image Processing | 2021-03-30 | 2021-04-20 | 2021-08-20
cb1 | BIBE | International Conference on Bioinformatics & Bioengineering | 2015-08-30 | 2015-09-15 | 2015-11-02
— | ICICT | International Conference on Information and Computer Technologies | 2019-11-25 | 2019-12-15 | 2020-03-09
— | BRAINS | Conference on Blockchain Research & Applications for Innovative Networks and Services | 2021-03-31 | 2021-05-31 | 2021-09-27
— | VizSec | IEEE Symposium on Visualization for Cyber Security | 2018-07-22 | 2018-08-15 | 2018-10-22
— | FDG | International Conference on the Foundations of Digital Games | 2020-01-13 | 2020-03-09 | 2020-09-15