Journal Information
Journal of Parallel and Distributed Computing

Call For Papers
The Journal of Parallel and Distributed Computing (JPDC) is directed to researchers, scientists, engineers, educators, managers, programmers, and users of computers who have particular interests in parallel processing and/or distributed computing. The goal of the journal is to publish in a timely manner original research, critical review articles, and relevant survey papers on the theory, design, implementation, evaluation, programming, and applications of parallel and/or distributed computing systems. The journal provides an effective forum for communication among researchers and practitioners from various scientific areas working in a wide variety of problem areas, sharing a fundamental common interest in improving the ability of parallel and distributed computer systems to solve increasing numbers of difficult and complex problems as quickly and as efficiently as possible.

The scope of the journal includes (but is not restricted to) the following topics as they relate to parallel and/or distributed computing:

• Theory of parallel and distributed computing
• Parallel algorithms and their implementation
• Innovative computer architectures
• Shared-memory multiprocessors
• Peer-to-peer systems
• Distributed sensor networks
• Pervasive computing
• Optical computing
• Software tools and environments
• Languages, compilers, and operating systems
• Fault-tolerant computing
• Applications and performance analysis
• Bioinformatics
• Cyber trust and security
• Parallel programming
• Grid computing
Last updated by Xin Yao on 2017-08-21
Special Issues
Special Issue on Cloud-of-Things and Edge Computing: Recent Advances and Future Trends
Submission Date: 2018-01-30

In recent years, the Cloud-assisted Internet of Things (Cloud-of-Things, or CoT for short) has emerged as a revolutionary paradigm that enables intelligent, self-configuring (smart) IoT devices and sensors to be connected to the cloud through the Internet. This new paradigm enables resource-constrained IoT devices to benefit from the cloud's powerful, scalable high-performance computing and massive storage infrastructure for real-time processing and storage of IoT data, as well as analysis of the processed information in context using inherently complex models. At the same time, the cloud can benefit from IoT by allowing its users to build applications that use and control these interconnected smart IoT objects through software services running on cloud infrastructure. The CoT paradigm can thus stimulate the development of innovative and novel applications in areas such as smart cities, smart homes, smart grids, smart agriculture, smart transportation, and smart healthcare, improving many aspects of people's lives. However, the CoT paradigm currently faces increasing difficulty in handling the Big Data that IoT generates in these application use cases. As billions of previously unconnected devices now generate more than two exabytes of data each day, it is challenging to ensure low latency and network bandwidth consumption, optimal utilization of computational resources, and scalability and energy efficiency of IoT devices while moving all data to the cloud. Hence, this centralized CoT model is undergoing a paradigm shift towards a decentralized model termed edge computing, which allows data computation, storage, and service supply to be moved from the cloud to local edge devices such as smartphones, smart gateways or routers, and local PCs that can offer computing and storage capabilities on a smaller scale in real time.
Edge computing pushes data storage, computation, and control closer to the data source(s), rather than performing them in a centralized local server as in the case of Fog computing; it therefore enables each edge device to determine what information should be stored or processed locally and what needs to be sent to the cloud for further use. Edge computing thus complements the CoT paradigm with high scalability, low delay, location awareness, and real-time use of local client computing capabilities. While researchers and practitioners have been making progress in edge computing, several issues still need to be addressed for the CoT paradigm. These include: novel network architectures and middleware platforms for the edge and CoT paradigm that exploit emerging technologies such as 5G wireless networks and semantic computing; edge analytics for Big Data; novel security and privacy methods; embedding social intelligence into edge nodes to host CoT applications; and context-aware service management on the edge with effective quality-of-service (QoS) support, among other issues.
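The local-versus-cloud decision described above can be illustrated with a minimal sketch. The function, its thresholds, and its parameters are purely hypothetical illustrations (they are not prescribed by the CFP): an edge node keeps latency-sensitive or small workloads local and forwards bulk data to the cloud.

```python
# Hypothetical edge-node dispatch rule: keep latency-sensitive or small
# workloads at the edge, forward the rest to the cloud for heavy analytics.
# All thresholds are illustrative assumptions.

def dispatch(latency_budget_ms: int, size_kb: int, local_capacity_kb: int = 512) -> str:
    """Return 'edge' or 'cloud' for a single sensor workload."""
    if latency_budget_ms < 100:        # real-time constraint: must stay at the edge
        return "edge"
    if size_kb <= local_capacity_kb:   # small enough to process locally
        return "edge"
    return "cloud"                     # bulk data: ship to the cloud

print(dispatch(latency_budget_ms=50, size_kb=2))       # latency-critical reading
print(dispatch(latency_budget_ms=5000, size_kb=4096))  # large batch for analytics
```

In a real deployment this rule would also weigh bandwidth cost, device energy budget, and privacy constraints, which is exactly the kind of context-aware service management the special issue solicits.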
Last updated by Dou Sun on 2017-06-18
Special Issue on Transmissible Cyber Threats in Distributed Systems
Submission Date: 2018-01-31

Cyberspace faces a range of threats that can spread rapidly through various kinds of distributed systems. Examples include rumors spreading in social media, computer viruses on the Internet, and unexpected failures causing rolling blackouts in smart grids. These threats fall into the category of transmissible cyber threats because of their spreading nature in distributed environments. Each year, transmissible cyber threats cause tremendous financial losses and damage to users of distributed systems. Malware, for example, is a type of transmissible cyber threat that has become one of the most concerning security issues in cyberspace. In another example, a fake Associated Press (AP) news release (i.e., a rumor spreading on Twitter) about a bomb exploding in the White House in 2013 led to a 1% drop in the Standard & Poor's 500 index, temporarily wiping out US$136 billion. Researchers have recently found that the average transmissible cyber-attack in distributed systems such as the Internet took 31 days to contain, at a cost of US$639,462, and these costs will climb as transmissible cyber threats grow more sophisticated and take longer to resolve. In the distributed systems field, transmissible cyber threats have been extensively studied and have received considerable attention in recent years, as witnessed by the number of related publications. Among these works, modelling and restraining are the core research issues. On one hand, advances in these techniques will benefit the development of defense technologies against transmissible cyber threats in various distributed systems. For example, researchers use propagation models to develop robust techniques that trace threat diffusion sources and disclose potential diffusion paths. Modelling and restraining techniques can also help develop risk assessment methods for exposing compromised Internet users.
In addition, modelling will contribute to developing interactive algorithms that capture threat dynamics and examine the effectiveness of defense strategies. On the other hand, because of the scale, complexity, and heterogeneity of distributed systems, a number of technical and practical challenges in this field have not been addressed. For example, current modelling techniques have not been widely accepted as common practice because of their limitations in capturing key attributes of real-world propagation dynamics; attributes such as time zones and geographical factors greatly influence modelling accuracy and restraining efficiency. We have therefore organized this special issue to reduce the gap between practice and academic research. This special issue will bring together researchers in the distributed systems field to publish state-of-the-art research findings on transmissible cyber threats, particularly focusing on propagation modelling, threat restraining, and their theoretical and applied techniques. This special issue focuses on, but is certainly not limited to, the following distributed systems research:

• Cybersecurity dynamics theory in distributed systems
• Cybersecurity dynamics simulation in distributed systems
• Cybersecurity dynamics modelling technologies in distributed systems
• Malware propagation laws and control strategies
• Rumour propagation laws and control strategies
• Positive/negative information propagation laws in online social networks
• Rumour promotion/control strategies in online social networks
• Influential node identification algorithms in large-scale networks
• Virus source identification algorithms in various distributed systems
• Virus propagation theory based on real data
• Applications of virus propagation theory in various distributed systems
• Risk assessment based on cybersecurity dynamics
• Source identification based on cybersecurity dynamics
• Cybersecurity dynamics in social communities
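As a concrete illustration of the propagation modelling this issue targets, the classic SIR (Susceptible-Infected-Recovered) epidemic model is often adapted to malware and rumor spread. The following is a minimal discrete-time sketch under a homogeneous-mixing assumption; the infection rate `beta` and recovery rate `gamma` are hypothetical values, not parameters fitted to any real incident.

```python
# Minimal discrete-time SIR propagation sketch for a population of n hosts.
# beta (infection rate) and gamma (recovery/patch rate) are hypothetical.

def simulate_sir(n, i0, beta, gamma, steps):
    """Return a list of (s, i, r) population fractions over time."""
    s, i, r = (n - i0) / n, i0 / n, 0.0
    history = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i      # homogeneous-mixing contact assumption
        new_rec = gamma * i         # infected hosts recover / get patched
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

hist = simulate_sir(n=10_000, i0=10, beta=0.4, gamma=0.1, steps=100)
s, i, r = hist[-1]
print(f"after 100 steps: S={s:.3f} I={i:.3f} R={r:.3f}")
```

Research in this area typically replaces the homogeneous-mixing assumption with real network topologies, time zones, and geographical factors, which is precisely where the modelling accuracy challenges mentioned above arise.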
Last updated by Dou Sun on 2017-08-05
Special Issue on Exascale Applications and Software 2018
Submission Date: 2018-07-31

From large-scale problems such as understanding the structure of the universe and simulating the weather, to the nanoscale modelling required in designing pharmaceuticals, high performance computing enables many areas of modern science and engineering. Both simulation and the rapidly expanding field of data science are driving the HPC community towards its next milestone: exascale. An exascale computing facility is one capable of performing 10^18 floating point operations per second; such a system will likely be in production by 2025 and will afford new insights and enable scientific discoveries that have hitherto been unreachable. While the potential system architectures are still evolving, one can safely assume that they will be largely based on complex arrangements of processing units, likely with much greater heterogeneity than we are used to today, and with similarly complex deep memory hierarchies. On the software side, the tools, libraries, and runtimes will have to adapt to this environment, supporting millions of threads of execution and addressing concerns around reliability and the scalability of I/O systems, which will likely have to move on from the now-standard POSIX model. The challenge of designing and implementing applications that will efficiently use these platforms involves developers from all levels of the software stack: applications, libraries, programming models, and system software. The aim of this special issue is to bring together developers from many different application fields and levels of the software stack to present work on the problems encountered and progress made on the road to exascale. Topics of interest include, but are not limited to:

• enabling and optimising applications for exascale in any area;
• developing and enhancing algorithms for exascale systems;
• aiding the exploitation of massively parallel systems through tools, e.g. performance analysis, debugging, development environments;
• programming models and libraries for exascale;
• exascale runtimes and system software;
• evaluating best practice in HPC concerning large-scale facilities and application execution;
• novel uses of current-generation and future exascale systems;
• new hardware technologies and their exploitation to solve exascale challenges.
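The "millions of threads" claim above follows from simple arithmetic on the exascale definition. A back-of-the-envelope sketch, assuming a purely illustrative sustained rate of 10 GFLOP/s per core:

```python
# Back-of-the-envelope scale of an exascale machine.
# The per-core rate is a hypothetical assumption for illustration only.
EXAFLOP = 10**18          # floating point operations per second (definition)
PER_CORE = 10 * 10**9     # assume ~10 GFLOP/s sustained per core

cores_needed = EXAFLOP // PER_CORE
print(f"{cores_needed:,} cores")  # 100,000,000 cores -> O(10^8) execution streams
```

Even with this optimistic per-core rate, on the order of a hundred million concurrent execution streams are required, which is why runtimes, I/O, and reliability at this thread count dominate the software challenges listed above.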
Last updated by Dou Sun on 2017-10-16
Related Conferences
CCF | CORE | QUALIS | Short     | Full Name                                                                          | Submission | Notification | Conference
b   | a    | a1     | HPDC      | International ACM Symposium on High-Performance Parallel and Distributed Computing | 2018-01-17 | 2018-03-29   | 2018-06-11
    |      | b2     | FoIKS     | International Symposium on Foundations of Information and Knowledge Systems        | 2015-10-25 | 2015-12-04   | 2016-03-07
    |      |        | MIWAI     | Multi-Disciplinary International Workshop on Artificial Intelligence               | 2016-08-22 | 2016-09-09   | 2016-12-07
b   |      |        | FUN       | International Conference on Fun with Algorithms                                    | 2012-01-23 | 2012-02-20   | 2012-06-04
c   | b    | b1     | SEC       | International Conference on ICT Systems Security and Privacy Protection            | 2017-01-09 | 2017-02-24   | 2017-05-29
b   | a    |        | HOT CHIPS | Symposium on High Performance Chips                                                | 2017-04-07 | 2017-05-01   | 2017-08-20
b   |      | a2     | ICCD      | International Conference on Computer Design                                        | 2017-06-16 | 2017-09-01   | 2017-11-05
c   |      | a2     | ASP-DAC   | Asia and South Pacific Design Automation Conference                                | 2016-07-08 | 2016-09-12   | 2017-01-16
c   |      | b3     | ParCo     | International Conference on Parallel Computing                                     | 2015-02-28 | 2015-05-15   | 2015-09-01
b   | a    | a1     | ICDCS     | International Conference on Distributed Computing Systems                          | 2017-12-05 | 2018-03-28   | 2018-07-02