Journal Information
Journal of Parallel and Distributed Computing
http://www.journals.elsevier.com/journal-of-parallel-and-distributed-computing/
Impact Factor:
1.93
Publisher:
ELSEVIER
ISSN:
0743-7315
Viewed:
10473
Tracked:
30

Call For Papers
The Journal of Parallel and Distributed Computing (JPDC) is directed to researchers, scientists, engineers, educators, managers, programmers, and users of computers who have particular interests in parallel processing and/or distributed computing. The goal of the journal is to publish in a timely manner original research, critical review articles, and relevant survey papers on the theory, design, implementation, evaluation, programming, and applications of parallel and/or distributed computing systems. The journal provides an effective forum for communication among researchers and practitioners from various scientific areas working in a wide variety of problem areas, sharing a fundamental common interest in improving the ability of parallel and distributed computer systems to solve increasing numbers of difficult and complex problems as quickly and as efficiently as possible.

The scope of the journal includes (but is not restricted to) the following topics as they relate to parallel and/or distributed computing:

• Theory of parallel and distributed computing
• Parallel algorithms and their implementation
• Innovative computer architectures
• Shared-memory multiprocessors
• Peer-to-peer systems
• Distributed sensor networks
• Pervasive computing
• Optical computing
• Software tools and environments
• Languages, compilers, and operating systems
• Fault-tolerant computing
• Applications and performance analysis
• Bioinformatics
• Cyber trust and security
• Parallel programming
• Grid computing
Last updated by Xin Yao on 2017-08-21
Special Issues
Special Issue on High-Performance Computing in Edge Computing Networks
Submission Date: 2017-10-01

High-performance computing (HPC) applies parallelization and distribution algorithms and techniques to connected computing units so that complex tasks can be completed faster than any single unit could manage alone. Over the past two decades, HPC performance has increased exponentially. Riding on this growth, HPC continues to advance in its traditional domains of theoretical science and software development while increasingly becoming a prevalent solution for a wide range of emerging telecommunication technologies.

Edge computing networks and telecommunication technologies support information transmission over significant distances via distributed and connected communication devices. Rapid innovation in the transmission, switching, processing, analysis, and retrieval of information is vital for the success of a wide range of emerging telecommunication technologies, including connected sensors and IoT devices, smart grids, smart cities, software-defined networks, network function virtualization, data-driven cognitive networking, cyber security, and green communications. The class of computational problems that must be tackled, such as combinatorial optimization, agent-based modelling, massive data analysis, and parallel discrete-event network simulation, poses new challenges in designing and developing these emerging telecommunication technologies, and HPC is essential to addressing them. Equally relevant is the contribution that modelling and evaluation techniques (both analytical and simulation based) can make to the design and implementation of such complex hardware/software architectures, helping to manage competing requirements and to guide the decision processes that lead to an effective result.

The aim of this special issue is to explore how HPC, as a research tool, enhances emerging telecommunication technologies, and hence to present a complete panorama of state-of-the-art research on applying HPC to telecommunications. Discussions of future and emerging challenges, advances, and applications of HPC and related technologies are also of interest. It is hoped that the published research issues and solution guidelines will inspire further research in this important area of telecommunication system and algorithm design, and provide comprehensive information for researchers, ICT students, program developers, military and government organizations, and business professionals.
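The call prescribes no particular framework or language; purely as an illustrative sketch of the parallelization idea described above (all names and numbers are hypothetical), the Python fragment below splits an embarrassingly parallel workload, such as independent simulation runs, across worker processes that stand in for separate computing units.

# Minimal sketch: distribute independent CPU-bound subproblems (e.g., separate
# simulation runs or branches of a combinatorial search) over worker processes.
import math
from multiprocessing import Pool

def heavy_task(n):
    # Hypothetical placeholder for one CPU-bound subproblem.
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    inputs = [2_000_000] * 8            # eight independent subproblems (assumed sizes)
    with Pool(processes=4) as pool:     # four workers stand in for four computing units
        results = pool.map(heavy_task, inputs)
    print(sum(results))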
Last updated by Dou Sun on 2017-06-18
Special Issue on Cloud-of-Things and Edge Computing: Recent Advances and Future Trends
Submission Date: 2018-01-30

In recent years, the Cloud-assisted Internet of Things (Cloud-of-Things, or CoT for short) has emerged as a revolutionary paradigm that enables intelligent, self-configuring (smart) IoT devices and sensors to be connected with the cloud through the Internet. This new paradigm allows resource-constrained IoT devices to benefit from the cloud's powerful and scalable high-performance computing and massive storage infrastructure for real-time processing and storage of IoT data, as well as for analysis of the processed information in context using inherently complex models. At the same time, the cloud can benefit from IoT by allowing its users to build applications in which these smart IoT objects are interconnected and controlled through software services running on cloud infrastructure. The CoT paradigm can thus stimulate the development of innovative and novel applications in areas such as smart cities, smart homes, smart grids, smart agriculture, smart transportation, and smart healthcare, improving many aspects of people's lives.

However, the CoT paradigm currently faces increasing difficulty in handling the Big Data that IoT generates in these application use cases. With billions of previously unconnected devices now generating more than two exabytes of data each day, it is challenging to ensure low latency and network bandwidth consumption, optimal utilization of computational resources, scalability, and energy efficiency of IoT devices while moving all data to the cloud. Hence, this centralized CoT model is undergoing a paradigm shift towards a decentralized model termed edge computing, which allows data computing, storage, and service supply to be moved from the cloud to local edge devices such as smartphones, smart gateways or routers, and local PCs that offer computing and storage capabilities on a smaller scale in real time. Edge computing pushes data storage, computation, and control closer to the data source(s) instead of performing them in a centralized local server or device, as in the case of fog computing; each edge device therefore plays its own role in determining what information should be stored or processed locally and what needs to be sent to the cloud for further use. Edge computing thus complements the CoT paradigm with high scalability, low delay, location awareness, and the ability to use local client computing capabilities in real time.

While researchers and practitioners have been making progress in edge computing, several issues still need to be addressed for the CoT paradigm. These include novel network architectures and middleware platforms for the edge and CoT paradigms that account for emerging technologies such as 5G wireless networks and semantic computing; edge analytics for Big Data; novel security and privacy methods; social intelligence in edge nodes hosting CoT applications; and context-aware service management on the edge with effective quality-of-service (QoS) support, among other issues.
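The call does not name any concrete platform; as a rough, hypothetical sketch of the edge-side decision described above (what is processed locally versus what is forwarded to the cloud), the Python fragment below aggregates a window of sensor readings into a local digest and marks only anomalous raw values for upstream transfer. The threshold, names, and data are illustrative assumptions, not part of the call.

# Minimal sketch: an edge gateway keeps raw data local, computes a compact
# summary, and forwards only the summary plus anomalous readings to the cloud.
from statistics import mean

ANOMALY_THRESHOLD = 75.0  # hypothetical limit, e.g. degrees Celsius

def process_at_edge(readings):
    # Decide per window what stays local and what goes upstream.
    to_cloud = [r for r in readings if r > ANOMALY_THRESHOLD]   # raw anomalies only
    summary = {"count": len(readings), "mean": mean(readings)}  # local digest
    return summary, to_cloud

if __name__ == "__main__":
    sensor_window = [21.5, 22.0, 80.3, 21.8, 22.1]              # assumed readings
    summary, anomalies = process_at_edge(sensor_window)
    print("forward to cloud:", summary, anomalies)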
Last updated by Dou Sun on 2017-06-18
Special Issue on Transmissible Cyber Threats in Distributed Systems
Submission Date: 2018-01-31

Cyberspace faces a range of threats that can spread rapidly through many kinds of distributed systems. Examples include rumors spreading in social media, computer viruses on the Internet, and unexpected failures causing rolling blackouts in smart grids. These threats fall into the category of transmissible cyber threats because of their spreading nature in distributed environments. Each year, transmissible cyber threats cause tremendous financial losses and damage to users of distributed systems. Malware, for example, is a type of transmissible cyber threat that has become one of the most pressing security issues in cyberspace. In another example, a fake Associated Press (AP) news release (i.e., a rumor spreading on Twitter) about a bomb exploding in the White House in 2013 led to a 1% drop in the Standard & Poor's 500 index, temporarily wiping out US$136 billion. Researchers have recently found that containing a transmissible cyber attack in distributed systems such as the Internet took 31 days on average and cost US$639,462. These costs are already high and can grow higher as transmissible cyber threats become more sophisticated and take longer to resolve.

In the distributed systems field, transmissible cyber threats have been extensively studied and have received considerable attention in recent years, as witnessed by the number of related publications. Among these works, modelling and restraining are the core research issues. On one hand, advances in these techniques will benefit the development of defense technologies against transmissible cyber threats in various distributed systems. For example, researchers use propagation models to develop robust techniques for tracing threat diffusion sources and disclosing potential diffusion paths. Modelling and restraining techniques can also help develop risk assessment methods for exposing compromised Internet users, and modelling contributes to interactive algorithms that capture threat dynamics and examine the effectiveness of defense strategies. On the other hand, because of the scale, complexity, and heterogeneity of distributed systems, a number of technical and practical challenges in this field remain unaddressed. For example, current modelling techniques are not widely accepted as common practice because of their limitations in representing key attributes of real-world propagation dynamics; attributes such as time zones and geographical factors greatly influence modelling accuracy and restraining efficiency.

We therefore organize this special issue to narrow the gap between practice and academic research. It will bring together researchers in the distributed systems field to publish state-of-the-art research findings on transmissible cyber threats, focusing in particular on propagation modelling, threat restraining, and their theoretical and applied techniques.
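As a hedged illustration of the propagation modelling the call refers to, and not of any specific model from the special issue, the Python sketch below simulates a simple discrete-time SIR-style spread over a random contact graph; "removal" stands in for patching a host or debunking a rumour, and every parameter value is an arbitrary assumption.

# Minimal sketch: SIR-style propagation of a transmissible threat on a random
# contact graph, with probabilistic infection along edges and random removal.
import random

random.seed(42)
N, P_EDGE, P_INFECT, P_RECOVER = 200, 0.03, 0.2, 0.1  # assumed parameters

# Random undirected contact graph as adjacency sets.
neighbors = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < P_EDGE:
            neighbors[i].add(j)
            neighbors[j].add(i)

state = {i: "S" for i in range(N)}  # S = susceptible, I = infected, R = removed
state[0] = "I"                      # one initially compromised node

for step in range(50):
    newly_infected, newly_removed = [], []
    for node, s in state.items():
        if s != "I":
            continue
        for nb in neighbors[node]:
            if state[nb] == "S" and random.random() < P_INFECT:
                newly_infected.append(nb)
        if random.random() < P_RECOVER:
            newly_removed.append(node)  # patched / cleaned / debunked
    for nb in newly_infected:
        state[nb] = "I"
    for node in newly_removed:
        state[node] = "R"
    infected = sum(1 for s in state.values() if s == "I")
    print(f"step {step:2d}: {infected} infected")
    if infected == 0:
        break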
This special issue focuses on, but is certainly not limited to, the following distributed system research:
• Cybersecurity dynamics theory in distributed systems
• Cybersecurity dynamics simulation in distributed systems
• Cybersecurity dynamics modelling technologies in distributed systems
• Malware propagation laws and control strategies
• Rumour propagation laws and control strategies
• Positive/negative information propagation laws in online social networks
• Rumour promoting/control strategies in online social networks
• Influential node identification algorithms in large-scale networks
• Virus source identification algorithms in various distributed systems
• Virus propagation theory based on real data
• Applications of virus propagation theory in various distributed systems
• Risk assessment based on cybersecurity dynamics
• Source identification based on cybersecurity dynamics
• Cybersecurity dynamics in social communities
Last updated by Dou Sun on 2017-08-05
Related Conferences
CCF | CORE | QUALIS | Short | Full Name | Submission | Notification | Conference Date
b | a | a1 | HPDC | International ACM Symposium on High-Performance Parallel and Distributed Computing | 2017-01-10 | 2017-03-29 | 2017-06-26
- | - | b2 | FoIKS | International Symposium on Foundations of Information and Knowledge Systems | 2015-10-25 | 2015-12-04 | 2016-03-07
- | - | - | MIWAI | Multi-Disciplinary International Workshop on Artificial Intelligence | 2016-08-22 | 2016-09-09 | 2016-12-07
- | b | - | FUN | International Conference on Fun with Algorithms | 2012-01-23 | 2012-02-20 | 2012-06-04
b | a | - | HOT CHIPS | Symposium on High Performance Chips | 2017-04-07 | 2017-05-01 | 2017-08-20
c | b | b1 | SEC | International Conference on ICT Systems Security and Privacy Protection | 2017-01-09 | 2017-02-24 | 2017-05-29
b | - | a2 | ICCD | International Conference on Computer Design | 2017-06-16 | 2017-09-01 | 2017-11-05
c | - | a2 | ASP-DAC | Asia and South Pacific Design Automation Conference | 2016-07-08 | 2016-09-12 | 2017-01-16
- | c | b3 | ParCo | International Conference on Parallel Computing | 2015-02-28 | 2015-05-15 | 2015-09-01
b | a | a1 | ICDCS | International Conference on Distributed Computing Systems | 2017-12-05 | 2018-03-28 | 2018-07-02