Journal Information
Knowledge-Based Systems (KBS)
Impact Factor:
Call For Papers
Knowledge-Based Systems focuses on systems that use knowledge-based techniques to support human decision-making, learning and action. Such systems are capable of cooperating with human users and so the quality of support given and the manner of its presentation are important issues. The emphasis of the journal is on the practical significance of such systems in modern computer development and usage.

As well as being concerned with the implementation of knowledge-based systems, the journal covers the design process, the matching of requirements and needs to delivered systems, and the organisational implications of introducing such technology into the workplace and public life. Topics include: expert systems; application of knowledge-based methods; integration with conventional technologies; software tools for KBS construction; decision-support mechanisms; user interactions; organisational issues; knowledge acquisition; knowledge representation; languages and programming environments; knowledge-based implementation techniques; and system architectures. Publication reviews are also included.
Last updated by Dou Sun on 2021-03-07
Special Issues
Special Issue on Robust, Explainable, and Privacy-Preserving Deep Learning
Submission Date: 2021-08-31

This Special Issue aims to: 1) improve the understanding and explainability of deep neural networks; 2) improve the accuracy of deep learning by leveraging new stochastic optimization and neural architecture search; 3) enhance the mathematical foundations of deep neural networks; 4) design new data privacy mechanisms that optimally trade off utility and privacy; and 5) increase the computational efficiency and stability of the deep learning training process with new algorithms that scale.

Potential topics include, but are not limited to, the following:
- Novel theoretical insights on deep neural networks
- Exploration of post-hoc interpretation methods that shed light on how deep learning models produce a specific prediction and generate a representation
- Investigation of interpretable models that aim to construct self-explanatory models and incorporate interpretability directly into the structure of a deep learning model
- Quantifying or visualizing the interpretability of deep neural networks
- Stability improvement of deep neural network optimization
- Optimization methods for deep learning
- Privacy-preserving machine learning (e.g., federated machine learning, learning over encrypted data)
- Novel deep learning approaches in applications such as image/signal processing, business intelligence, games, healthcare, bioinformatics, and security

Important Dates
- Submission Deadline: August 31, 2021
- First Review Decision: September 30, 2021
- Revisions Due: October 31, 2021
- Final Decision: November 30, 2021
- Final Manuscript: December 31, 2021
Last updated by Dou Sun on 2021-02-28
Related Conferences