Journal Information
Information Sciences
http://www.journals.elsevier.com/information-sciences/
Impact Factor:
5.524
Publisher:
Elsevier
ISSN:
0020-0255

Call For Papers
Information Sciences will publish original, innovative and creative research results. A smaller number of timely tutorial and surveying contributions will be published from time to time.

The journal is designed to serve researchers, developers, managers, strategic planners, graduate students and others interested in state-of-the-art research activities in information, knowledge engineering and intelligent systems. Readers are assumed to have a common interest in information science, but with diverse backgrounds in fields such as engineering, mathematics, statistics, physics, computer science, cell biology, molecular biology, management science, cognitive science, neurobiology, behavioural sciences and biochemistry.

The journal publishes high-quality, refereed articles. It emphasizes a balanced coverage of both theory and practice. It fully acknowledges and vividly promotes the breadth of the discipline of Information Sciences.

Topics include:

Foundations of Information Science:
Information Theory, Mathematical Linguistics, Automata Theory, Cognitive Science, Theories of Qualitative Behaviour, Artificial Intelligence, Computational Intelligence, Soft Computing, Semiotics, Computational Biology and Bio-informatics.

Implementations and Information Technology:
Intelligent Systems, Genetic Algorithms and Modelling, Fuzzy Logic and Approximate Reasoning, Artificial Neural Networks, Expert and Decision Support Systems, Learning and Evolutionary Computing, Biometrics, Moleculoid Nanocomputing, Self-adaptation and Self-organisational Systems, Data Engineering, Data Fusion, Information and Knowledge, Adaptive and Supervisory Control, Discrete Event Systems, Symbolic/Numeric and Statistical Techniques, Perceptions and Pattern Recognition, Design of Algorithms, Software Design, Computer Systems and Architecture Evaluations and Tools, Human-Computer Interface, Computer Communication Networks and Modelling and Computing with Words.

Applications:
Manufacturing, Automation and Mobile Robots, Virtual Reality, Image Processing and Computer Vision Systems, Photonics Networks, Genomics and Bioinformatics, Brain Mapping, Language and Search Engine Design, User-friendly Man-Machine Interface, Data Compression and Text Abstraction and Summarization, Finance and Economics Modelling and Optimisation.
Last updated by Dou Sun on 2019-11-24
Special Issues
Special Issue on Robust Recognition Systems against Adversarial Attacks
Submission Date: 2020-07-15

Recognition systems are inevitably affected by noisy or polluted data caused by accidental outliers, transmission loss, or even adversarial attacks. Unlike random noise with a low corruption ratio, adversarial attacks can be arbitrary and unbounded, and need not follow any specific distribution. Most existing recognition systems are highly vulnerable to adversarial examples, i.e., samples of input data modified very slightly to fool classifiers or other models (a minimal sketch of how such an example can be crafted follows the topic list below). In many cases these modifications are so subtle that a human observer does not notice them at all, yet the system still makes a mistake, even when the adversary has no access to the underlying system. A single incorrect inference can be costly for recognition systems related to privacy or security, such as biometric recognition and autonomous vehicles. There is therefore a need to analyze the adversarial phenomenon in computer vision and to enhance the robustness of recognition systems. Although the area is booming, many challenges remain in building robust recognition systems: the causes of adversarial vulnerability need further investigation, more reasonable criteria for evaluating the robustness of deep neural networks are needed, and the transferability of adversarial examples has not yet been well explained or exploited in existing research. This special issue serves as a forum for researchers worldwide to discuss their efforts and recent advances in robust recognition systems and their applications. Both state-of-the-art work and literature reviews are welcome for submission. In particular, to provide readers of the special issue with a state-of-the-art background on the topic, we will invite one survey paper, which will undergo peer review. The special issue seeks to present and highlight the latest developments in practical methods for building robust recognition systems via adversarial defenses. Papers addressing interesting real-world applications are especially encouraged.

Topics of interest include, but are not limited to:
Theoretical analysis of the vulnerability of recognition systems
Existence and transferability of adversarial examples
Adversarial robustness of recognition systems
Adversarial training against adversarial examples
Evaluation approaches for robustness to adversarial examples
Defense against transfer-based attacks on computer vision and recognition
Robust system verification against adversarial attacks
Robust optimization methods for training models
Data denoising methods for deep neural networks
Interpretability of adversarial attacks and defenses
Multimodal recognition systems as a defense
Real-world applications of robust recognition systems and adversarial defenses, e.g., design of robust adversarial detectors, adversarial patches in computer vision applications, and physical defenses
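
For readers unfamiliar with the adversarial examples referred to above, the following is a minimal illustrative sketch of the fast gradient sign method (FGSM), one common way such inputs are crafted. It is not part of the call for papers: the model, image, and label objects are hypothetical placeholders for a trained classifier and a correctly labelled input.

import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=8 / 255):
    # Craft an adversarial input by taking one signed-gradient step that
    # increases the classification loss (fast gradient sign method).
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in a valid range; for small epsilon the perturbation
    # is imperceptible to a human yet can change the model's prediction.
    return adversarial.clamp(0, 1).detach()

For small epsilon the perturbed image looks identical to the original, which is precisely the vulnerability the special issue asks defenses (e.g., adversarial training, denoising, robust optimization) to address.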
Last updated by Dou Sun on 2020-04-10