Journal Information
Information Fusion
Impact Factor:

Call For Papers
The journal is intended to present within a single forum all of the developments in the field of multi-sensor, multi-source information fusion and thereby promote the synergism among the many disciplines that are contributing to its growth. The journal is the premier vehicle for disseminating information on all aspects of research and development in the field of information fusion. Articles are expected to emphasize one or more of the three facets: architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome. The journal publishes original papers, letters to the Editors and, from time to time, invited review articles, in all areas related to the information fusion arena including, but not limited to, the following suggested topics:

• Data/Image, Feature, Decision, and Multilevel Fusion
• Multi-Classifier/Decision Systems
• Multi-Look Temporal Fusion
• Multi-Sensor, Multi-Source Fusion System Architectures
• Distributed and Wireless Sensor Networks
• Higher-Level Fusion Topics Including Situation Awareness and Management
• Multi-Sensor Management and Real-Time Applications
• Adaptive and Self-Improving Fusion System Architectures
• Active, Passive, and Mixed Sensor Suites
• Multi-Sensor and Distributed Sensor System Design
• Fusion Learning in Imperfect, Imprecise, and Incomplete Environments
• Intelligent Techniques for Fusion Processing
• Fusion System Design and Algorithmic Issues
• Optimization of Fusion System Computational Resources and Demands
• Special-Purpose Hardware Dedicated to Fusion Applications
• Mining Remotely Sensed Multi-Spectral/Hyper-Spectral Image Databases
• Information Fusion Applications in Intrusion Detection, Network Security, and Information Security and Assurance
• Applications such as Robotics, Space, Biomedical, Transportation, Economics, and Financial Information Systems
• Real-World Issues such as Computational Demands and Real-Time Constraints in the Context of Fusion Systems
Last updated by Dou Sun on 2018-10-24
Special Issues
Special Issue on Information Fusion for Affective Computing and Sentiment Analysis
Submission Date: 2019-10-31

Emotions are intrinsically part of our mental activity and play a key role in cognitive communication and decision-making processes. Emotion is a chain of events made up of feedback loops: feelings and behavior can affect cognition, just as cognition can influence feeling; emotion, cognition, and action interact in feedback loops, and emotion can be viewed in a structural model tied to adaptation. Besides being important for the advancement of AI, detecting and interpreting emotional information is key in multiple areas of computer science, e.g., human-agent, human-computer, and human-robot interaction, but also smart cities, e-learning, e-health, domotics (home automation), automotive and cyber security, user profiling and personalization, etc. In recent years, emotion and sentiment analysis has also become increasingly popular for processing social media data on social networks, online communities, blogs, wikis, microblogging platforms, and other online collaborative media. The distillation of knowledge from such a large amount of unstructured information, however, is an extremely difficult task, as the contents of today's Web are perfectly suitable for human consumption but remain hardly accessible to machines. The opportunity to capture the opinions of the general public about social events, political movements, company strategies, marketing campaigns, and product preferences has raised growing interest both within the scientific community, leading to many exciting open research challenges, and in the business world, due to the remarkable benefits to be had from marketing and financial market prediction. Most existing approaches to affective computing and sentiment analysis are still based on the syntactic representation of text, a method that relies mainly on word co-occurrence frequencies. Such algorithms are limited by the fact that they can only process information they can 'see'.
As human text processors, we do not have such limitations, as every word we see activates a cascade of semantically related concepts, relevant episodes, emotions, and sensory experiences, all of which enable the completion of complex NLP tasks – such as word-sense disambiguation, textual entailment, and semantic role labeling – in a quick and effortless way. Information fusion can help mimic the way humans process and analyze text and, hence, overcome the limitations of standard approaches to affective computing and sentiment analysis. This special issue aims to provide a forum for academic and industrial communities to report recent advances in theoretical, experimental, and integrative studies related to Information Fusion in Affective Computing and Sentiment Analysis, from the perspectives of algorithms, architectures, and applications. Articles are invited to address information fusion challenges in affective computing and sentiment analysis across a range of interdisciplinary areas, such as machine learning, active learning, transfer learning, deep neural networks, neural and cognitive models, fuzzy logic, evolutionary computation, natural language processing, commonsense reasoning, and big data computing. Manuscripts should cover unpublished research that clearly delineates the role of information fusion in the context of affective computing and sentiment analysis. Submitted manuscripts will be judged solely on the basis of new contributions, excluding contributions made in earlier publications. Contributions should be described in sufficient detail to be reproducible on the basis of the material presented in the paper and the references cited therein.
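The limitation described above can be illustrated with a minimal sketch: a bag-of-words sentiment scorer can only react to words present in its lexicon, so implied sentiment is invisible to it. The lexicon, scores, and example sentences below are illustrative assumptions, not taken from any published system.

```python
# Minimal sketch of a word-based sentiment scorer and its blind spot.
# Lexicon and scores are hypothetical.
SENTIMENT_LEXICON = {"good": 1.0, "great": 1.0, "bad": -1.0, "terrible": -1.0}

def bag_of_words_score(text: str) -> float:
    """Sum the lexicon scores of the words the model can literally 'see'."""
    return sum(SENTIMENT_LEXICON.get(w, 0.0) for w in text.lower().split())

# Explicit sentiment words are handled:
print(bag_of_words_score("the plot was great"))  # 1.0
# ...but sentiment implied without lexicon words scores as neutral:
print(bag_of_words_score("i checked my watch twice during the film"))  # 0.0
```

This is exactly the gap that fusing semantic, commonsense, and multimodal information is meant to close.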
Topics relevant to the Special Issue include, but are not necessarily limited to:
• Concept-level sentiment analysis
• Affective commonsense reasoning
• Social network modeling and analysis
• Social media representation and retrieval
• Multi-lingual emotion and sentiment analysis
• Aspect extraction for opinion mining
• Linguistic patterns for sentiment analysis
• Statistical learning theory for big social data analysis
• Sarcasm detection
• Microtext normalization
• Sentic computing
• Large commonsense graphs
• Conceptual primitives for sentiment analysis
• Multimodal emotion recognition and sentiment analysis
• Human-agent, -computer, and -robot interaction
• User profiling and personalization
• Aided affective knowledge acquisition
• Time-evolving sentiment tracking
Last updated by Dou Sun on 2019-07-21
Special Issue on Knowledge Graph for Information Fusion
Submission Date: 2020-01-05

A knowledge graph (KG) is a graph-based representation of real-world entities along with their semantic attributes and their relationships. Over the past few years, many state-of-the-art knowledge graphs have emerged, among them Cyc and OpenCyc, Freebase, DBpedia, Wikidata, YAGO, and NELL. However, standalone knowledge graphs are of little use unless they are integrated into smart systems. In several well-known industrial services (e.g., Google's Knowledge Graph, Microsoft's Satori, and Facebook's Graph Search), the knowledge graph has become a backbone that helps these organizations, as well as their users, fully discover social knowledge. In particular, these systems are able to provide hyper-precise information in various applications (e.g., semantic search engines, complex question answering, and user behavior comprehension). Given the importance of smart systems built on knowledge graphs, a growing body of innovative research has addressed different kinds of industrial domains. Our main goal is to solicit high-quality research covering both theoretical work and practical applications of knowledge graphs. In particular, this special issue aims at gathering advanced research to support constructing state-of-the-art smart systems with knowledge graphs, including two main topics of interest: (1) cutting-edge techniques for constructing, managing, and analyzing knowledge graphs while ensuring their coverage, correctness, and freshness, and (2) useful applications of knowledge graphs that provide society with prominent services. Potential topics include, but are not limited to:
Construction, management, and analysis of knowledge graphs:
• Automatic and semi-automatic knowledge graph construction.
• Knowledge graph identification (completion, reasoning, or refinement).
• Knowledge graph expansion and enrichment.
• Knowledge graph embedding.
• Knowledge graph understanding and profiling.
• Knowledge graph fusion.
• Real-time updating of knowledge graphs.
• Storing and querying knowledge graphs.
• Visualizing knowledge graphs.
• Deep learning for knowledge graphs.
Applications of knowledge graphs:
• Cross-lingual semantic search and ranking.
• Autonomous question answering.
• Social event detection and disambiguation.
• Explainable recommendation systems.
• Multi-lingual sentiment analysis.
• Large-scale text retrieval, analysis, and understanding.
• Entity resolution and link prediction.
• Automated knowledge representation, inference, and reasoning.
• Spatio-temporal pattern discovery.
• Real-time edge analytics.
• Knowledge-based trust, fraud detection, and cybersecurity.
• Knowledge graph as a service.
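To make the entity-attribute-relationship representation concrete, here is a toy knowledge graph stored as (head, relation, tail) triples, with a two-hop traversal that sketches how a KG-backed system can answer a simple question. The entities, relation names, and query are illustrative assumptions, not part of any of the systems named above.

```python
# Hypothetical toy knowledge graph as a set of (head, relation, tail) triples.
TRIPLES = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
}

def tails(head: str, relation: str) -> set:
    """Return all entities reachable from `head` via `relation`."""
    return {t for h, r, t in TRIPLES if h == head and r == relation}

def capital_is_in(city: str) -> set:
    """Two-hop traversal: city -> country -> region, sketching simple KG QA."""
    return {region
            for country in tails(city, "capital_of")
            for region in tails(country, "located_in")}

print(capital_is_in("Paris"))  # {'Europe'}
```

Real systems replace the set scan with indexed graph storage and add embedding-based scoring for completion and link prediction, but the triple abstraction is the same.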
Last updated by Dou Sun on 2019-07-27
Special Issue on Advances in Multimodality Data Fusion in Neuroimaging
Submission Date: 2020-05-01

Neuroimaging scans, also called brain imaging scans, are increasingly used to help detect and diagnose medical disorders and illnesses. Currently, the main use of neuroimaging for mental disorders is in research studies to learn more about the disorders. Brain scans are now commonly used to diagnose neurological and psychiatric diseases such as meningioma, multiple sclerosis, glioma, Huntington's disease, herpes encephalitis, Pick's disease, schizophrenia, Alzheimer's disease, cerebral toxoplasmosis, sarcoma, subdural hematoma, etc. To increase diagnostic accuracy for neurological and psychiatric diseases, multimodal fusion of neuroimaging data is needed, as it brings together data from multiple modalities into a common reference frame. In the modern sense, multimodal data fusion is frequently taken to mean the integration of multimodal 1D/2D/3D/4D data in a common reference anatomical space through co-registration using various image processing methods. The data modalities come from a wide variety of clinical settings, including electrocardiography (ECG), electroencephalography (EEG), magnetic resonance imaging (MRI), magnetic resonance spectroscopy (MRS), electrocorticography (ECoG), functional MRI (fMRI), positron emission tomography (PET), diffusion tensor imaging (DTI), single photon emission computed tomography (SPECT), and magnetic particle imaging (MPI). The main aim of advances in multimodality data fusion in neuroimaging is to exploit the complementary properties of several single-modality scanning protocols, improving on each of them considered separately and thereby increasing diagnostic accuracy for neurological and psychiatric diseases.
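The "common reference frame" idea above can be sketched in one dimension: before fusing, the coarser modality is resampled onto the finer modality's grid so both live in the same frame. The sampling rates, toy signals, and weighted-average fusion rule below are illustrative assumptions; real co-registration uses spatial transforms between 3D/4D anatomical spaces, not simple time interpolation.

```python
import numpy as np

# Two modalities sampled on different grids over the same 1 s window (assumed rates).
t_eeg = np.linspace(0.0, 1.0, 256)   # fine grid, e.g. EEG-like sampling
t_slow = np.linspace(0.0, 1.0, 2)    # coarse grid, e.g. fMRI-volume-like sampling

eeg = np.sin(2 * np.pi * 10 * t_eeg)  # toy fast signal
slow = np.array([0.2, 0.8])           # toy slow signal

# Step 1: bring the coarse modality onto the common (fine) grid by interpolation.
slow_on_grid = np.interp(t_eeg, t_slow, slow)

# Step 2: fuse in the shared frame (here, a simple weighted average).
fused = 0.5 * eeg + 0.5 * slow_on_grid
print(fused.shape)  # (256,)
```

The design point is that fusion operates only after both signals share one reference grid; the choice of fusion rule (averaging, feature concatenation, learned fusion) is independent of that alignment step.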
This special issue aims to provide a forum for academic and industrial communities to report recent theoretical and application results related to advances in multimodality data fusion in neuroimaging, from the perspectives of theories, algorithms, architectures, and applications. Manuscripts (which should be original and not previously published, either in full or in part, or presented even in a more or less similar form at any other forum) covering unpublished research that reports advances in multimodality data fusion in neuroimaging are invited. Manuscripts will be judged solely on the basis of new contributions, excluding the contributions made in earlier publications. Contributions should be described in sufficient detail to be reproducible on the basis of the material presented in the paper and the references cited therein. Topics appropriate for this special issue include (but are not necessarily limited to):
• New techniques, models, algorithms, and clinical experiences for multimodality data fusion systems
• Deep learning models for multimodality neuroimaging data processing
• Feature fusion for multimodality intelligent systems
• Shared multimodality representation learning
• Improved algorithms for multimodality neuroimaging data fusion systems
• Analysis of big multimodality data fusion
• Hierarchical intelligent systems for multimodality data fusion
• Multimodality data fusion applications for neuroimaging
• Multimodality data fusion applications in audio areas
• Multimodality data fusion applications in computer vision related areas
• Signal processing on graphs for fusion methods
• Computational issues in fusion methods for real-time bio-signal analysis
• Heterogeneous sensor fusion in the big neuroimaging data context
• Tensor methods and constraint techniques for multimodality data fusion
Last updated by Dou Sun on 2019-09-28
Related Journals
CCF | Full Name | Impact Factor | Publisher | ISSN
a | Information and Computation | 1.077 | ELSEVIER | 0890-5401
c | IET Information Security | 0.862 | IET | 1751-8709
a | IEEE Transactions on Information Theory | 2.728 | IEEE | 0018-9448
b | Information Systems | 2.551 | ELSEVIER | 0306-4379
a | ACM Transactions on Information Systems | - | ACM | 1046-8188
b | Information Sciences | 4.305 | ELSEVIER | 0020-0255
- | ACM Computing Surveys | - | ACM | 0360-0300
c | Information Retrieval | 0.896 | Springer | 1386-4564
c | Information and Management | 3.89 | ELSEVIER | 0378-7206
- | Information Systems and e-Business Management | 0.791 | Springer | 1617-9846
Related Conferences
CCF | CORE | QUALIS | Short | Full Name | Submission | Notification | Conference Date
- | - | - | Petri Nets | International Conference on Application and Theory of Petri Nets and Concurrency | 2019-01-16 | 2019-03-08 | 2019-06-23
a | a* | a1 | Security | USENIX Security Symposium | 2020-02-15 | 2020-03-15 | 2020-08-12
b | b | a2 | SCA | ACM SIGGRAPH/Eurographics Symposium on Computer Animation | 2015-04-15 | 2015-05-31 | 2015-08-07
c | - | - | IFIPTM | IFIP WG 11.11 International Conference on Trust Management | 2019-04-09 | 2019-05-12 | 2019-07-17
- | - | - | INFONOR | International Conference on Computing and Informatics in Northern Chile | 2019-06-03 | 2019-06-30 | 2019-08-21
b | a | a1 | ICALP | International Colloquium on Automata, Languages and Programming | 2020-02-12 | 2020-04-15 | 2020-07-08
b | - | - | MoMM | International Conference on Advances in Mobile Computing & Multimedia | 2019-08-15 | 2019-09-25 | 2019-12-02
- | - | - | CMSME | International Joint Conference on Materials Science and Mechanical Engineering | 2019-10-25 | 2019-10-31 | 2020-01-10