Journal Information
Information Processing and Management
https://www.sciencedirect.com/journal/information-processing-and-management
Impact Factor:
7.400
Publisher:
Elsevier
ISSN:
0306-4573
Viewed:
10222
Tracked:
5
Call For Papers
This journal is ranked by The Chartered Association of Business Schools' Academic Journal Guide, Australian Business Deans Council, Chinese Academy of Sciences (CAS), China Computer Federation (CCF), BFI (Denmark), Computing Research & Education (CORE) Journal Ranking, The Publication Forum (Finland), Science Citation Index Expanded, Social Sciences Citation Index, Scopus, and SCImago Journal Rank (SJR).

Information Processing and Management publishes cutting-edge original research at the intersection of computing and information science concerning theory, methods, or applications in a range of domains, including but not limited to advertising, business, health, information science, information technology, marketing, and social computing.

The journal aims to serve the interests not only of primary researchers but also of practitioners, furthering knowledge at the intersection of computing and information science by providing an effective forum for the timely dissemination of advanced and topical work. The journal is especially interested in original research articles, research survey articles, research method articles, and articles addressing critical applications of research.

Specifically, the journal is interested in four types of manuscripts, which are:

    Research manuscripts addressing topics at the intersection of computer and information science.
    Methods manuscripts focusing on the application of novel methods at the intersection of computer and information science.
    Review manuscripts assessing, in a critical and in-depth manner, a broad trend at the intersection of computer and information science, integrating prior research and recommending further work in the area.
    Critical application manuscripts concerning system design research at the intersection of computer and information science.
Last updated by Dou Sun on 2024-07-13
Special Issues
Special Issue on Large Language Models and Data Quality for Knowledge Graphs
Submission Date: 2024-09-01

Knowledge Graphs (KGs) have become crucial for virtual assistants, web search, and organizational data comprehension in recent years. Examples include Wikidata, DBpedia, YAGO, and NELL, which large companies utilize for data organization. Building KGs involves AI areas such as data integration, cleaning, named entity recognition, relation extraction, and active learning. However, automated methods often result in sparse and inaccurate KGs. Evaluating KG quality is therefore vital for gaining insights, refining construction processes, and ensuring accurate information for downstream applications. Despite its significance, there is limited research on data quality and evaluation for KGs at scale. Large Language Models (LLMs) present opportunities and challenges for KG construction and evaluation, bridging human and machine capabilities. Integrating LLMs into KG systems can enhance context-awareness but may introduce mis/disinformation, so managing LLM hallucinations is crucial to prevent KG pollution. Combining LLMs with quality evaluation also has potential, as seen in relevance judgments for information retrieval. This special issue advocates human-machine collaboration for KG construction and evaluation, emphasizing the intersection of KGs and LLMs. Submissions are encouraged on LLMs in KG systems, KG quality evaluation, and quality control systems for KG and LLM interactions in research and industry. Topics include KG construction, LLM use in KG generation, deploying LLMs on large-scale KGs, efficient KG quality assessment, human-in-the-loop architectures, domain-specific applications, and industry-scale KG maintenance. The issue aims to advance understanding and application of KGs and LLMs, fostering innovation in this evolving intersection.

Guest editors:
    1. Dr. Gianmaria Silvello (Managing Guest Editor), University of Padua, Department of Information Engineering, Padua, Italy
    2. Dr. Omar Alonso, Amazon, Palo Alto, California, United States of America
    3. Dr. Stefano Marchesin, University of Padua, Department of Information Engineering, Padua, Italy

Special issue information:
In recent years, Knowledge Graphs (KGs), encompassing millions of relational facts, have emerged as central assets to support virtual assistants and search and recommendation on the web. Notable examples are Wikidata, DBpedia, YAGO, and NELL. Moreover, KGs are increasingly used by large companies and organizations to organize and comprehend their data, with industry-scale KGs fusing data from various sources for downstream applications. Building KGs involves data management and artificial intelligence areas such as data integration, cleaning, named entity recognition and disambiguation, relation extraction, and active learning [1, 2]. However, the methods used to build these KGs rely on automated components that are far from perfect, resulting in KGs that are highly sparse and contain several inaccuracies and wrong facts. As a result, evaluating KG quality plays a significant role, as it serves multiple purposes: gaining insights into the quality of the data, triggering the refinement of the KG construction process, and providing valuable information to downstream applications. In this regard, the information in the KG must be correct to ensure an engaging user experience for entity-oriented services like virtual assistants. Despite its importance, there is little research on data quality and evaluation for KGs at scale [3].

In this context, the rise of Large Language Models (LLMs) opens up unprecedented opportunities, and challenges, to advance KG construction and evaluation, providing an intriguing intersection between human and machine capabilities. On the one hand, integrating LLMs within KG construction systems could trigger the development of more context-aware and adaptive AI systems. At the same time, however, LLMs are known to hallucinate and can thus generate mis/disinformation, which can affect the quality of the resulting KG. In this sense, reliability and credibility components are of paramount importance to manage the hallucinations produced by LLMs and avoid polluting the KG. On the other hand, investigating how to combine LLMs and quality evaluation has great potential, as shown by promising results from using LLMs to generate relevance judgments in information retrieval [4, 5]. Thus, this special issue promotes novel research on human-machine collaboration for KG construction and evaluation, fostering the intersection between KGs and LLMs [6, 7]. To this end, we encourage submissions related to using LLMs within KG construction systems, evaluating KG quality, and applying quality control systems to empower KG and LLM interactions in both research- and industry-oriented scenarios.

Possible topics of submissions
Potential topics include but are not limited to the following:
    KG construction systems
    Use of LLMs for KG generation
    Efficient solutions to deploy LLMs on large-scale KGs
    Quality control systems for KG construction
    KG versioning and active learning
    Human-in-the-loop architectures
    Efficient KG quality assessment
    Quality assessment over temporal and dynamic KGs
    Redundancy and completeness issues
    Error detection and correction mechanisms
    Benchmarks and evaluation
    Domain-specific applications and challenges
    Maintenance of industry-scale KGs
    LLM validation via reliable/credible KG data

Important dates
Submissions open: 1 June 2024
Submissions close: 1 September 2024

Keywords: Large Language Models, Knowledge Bases, Knowledge Graphs, Data Quality, Information Extraction, Relation Extraction
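To make the kind of human-machine collaboration described in this call more concrete, the sketch below outlines one possible LLM-assisted triple extraction step followed by a simple quality gate before facts are admitted into a KG. It is an illustrative assumption, not part of the call: the llm callable, the prompt wording, the known_entities check, and the stub model are hypothetical placeholders for whatever a real construction pipeline would use.

# Minimal sketch (assumed, not prescribed by the call): LLM-assisted triple
# extraction plus a simple quality gate before triples enter a knowledge graph.
# The `llm` callable stands in for any real model endpoint.
import json
import re
from typing import Callable, Iterable

Triple = tuple[str, str, str]

PROMPT = (
    "Extract (subject, relation, object) facts from the text below. "
    "Answer with a JSON list of 3-element lists and nothing else.\n\nText: {text}"
)

def extract_triples(text: str, llm: Callable[[str], str]) -> list[Triple]:
    """Ask the LLM for candidate triples and parse its JSON answer defensively."""
    raw = llm(PROMPT.format(text=text))
    match = re.search(r"\[.*\]", raw, re.DOTALL)  # tolerate extra prose around the JSON
    if not match:
        return []
    try:
        data = json.loads(match.group(0))
    except json.JSONDecodeError:
        return []
    return [tuple(item) for item in data
            if isinstance(item, list) and len(item) == 3]

def quality_gate(candidates: Iterable[Triple],
                 known_entities: set[str],
                 existing: set[Triple]) -> list[Triple]:
    """Crude quality controls: drop duplicates (redundancy) and triples whose
    subject is not a known entity (a simple guard against hallucinated facts)."""
    accepted = []
    for s, r, o in candidates:
        if (s, r, o) in existing:
            continue          # redundancy check against the current KG
        if s not in known_entities:
            continue          # unverifiable subject: a real system might flag it for review
        accepted.append((s, r, o))
    return accepted

if __name__ == "__main__":
    # Stub LLM so the sketch runs end-to-end without a real model.
    def fake_llm(prompt: str) -> str:
        return '[["Padua", "located_in", "Italy"], ["Atlantis", "located_in", "Italy"]]'

    candidates = extract_triples("Padua is a city in Italy.", fake_llm)
    kept = quality_gate(candidates, known_entities={"Padua"}, existing=set())
    print(kept)  # [('Padua', 'located_in', 'Italy')]

In a real pipeline, rejected triples would typically be routed to human annotators (a human-in-the-loop architecture) rather than silently dropped, which is one of the quality-control directions the call highlights.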
Last updated by Dou Sun on 2024-07-13
Special Issue on Understanding Human Behaviors Through Large Language Models
Submission Date: 2024-09-30

This special issue embraces studies on human behavior and opinion simulation using LLMs across multidisciplinary fields to enhance the understanding of humans. We aim not only to spotlight the innovative uses of LLMs in understanding human behavior but also to critically assess their role as a tool in the broader research environment.

Guest editors:
    Dr. Jang Hyun Kim, Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Korea
    Dr. Xiao-Liang Shen, School of Information Management, Wuhan University, Wuhan, People's Republic of China
    Dr. Hyejin Youn, Kellogg School of Management, Northwestern University, Evanston, Illinois, United States

Special issue information:
Research exploring and understanding human behavior and opinions has traditionally been conducted through methodologies such as experiments, surveys, and opinion polls. However, studies involving human participants have recently encountered limitations due to difficulties in recruiting, high costs, and challenges in sample representativeness. In response to these burgeoning issues, a novel research paradigm is emerging, pivoting towards utilizing Large Language Models (LLMs) to simulate human behavior and decision-making processes. LLMs have demonstrated an ability to reflect social norms, background knowledge, and even the biases and stereotypes that permeate human societies. Yet there is a scarcity of research on how suitable LLMs are as subjects for mimicking human behavior and opinions, in terms of their generalizability and applicability for deployment in actual research. Considering the rapidly evolving capabilities and inherently opaque mechanisms of LLMs (often called the "black box" problem), there is a need for academic discourse on this subject. Therefore, this special issue embraces studies on human behavior and opinion simulation using LLMs across multidisciplinary fields to enhance the understanding of humans.

Possible subjects of submissions could include, but are not limited to:
- Human sub-population simulation using LLMs
- Measuring human value sets and behaviors using LLMs
- Exploring LLMs' personal traits
- Evaluating the capabilities of LLMs for understanding human society
- Investigating/mitigating inherent social biases in LLMs
- Agent-based modeling using LLMs
- Replicating traditional experiments using LLMs
- Explainable AI to explain human behaviors and value sets

Important dates
Submissions open: 23 May 2024
Submissions close: 30 September 2024

Keywords: Large language model (LLM), human behavior, simulation, natural language processing (NLP)
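As a hedged illustration of the persona-conditioned simulation studies this call invites, the sketch below prompts an LLM once per synthetic respondent and aggregates the answers into a response distribution. The llm callable, the persona fields, and the stub model are hypothetical placeholders, not methods prescribed by the special issue.

# Minimal sketch (assumed, not prescribed by the call): persona-conditioned
# survey simulation with an LLM and aggregation of the simulated answers.
import random
from collections import Counter
from typing import Callable

LIKERT = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

PERSONA_TEMPLATE = (
    "You are a {age}-year-old respondent living in {country}. "
    "Answer the survey item with exactly one of: {options}.\n"
    "Item: {item}"
)

def simulate_respondent(persona: dict, item: str, llm: Callable[[str], str]) -> str:
    """Prompt the LLM as a single persona and normalize its answer to the scale."""
    prompt = PERSONA_TEMPLATE.format(options=", ".join(LIKERT), item=item, **persona)
    answer = llm(prompt).strip().lower()
    return answer if answer in LIKERT else "neutral"  # fall back on unparsable output

def run_survey(personas: list[dict], item: str, llm: Callable[[str], str]) -> Counter:
    """Aggregate simulated answers into a response distribution."""
    return Counter(simulate_respondent(p, item, llm) for p in personas)

if __name__ == "__main__":
    # Stub LLM so the sketch runs without a real model.
    def fake_llm(prompt: str) -> str:
        return random.choice(LIKERT)

    personas = [{"age": a, "country": c}
                for a in (25, 45, 65) for c in ("Korea", "China", "United States")]
    print(run_survey(personas, "Remote work improves quality of life.", fake_llm))

Validating such a simulation would mean comparing the resulting distribution against responses from real participants, which is exactly the generalizability question the call raises.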
Last updated by Dou Sun on 2024-07-13
Related Journals
CCF | Full Name | Impact Factor | Publisher | ISSN
b | Information Processing & Management | 7.466 | Elsevier | 0306-4573
  | ACM Transactions on Privacy and Security | 1.974 | ACM | 2471-2566
  | International Journal of Information Management | 20.10 | Elsevier | 0268-4012
c | Information & Management | 8.200 | Elsevier | 0378-7206
  | Information Systems Management | 2.042 | Taylor & Francis | 1058-0530
c | Information Processing Letters | 0.700 | Elsevier | 0020-0190
  | Information Technology and Management | 1.533 | Springer | 1385-951X
  | Modeling, Identification and Control |  | The Research Council of Norway | 0332-7353
  | Journal of Global Information Management | 1.222 | IGI Global | 1062-7375
  | Graphs and Combinatorics | 0.498 | Springer | 0911-0119
Related Conferences
CCF | CORE | QUALIS | Short | Full Name | Submission | Notification | Conference
b | a | a1 | CIKM | ACM International Conference on Information and Knowledge Management | 2024-05-13 | 2024-07-16 | 2024-10-21
  |   |    | ICMCCE | International Conference on Mechanical, Control and Computer Engineering | 2020-12-22 |  | 2020-12-25
  |   |    | ICEITSA | International Conference on Electronic Information Technology and Smart Agriculture | 2021-10-31 | 2021-11-15 | 2021-12-10
c | a | b1 | IPCO | International Conference on Integer Programming and Combinatorial Optimization | 2023-11-06 | 2024-01-26 | 2024-07-03
  |   |    | TSA | International Conference on Trustworthy Systems and Their Applications | 2016-08-15 | 2016-08-29 | 2016-09-19
c | a | b1 | Networking | International Conferences on Networking | 2024-01-30 | 2024-03-31 | 2024-06-03
  | a | b1 | ICLP | International Conference on Logic Programming | 2022-01-14 | 2022-03-14 | 2022-07-31
  |   |    | AE' | International Conference on Advances in Engineering | 2021-12-05 | 2021-12-10 | 2021-12-18
  |   |    | ALLSENSORS | International Conference on Advances in Sensors, Actuators, Metering and Sensing | 2023-02-01 | 2023-02-28 | 2023-04-24
  |   |    | ACIT' | International Conference on Applied Computing & Information Technology | 2019-02-04 | 2019-02-18 | 2019-05-29