Journal Information
Information and Software Technology
http://www.journals.elsevier.com/information-and-software-technology/
Impact Factor:
1.569
Publisher:
ELSEVIER
ISSN:
0950-5849
Viewed:
5096
Tracked:
4

Call For Papers
Information and Software Technology is the international archival journal focusing on research and experience that contributes to the improvement of software development practices. The journal's scope includes methods and techniques to better engineer software and manage its development. Articles submitted for review should have a clear component of software engineering or address ways to improve the engineering and management of software development. Areas covered by the journal include:

• Software management, quality and metrics
• Software processes
• Software architecture, modelling, specification, design and programming
• Functional and non-functional software requirements
• Software testing and verification & validation
• Empirical studies of all aspects of engineering and managing software development
Last updated by Dou Sun on 2017-01-03
Special Issues
Special Issue on Conducting Empirical Studies in Industry
Submission Date: 2017-04-27

Empirical studies lie at the heart of the Software Engineering (SE) discipline. When innovative solutions are created, they need to be validated for use in production environments; otherwise there are numerous risks to both the solution providers and users. Such validation follows empirical procedures in which specific quality-related objectives are set, the solution is deployed, appropriate data are gathered and analysed, results are interpreted in real-world contexts, and conclusions are drawn. Likewise, knowledge-seeking studies (e.g., case studies and experiments) are conducted using empirical procedures, where the eventual findings and conclusions are used to make decisions and to create new products, processes, services, tools and technologies. Flaws in empirical procedures can thus induce risks in the quality of the solutions created or in the findings and conclusions of studies. With ever-increasing societal dependence on technologies, systems and services, it is becoming imperative that empiricism be an integral part of knowledge discovery and solution creation if we are to serve society well. However, empirical studies conducted in industrial settings are particularly challenging, because the actual environments are complex and what is first observable by researchers is often only the tip of the iceberg. It can also take a considerable amount of time to obtain actual project data, due to security and privacy concerns, and access to project staff can be difficult. Yet expectations from those commissioning the studies are high in terms of producing relevant results in a short space of time. From the researchers' point of view, there is thus often a tension between rigour in empirical procedures (which takes time and resources) and the need to be agile (which challenges circumspection and the need for detail). This special issue seeks original papers addressing the theme of conducting empirical studies in industry.
All matters within the boundary of this theme are relevant for this issue, and submissions on them are encouraged. Topics of interest include, but are not limited to:

• Stakeholder involvement in empirical studies
• The impact of industrial settings on the design and conduct of empirical studies
• Interpreting results in industrial contexts
• Dealing with threats in designing and conducting empirical studies in industry
• The difficulty of breaking the ice with industry to engage in conducting studies
• Choosing topics that align with the priorities of industry or of a specific organisation
• Replication of empirical studies in different industrial settings
• Empirical studies on subjects such as embedded software, cloud computing, Big Data and analytics
Last updated by Dou Sun on 2017-01-14
Special Issue on Enhancing Credibility of Empirical Software Engineering
Submission Date: 2017-05-20

AIM AND SCOPE
Researchers continuously struggle to provide sufficient evidence regarding the credibility of their findings. At the same time, practitioners have difficulty trusting results of limited credibility. Probably the most striking summary of the research crisis across multiple disciplines is given by Ioannidis, who in his seminal paper [3] (with more than 4000 citations) claims that “Most Research Findings Are False for Most Research Designs and for Most Fields”. According to Gartner, the size of the worldwide software industry in 2013 was US$407.3 billion [1]. Hence, invalid recommendations or missed research findings in software engineering can cost a great deal of money. Problems with the credibility of research findings are present in software engineering as well. For example, Shepperd et al. [7] meta-analysed 600 experimental results drawn from primary studies that compared methods for predicting fault-proneness. They found that the explanatory factor accounting for the largest percentage of the differences among studies (i.e., 30%) was the research group. In contrast, the prediction method, which was the main topic of research, accounted for only 1.3% of the variation among studies. Hence, they commented that there seems little point in conducting further primary studies until the problem that “it matters more who does the work than what is done” can be satisfactorily addressed. This special issue focuses on two complementary and important areas of software engineering research: 1) reproducible research, and 2) modern statistical methods. Reproducible research refers to the idea that the ultimate product of research is the paper plus its computational environment. That is, a reproducible research document incorporates the textual body of the paper plus the data used by the study and the analysis steps (algorithms) used to process the data.
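The idea of a paper travelling together with its data and analysis steps can be illustrated with a minimal, self-contained analysis script. This is only a sketch: the dataset and the effect-size computation below are hypothetical and not taken from any study cited above.

```python
# Minimal sketch of a reproducible analysis: the raw data and the
# analysis steps live in one artefact, so anyone can rerun the study
# end-to-end. All values below are illustrative, not real study data.
from statistics import mean, stdev

# Hypothetical raw data embedded alongside the analysis
# (e.g., defect counts per release for two processes):
group_a = [12, 15, 11, 14, 13, 16]
group_b = [9, 10, 8, 11, 10, 9]

def cohens_d(a, b):
    """Standardised mean difference (pooled-SD Cohen's d)."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

if __name__ == "__main__":
    print(f"mean A = {mean(group_a):.2f}, mean B = {mean(group_b):.2f}")
    print(f"Cohen's d = {cohens_d(group_a, group_b):.2f}")
```

Because the data are embedded rather than referenced, rerunning the script reproduces every reported number exactly, which is the core of the reproducible-research workflow described above.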
The reason for including the whole computational environment is that other researchers can then repeat the studies and reproduce the results, which in turn delivers more credible (trustworthy) findings. Reproducibility is a crucial aspect of credible research. Unfortunately, it is often impossible to reproduce data analyses, due to a lack of raw data, insufficient summary statistics, or undefined analysis procedures. Thus, wider adoption of reproducible research would be beneficial for Empirical Software Engineering [6]. Furthermore, true research findings may be missed due to inadequate statistical methods that do not reflect the state of the art in statistics, even though modern statistical methods, including robust [4], Bayesian [2] and meta-analysis [5] methods, are available. Statistical techniques widely used in Empirical Software Engineering studies are based, to a large extent, on two fundamental assumptions: normality and homogeneity of variances. These techniques are often considered robust when either of these assumptions is violated. Unfortunately, recent research findings provide evidence that widely used classic methods can be highly unsatisfactory for comparing groups and studying associations [8]. A fundamental problem is that violating the basic assumptions underlying statistical methods can result in relatively low power or in missing important features of the data that have practical significance. Low power not only increases the probability of false negatives, meaning that potentially valuable discoveries may be lost, but also leads to inflated effect sizes for true positives, which can result in under-powered replications and failure to confirm true results.
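One widely used robust alternative to classic mean comparisons is a percentile bootstrap confidence interval for the difference in trimmed means, which tolerates outliers and non-normality. The sketch below uses only the standard library; the data, function names, and parameter choices are illustrative assumptions, not a prescribed method.

```python
# Sketch of a robust group comparison: a percentile bootstrap CI for
# the difference in 20% trimmed means. Trimming limits the influence
# of outliers; the sample data below are hypothetical.
import random

def trimmed_mean(xs, prop=0.2):
    """Mean after discarding the lowest and highest `prop` fraction."""
    xs = sorted(xs)
    k = int(len(xs) * prop)
    trimmed = xs[k:len(xs) - k]
    return sum(trimmed) / len(trimmed)

def bootstrap_diff_ci(a, b, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for trimmed_mean(a) - trimmed_mean(b)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        ra = [rng.choice(a) for _ in a]   # resample with replacement
        rb = [rng.choice(b) for _ in b]
        diffs.append(trimmed_mean(ra) - trimmed_mean(rb))
    diffs.sort()
    lo = diffs[int(n_boot * alpha / 2)]
    hi = diffs[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

if __name__ == "__main__":
    a = [5, 6, 5, 7, 6, 5, 6, 40]   # one extreme outlier
    b = [3, 4, 3, 4, 3, 4, 3, 4]
    lo, hi = bootstrap_diff_ci(a, b)
    print(f"95% CI for trimmed-mean difference: [{lo:.2f}, {hi:.2f}]")
```

Here the outlier in the first group would inflate the variance estimate of a classic t-test, whereas the trimmed-mean bootstrap still yields a CI that excludes zero.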
In addition, Null Hypothesis Statistical Testing (NHST) and p-values remain the standard inferential tools in many disciplines, including software engineering, in spite of the availability of alternative, more trustworthy approaches, e.g., inference based on confidence intervals (CIs) instead of p-values, or Bayesian approaches that avoid the pitfalls of NHST. The aim of the special issue is to stimulate awareness and uptake of recent advances in these crucial areas, reproducible research and modern statistical methods, by the software engineering community; that is, to increase the uptake of reproducible research methods and tools, as well as robust, Bayesian and meta-analysis statistical methods. In particular, the objective is to show examples of empirical software engineering research that employs the aforementioned methods and tools to evaluate software engineering methodologies, practices, technologies and tools, allowing more credible evidence-based decisions.
TOPICS OF INTEREST
We solicit high-quality research articles, guidelines and review articles concerning quantitative and/or qualitative empirical software engineering research and practice, focused on topics which include, but are not limited to, proposals, uses, reviews and/or evaluations of:

• Reproducible research tools or methods (e.g., employing reproducible research in empirical software engineering)
• Statistical methods addressing the pitfalls of the classic statistical methods or leading to more trustworthy results (e.g., employing robust statistical methods, Bayesian methods or meta-analyses in empirical software engineering)

Using both a reproducible research environment and modern statistical methods (e.g., robust methods, Bayesian methods, meta-analyses) in software engineering (e.g., to perform an empirical evaluation of methods, practices, methodologies, technologies and tools) would be an additional advantage.
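Of the methods named above, a fixed-effect meta-analysis is perhaps the simplest to sketch: each study's effect size is weighted by the inverse of its sampling variance, yielding a pooled estimate with a narrower confidence interval than any single study. The effect sizes and variances below are hypothetical, chosen only to illustrate the computation.

```python
# Sketch of a fixed-effect (inverse-variance) meta-analysis.
# Per-study effect sizes and sampling variances are hypothetical.

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted pooled effect and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

if __name__ == "__main__":
    effects = [0.30, 0.45, 0.25, 0.50]    # e.g., standardised mean differences
    variances = [0.04, 0.09, 0.02, 0.12]  # per-study sampling variances
    est, var = fixed_effect_pool(effects, variances)
    half = 1.96 * var ** 0.5              # normal-approximation 95% CI
    print(f"pooled effect = {est:.3f}, "
          f"95% CI [{est - half:.3f}, {est + half:.3f}]")
```

Note that the pooled variance is smaller than every per-study variance, which is the formal sense in which combining studies yields more credible evidence than any single primary study.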
Other means of enhancing the credibility of empirical software engineering are also within the scope of the special issue.
SUBMISSION AND IMPORTANT DATES
The special issue's paper submission page is available at https://www.evise.com/profile/#/INFSOF/login. When submitting a manuscript for this special issue, please select “Special issue: Enhancing Credibility of Empirical SE” as the article type. Formatting templates can be found at https://www.elsevier.com/authors/author-schemas/latex-instructions. Please note that Information and Software Technology prescribes the use of a “structured abstract” including the following components: Context, Objective, Method, Results and Conclusions. Contributions must not have been previously published or be under consideration for publication elsewhere. A submission extended from a previous conference version must contain at least 30% new material.
Last updated by Dou Sun on 2017-01-15
Special Issue on Visual Analytics in Software Engineering
Submission Date: 2017-05-31

In recent years there has been tremendous development in handling large quantities of data across different aspects of software product development. One such area is the ability to understand customers and collect data from in-field product usage; another is the ability to use large quantities of product-development data to optimise software development flows. As a community we have also made great progress in extracting data from software development systems such as source code management systems (e.g. git), defect management systems (e.g. Jira) and many more. Based on these two observations, visual analytics in software engineering is gaining importance. The ability to quickly and accurately understand software and its context is important for software engineers, business analysts, software product managers and other roles involved in software development. In this special issue we recognise the challenges of using large quantities of data in software engineering and solicit original contributions in the area of visual analytics in software engineering. We solicit papers that contribute to both the theory and practice of visual analytics techniques such as software visualisation, mining data from software repositories, and supporting program/artefact comprehension. The goal of the special issue is to enable the software engineering community to stay up to date with recent developments in visual analytics.
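A typical first step in mining a source code management system for visual analytics is aggregating raw history into per-entity counts that a chart can consume. The sketch below assumes log lines produced by `git log --pretty=format:"%an|%ad" --date=short`; the helper name and sample lines are hypothetical.

```python
# Illustrative repository-mining step that often feeds a visual
# analytics dashboard: counting commits per author from git log
# output formatted as "author|date" lines (one commit per line).
from collections import Counter

def commits_per_author(log_lines):
    """Aggregate commit counts by author from 'author|date' lines."""
    counts = Counter()
    for line in log_lines:
        author, _, _date = line.partition("|")
        counts[author.strip()] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical sample of pre-formatted git log output:
    sample = [
        "Alice|2017-01-02",
        "Bob|2017-01-03",
        "Alice|2017-01-05",
    ]
    for author, n in commits_per_author(sample).most_common():
        print(f"{author}: {n}")
```

In practice the resulting counts would be handed to a charting library; the aggregation step shown here is the repository-mining part the call for papers refers to.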
We solicit papers on the following topics (but not limited to the list below):

- Visualisation to support program comprehension, software testing, and debugging
- Tool support for software visual analytics
- Integration of software visualisation tools and development environments
- Cognitive theories for visual analytics and software comprehension, including experiments, empirical studies, and case studies
- Human aspects in visual analytics with application in software engineering
- Empirical evaluations of software visual analytics tools, techniques, and approaches
- Task-specific visualisation support for software engineering tasks
- Interaction techniques and algorithms for software visualisation
- Visual analytics and legal issues, such as due diligence, intellectual property, and reverse engineering
- Visualisation-based techniques in computer science and software engineering education
- Issues and case studies in the transfer of visual analytics and software comprehension technology to industry
- Industrial experience on using software visualisation
- Innovative visualisation and visual analytics techniques for software engineering data
Last updated by Dou Sun on 2017-01-14
Related Conferences
CCF/CORE/QUALIS | Short | Full Name | Submission | Notification | Conference
 | WIMS | International Conference on Web Intelligence, Mining and Semantics | 2015-05-16 | 2015-06-06 | 2015-07-13
 | ICAISED | International Conference on advance Information System, E-Education & Development | 2012-10-31 | 2012-11-05 | 2013-11-02
 | CA | International Conference on Control and Automation | 2015-10-10 | 2015-10-30 | 2015-11-25
ca2 | ISPD | International Symposium on Physical Design | 2016-10-03 | 2016-11-17 | 2017-03-19
bb | SPM | Symposium on Solid and Physical Modeling | 2017-03-03 | 2017-04-07 | 2017-06-19
 | WebApps | USENIX Conference on Web Application Development | 2012-01-23 | 2012-03-26 | 2012-06-13
 | WICT | World Congress on Information and Communication Technologies | 2012-08-31 | 2012-09-20 | 2012-10-27
 | CSEIT | International Conference on Computer Science, Engineering and Information Technology | 2016-11-12 | 2016-11-30 | 2016-12-23