Conference Information
ESEM 2026: International Symposium on Empirical Software Engineering and Measurement

Submission Date: 2026-04-22
Notification Date: 2026-06-30
Conference Date: 2026-10-04
Location: München, Germany
Years: 20
CCF: b   CORE: a   QUALIS: a2

Call For Papers
The International Symposium on Empirical Software Engineering and Measurement (ESEM) technical papers (main) track features submissions that describe original, unpublished work in software engineering and software measurement, with a strong empirical foundation. Papers in this track should communicate fully developed research and results. Strong emphasis should be given to the methodological aspects of the research and the assessment of the validity of the contributions.

Please note:

Open Science policy: ESEM is open by default. Submissions must comply with the Open Science Policy and provide the artifacts needed to understand and reproduce the analysis, unless legal/ethical barriers exist (see details on Data Availability below).
Double-anonymous review: ESEM 2026 uses double-anonymous reviewing. Submissions must not reveal the authors' identities.

General scope of submissions

Submissions must not be under consideration for publication or presentation elsewhere. In addition to the specific scope of this track, submissions may address any aspect of software engineering, but must tackle the problem from an empirical perspective using a rigorous empirical method, including (but not limited to):

Empirical studies using qualitative, quantitative, and mixed methods
Cross- and multi-disciplinary methods and studies
Controlled experiments and quasi-experiments
Case studies, action research, ethnography, and field studies
Survey research
Simulation studies
Artifact studies
Data mining using statistical, machine learning, and AI-based approaches
Meta-studies and evidence syntheses, including:
  Systematic literature reviews and rapid reviews with a strong synthesis component and a clear contribution beyond overview summarization
  Meta-analyses, and qualitative, quantitative, or structured syntheses of prior studies that generate clear new insights
Replication studies

Papers should be positioned in terms of research methodology and contribution in relation to established frameworks for empirical software engineering and measurement.

Negative results are also welcome, provided the research is based on well-motivated hypotheses in line with state-of-the-art scientific knowledge, is well documented and analysed in depth, and provides clear value to the community.

Topics of interest (illustrative, not exhaustive)

Topics in scope for ESEM 2026 and commonly addressed using an empirical approach include, but are not limited to:

Evaluation and comparison of software models, tools, techniques, and practices
Modeling, measuring, and assessing product or process quality and productivity
Continuous software engineering
Software verification and validation, including analysis and testing
Engineering of software systems that include machine learning components and data dependencies
Applications of software engineering to different types of systems and domains (e.g., IoT, cyber-physical systems, Industry 4.0)
Human factors, teamwork, and behavioral aspects of software engineering

We also welcome submissions that relate to meta-topics around empirical software engineering research, including for example:

Development, evaluation, and comparison of empirical approaches and methods
Infrastructure, techniques, and tools for conducting and supporting empirical studies
Last updated by Dou Sun on 2026-03-07
Best Papers
Year | Best Paper
2018 | A Longitudinal Cohort Study on the Retainment of Test-driven Development
2018 | Relationship Between Geographical Location and Evaluation of Developer Contributions in Github
2018 | What if a Bug has a Different Origin? Making Sense of Bugs Without an Explicit Bug Introducing Change
2018 | Automatic Topic Classification of Test Cases Using Text Mining at an Android Smartphone Vendor
2018 | Prediction of relatedness in stack overflow: deep learning vs. SVM: a reproducibility study
2018 | Comparing Techniques for Aggregating Interrelated Replications in Software Engineering
2017 | Quantifying the Transition from Python 2 to 3: An Empirical Study of Python Applications
2017 | An empirical analysis of FLOSS repositories to compare One-Time Contributors to Core and Periphery Developers
2017 | REACT: An Approach for Capturing Rationale in Chat Messages
2017 | Early Phase Cost Models for Agile Software Processes in the US DoD
2016 | Experiences from Measuring Learning Potential and Performance in Large-Scale Distributed Software Development
2016 | Using Forward Snowballing to update Systematic Reviews in Software Engineering
2016 | An External Replication on the Effects of Test-driven Development Using Blind Analysis
2015 | An Exploratory Study on the Evolution of C Programming in the Unix Operating System
2015 | How to Make Best Use of Cross-Company Data for Web Effort Estimation?
2015 | Don't Call Us, We'll Call You: Characterizing Callbacks in JavaScript
2014 | Evaluating strategies for study selection in systematic literature studies
2014 | Networking in a Large-Scale Distributed Agile Project
2014 | Towards a Framework to Support Large Scale Sampling in Software Engineering Surveys
2014 | Discovering Buffer Overflow Vulnerabilities In The Wild: An Empirical Study
2013 | Towards a Metric Suite Proposal to Quantify Confirmation Biases of Developers
2013 | Evaluating software product metrics with synthetic defect data
2013 | Benchmarking Usability and Performance of Multicore Languages
2012 | Handling Categorical Variables in Effort Estimation
2012 | Experimental Assessment of Software Metrics using Automated Refactoring
2011 | One Technique is Not Enough: A Comparison of Vulnerability Discovery Techniques
2011 | An Empirical Investigation of Systematic Reviews in Software Engineering
2011 | End-User Programmers and their Communities: An Artifact-Based Analysis
2011 | Design of an Empirical Study for Comparing the Usability of Concurrent Programming Languages
2011 | An Empirical Study on the Use of Team Building Criteria in Software Projects
2011 | Scrum + Engineering Practices: Experiences of Three Microsoft Teams
2010 | Transition from a plan-driven process to Scrum: a longitudinal case study on software quality
2010 | Trust dynamics in global software engineering
2010 | Are developers complying with the process: an XP study
2009 | Using differences among replications of software engineering experiments to gain knowledge
2008 | Socio-technical congruence: a framework for assessing the impact of technical and work dependencies on software development productivity
2007 | The Effects of Over and Under Sampling on Fault-prone Module Detection
2007 | Toward Reducing Fault Fix Time: Understanding Developer Behavior for the Design of Automated Fault Detection Tools
Related Conferences
CCF | CORE | QUALIS | Short | Full Name | Submission | Notification | Conference
b | a | a2 | ESEM | International Symposium on Empirical Software Engineering and Measurement | 2026-04-22 | 2026-06-30 | 2026-10-04
a | a | a1 | ASE | International Conference on Automated Software Engineering | 2026-03-26 | 2026-05-25 | 2026-10-12
a | a* | a1 | ICSE | International Conference on Software Engineering | 2025-07-11 | 2025-10-17 | 2026-04-12
c | b |  | APSEC | Asia-Pacific Software Engineering Conference | 2025-07-06 | 2025-09-13 | 2025-12-02
c |  | b1 | SEAA | Euromicro Conference on Software Engineering and Advanced Applications | 2025-05-25 | 2025-06-23 | 2025-09-10
c | c | b3 | TASE | International Symposium on Theoretical Aspects of Software Engineering | 2025-03-01 | 2025-04-01 | 2025-07-14
c |  | b3 | ICSEA | International Conference on Software Engineering Advances | 2024-06-17 | 2024-08-04 | 2024-09-29
b |  | b1 | SEFM | International Conference on Software Engineering and Formal Methods | 2022-06-20 | 2022-08-07 | 2022-09-28
a | a* | a2 | ESEC | European Software Engineering Conference | 2022-03-10 | 2022-06-14 | 2022-11-14
c |  |  | SE | International Conference on Software Engineering | 2012-09-26 | 2012-11-15 | 2013-02-11