Conference Information
PACT 2026: International Conference on Parallel Architectures and Compilation Techniques
Submission Deadline: 2026-04-17
Notification Date: 2026-08-05
Conference Date: 2026-10-19
Location: Chicago, Illinois, USA
Years: 35
CCF: b CORE: a QUALIS: a2
Call for Papers
Scope
The International Conference on Parallel Architectures and Compilation Techniques (PACT) is a unique technical conference at the intersection of hardware and software, with a special emphasis on parallelism. PACT brings together researchers from computer architectures, compilers, execution environments, programming languages, and applications to present and discuss their latest research results, tools, and practical experiences. This year, PACT is specifically committed to pioneering AI-centric computing, seeking research that redefines the performance, scalability, and efficiency of large-scale AI workloads across diverse parallel and heterogeneous platforms.
PACT 2026 will be held as an in-person event in Chicago, IL, USA. We encourage all authors of accepted papers to participate, and at least one author of each accepted paper must attend the conference.
PACT seeks submissions in two categories:
Research Papers
Tools and Practical Experience (TPE) Papers
Topics of Interest
PACT welcomes submissions on topics including, but not limited to:
Parallel architectures, including accelerators for AI and other domains
Conventional parallel architectures (e.g., multicore, multithreaded, superscalar, and VLIW architectures) and heterogeneous architectures
AI accelerators: design of specialized hardware for LLM inference and training (e.g., TPUs, NPUs, and custom silicon)
In-memory & near-data processing: architectures to mitigate the “memory wall” in massive AI model parameters
Heterogeneous systems: integration of CPUs, GPUs, and FPGAs for distributed AI workloads
Scalable AI Infrastructure: Architecture support for multi-node, multi-GPU clusters and high-speed interconnects for LLM scaling
Compilers and tools for parallel architectures
Conventional compilers and tools for parallel and heterogeneous architectures
Dynamic translation and optimization
ML compilers: automated optimization, kernel fusion, and code generation for ML frameworks
LLMs for compilation: using AI to automate parallelization, loop transformations, and autotuning
Dynamic optimization: runtime systems for adaptive AI model execution and sparse computation
Quantization & compression: compiler-assisted techniques for model pruning and low-precision arithmetic
Middleware and runtime system support for parallel computing
Resource management & scheduling
Communication & synchronization
Energy-aware middleware
Quantum-HPC interfacing
Serverless parallel computing (e.g., AWS Lambda)
AI & LLM-specific runtime support, including distributed inference & training, KV cache management, and computation-communication overlap
I/O issues in parallel computing and their application impact
Data loading & preprocessing pipelines
Metadata scalability
Memory-storage convergence
Large-scale data processing for AI models and applications
Hardware and software resilience & fault tolerance
Checkpointing & restart
Silent data corruption detection
Self-healing runtimes
Applications and experimental studies of parallel processing, especially using AI models
Parallel programming languages, algorithms, and applications
Computational models for concurrent execution
Compiler and hardware support for parallel applications
Support for correctness in hardware and software
Reconfigurable parallel computing
Research Papers
Research papers will be evaluated by the PACT Program Committee based on:
Relevance: The paper should align with PACT’s topics of interest.
Novelty/Originality: The work should present new ideas or offer fresh perspectives.
Significance: The research should address an important problem and have the potential to influence future work.
Results: The claims should be well-supported by clear and validated results.
Comparison to Prior Work: The paper should properly discuss existing literature, highlighting similarities, differences, and improvements.
Tools and Practical Experience (TPE) Papers
TPE papers focus on practical applications, industry challenges, and experience reports. A TPE paper must clearly explain its functionality, summarize practical experience with realistic case studies, and describe any supporting artifacts. The title of a TPE paper must include the prefix “TPE:”. TPE papers follow the same submission guidelines and are reviewed by the same Program Committee as research papers.
TPE papers will be evaluated based on:
Originality: They should present PACT-related technologies applied to real-world problems.
Usability: The tool or software should have broad applicability and aid PACT-related research.
Documentation: The tool/software should be well-documented on a public website.
Benchmark Repository: A benchmark suite should be provided for testing.
Availability: Preference is given to tools/software that are freely available, though industry/commercial tools may be considered with justification.
Foundations: The paper should relate to PACT’s principles, though extensive theoretical discussion is not required.
Last updated by Dou Sun on 2026-02-22
Acceptance Rate
| Year | Submitted | Accepted | Accepted (%) |
|---|---|---|---|
| 2005 | 119 | 30 | 25.2% |
| 2004 | 122 | 23 | 18.9% |
| 2003 | 144 | 24 | 16.7% |
| 2002 | 119 | 25 | 21.0% |
| 2001 | 126 | 26 | 20.6% |
| 2000 | 107 | 29 | 27.1% |
| 1999 | 114 | 35 | 30.7% |
Best Papers
| Year | Best Paper |
|---|---|
| 2016 | Scalable Task Parallelism for NUMA: A Uniform Abstraction for Coordinated Scheduling and Memory Management |
Related Conferences
Related Journals
| CCF | Full Name | Impact Factor | Publisher | ISSN |
|---|---|---|---|---|
| a | IEEE Journal on Selected Areas in Communications | 17.2 | IEEE | 0733-8716 |
| | Archives of Computational Methods in Engineering | 12.1 | Springer | 1134-3060 |
| b | Journal of Parallel and Distributed Computing | 4.0 | Elsevier | 0743-7315 |
| | Ethics and Information Technology | 4.0 | Springer | 1388-1957 |
| b | Parallel Computing | 2.1 | Elsevier | 0167-8191 |
| | Journal of Computer Virology and Hacking Techniques | 1.9 | Springer | 2263-8733 |
| c | Service Oriented Computing and Applications | 1.7 | Springer | 1863-2386 |
| a | ACM Transactions on Architecture and Code Optimization | 1.5 | ACM | 1544-3566 |
| | Proceedings of the ACM in Computer Graphics and Interactive Techniques | 1.4 | ACM | 2577-6193 |
| | ACM Transactions on Parallel Computing | 0.9 | ACM | 2329-4949 |