Conference Information
CTML 2026: International Conference on Computational Theory and Machine Learning

Submission Deadline:
2026-06-30
Notification Date:
2026-07-30
Conference Date:
2026-11-27
Location:
Rio de Janeiro, Brazil

Call for Papers
The 2026 International Conference on Computational Theory and Machine Learning (CTML 2026) will bring together leading researchers, engineers, and scientists in the field from around the world.

The topics of interest for submission include, but are not limited to:

Track I. Foundational Computational Theories in Machine Learning

• Theoretical Foundations and Boundary Derivation of Unsupervised/Weakly Supervised Learning
• Bayesian Learning, Variational Inference, and Optimization of Monte Carlo Methods
• Statistical Inference for High-Dimensional Data, Sparse Estimation, and Dimensionality Reduction Theory
• Graphical Models, Probabilistic Inference, and Computational Theory of Latent Variable Models
• Extensions of the PAC Learning Framework and Refinement of Generalization Bounds
• Optimization and Applications of VC Dimension and Rademacher Complexity Theory
• Quantitative Analysis of Sample Complexity, Time Complexity, and Space Complexity
• Theoretical Breakthroughs in Online Learning, Distributed Learning, and Adaptive Learning
• Statistical Theoretical Foundations for Few-Shot, Zero-Shot, and Meta-Learning
• Convergence Analysis of Convex Optimization, Non-Convex Optimization, and Ill-Posed Problems
• Stochastic Optimization, Distributed Optimization, and Asynchronous Optimization Algorithms
• Innovations in Sparse Optimization, Low-Rank Learning, and Regularization Theory
• Gradient-Based Algorithms, First-Order/Second-Order Optimization, and Constrained Optimization
• Game Theory and Equilibrium Computation in Multi-Agent Learning
• Stochastic Games, Bandit Algorithms, and Sequential Decision Theory
• Decision Boundaries, Value Function Approximation, and Dynamic Programming in Reinforcement Learning
• Theoretical Mechanisms of Adversarial Learning and Robust Decision Optimization

Track II. Advanced Machine Learning Models and Computational Architectures

• Foundational Theories of Large Language Models (LLMs), Scaling Laws, and Sparse Computation
• Convergence, Controllable Generation, and Alignment Theory for Generative Models
• Unified Representation and Cross-Modal Computation in Large Multimodal Models
• Efficient Inference, Pre-Training, and Fine-Tuning Computation Optimization for Large Models
• Theoretical Innovations in Generative Architectures: Diffusion Models, GANs, VAEs, etc.
• Theoretical Analysis of Architectures: Transformers, CNNs, GNNs, etc.
• Dimensionality Reduction in Self-Supervised Learning, Contrastive Learning, and Representation Learning
• Generalization, Overfitting, and Catastrophic Forgetting Theory in Deep Models
• Neural Architecture Search and Computational Theory of Dynamic Networks
• Distributed Training Theory: Data/Model Parallelism and Communication Optimization
• Complexity of Parallel Algorithms, Fault-Tolerance Mechanisms, and Cluster Scheduling
• Federated Learning Computational Frameworks: Heterogeneous Collaboration and Aggregation Algorithms
• Decentralized Machine Learning and Edge Distributed Computing
• Theories of Model Compression, Quantization, Pruning, and Knowledge Distillation
• Edge Inference Optimization, Low-Power Computation, and Latency Control
• Cloud-Edge Collaborative Computing, Incremental Updating, and Online Adaptation
• Tiny Neural Networks and Embedded Machine Learning Computation

Track III. Trustworthy, Secure, and Explainable Machine Learning Computation

• Model Interpretability Metrics, Attribution Algorithms, and Causal Inference
• Theoretical Decomposition, Decision Path Visualization, and Verification of Black-Box Models
• Causal Machine Learning, Counterfactual Reasoning, and Explainable Representations
• Post-Hoc Explanation, Intrinsically Interpretable Models, and Logical Reasoning
• Differential Privacy Theory, Noise Mechanisms, and Utility Trade-Offs
• Homomorphic Encryption, Secure Multi-Party Computation, and Privacy Aggregation
• Federated Privacy Computing, Data Masking, and Anonymization Theory
• Defense Against Membership Inference Attacks and Data Leakage Prevention
• Adversarial Robustness Theory, Perturbation Defenses, and Distribution Shift Adaptation
• Algorithmic Fairness Metrics, Bias Mitigation, and Group Equilibrium
• Anomaly Detection, Robustness to Noisy Data, and Out-of-Distribution Generalization
• Fairness-Constrained Optimization and Robust Decision System Design
• Adversarial Example Generation, Attack-Defense Games, and Robustness Verification
• Defenses Against Model Poisoning, Data Poisoning, and Traceability Mechanisms
• Model Intellectual Property Protection, Piracy Detection, and Watermarking Techniques
• Malicious AI Detection, Backdoor Attacks, and Defensive Computation

Track IV. Cross-Domain Computation and Interdisciplinary Innovations in Machine Learning

• Physics-Informed Neural Networks and Machine Learning Solutions for Differential Equations
• Scientific Data Fitting, Numerical Simulation, and Inverse Problem Computation
• Quantum Machine Learning, Quantum Algorithms, and Integration with Quantum Computing
• Bioinformatics, Computational Chemistry, and Machine Learning Applications in Materials Science
• Hierarchical Reinforcement Learning, Offline Reinforcement Learning, and Off-Policy Evaluation
• Continuous Space Decision-Making, Stochastic Control, and Dynamic System Optimization
• Multi-Agent Collaboration, Game-Theoretic Control, and Distributed Decision-Making
• Robot Learning, Adaptive Control, and Real-Time Decision Optimization
• Large-Scale Graph Learning, Network Representation, and Graph Neural Network Computation
• Time Series Reasoning, Streaming Data Mining, and Dynamic Pattern Recognition
• Knowledge Graphs, Logical Reasoning, and Integration with Machine Learning
• Anomaly Pattern Mining, Incremental Learning, and Lifelong Learning Computation
• Neuro-Symbolic Computation, Brain-Inspired Machine Learning, and Neuromorphic Computing
• Chaos Theory, Complex Systems, and Coupling with Machine Learning
• Non-Von Neumann Architectures, In-Memory Computing, and Machine Learning Adaptation
• Photonic Computing, Biological Computing, and Cross-Disciplinary Integration with Machine Learning

Track V. Machine Learning Systems and Engineering Computation

• Automated Machine Learning (AutoML), Hyperparameter Optimization, and Neural Architecture Search
• ML Pipeline Scheduling, Resource Allocation, and Load Balancing Optimization
• Model Deployment, Serving, and Dynamic Scaling Computational Theory
• Automated Feature Engineering, Data Preprocessing, and Computational Cost Control
• Streaming Data, Real-Time Data Learning Computation, and Incremental Updating
• High-Dimensional Data Reduction, Imbalanced Data Processing, and Sample Selection
• Data Quality, Noise Cleaning, and Trade-Offs in Computational Accuracy
• Distributed Storage, Indexing, and Computational Coordination for Massive Data
• Hardware Adaptation and Operator Optimization for GPU/TPU/NPU/ASIC
• Co-Design of Compute-In-Memory, Near-Memory Computing, and Machine Learning Models
• Chip-Level Co-Optimization, Low-Latency Inference, and Hardware Deployment
• Heterogeneous Hardware Clusters, Compute Scheduling, and Energy Efficiency Optimization 
Last updated by Dou Sun, 2026-05-09