Conference Information
ICMI 2026: International Conference on Multimodal Interaction

Submission Date:
2026-04-13
Notification Date:
2026-07-01
Conference Date:
2026-10-05
Location:
Napoli, Italy
Edition:
28th
CCF: c   CORE: b   QUALIS: a2   Views: 74385   Tracked: 46   Attending: 14

Call for Papers
The 28th International Conference on Multimodal Interaction (ICMI 2026) will be held in Napoli, Italy. ICMI is the premier international forum for advancing research at the intersection of multimodal artificial intelligence (AI) and social interaction to create technically innovative, effective, and human-centered multimodal interactive systems. A unique aspect of ICMI is its multidisciplinary nature, bringing together research in AI, multimodal data processing, and human-human interaction to bridge behavioral understanding with technology, with an eye toward impactful applications that benefit people and society.

Novelty will be evaluated along two dimensions: scientific novelty and technical novelty. Accepted papers at ICMI 2026 must demonstrate novelty in at least one of these two dimensions.

The theme of this year’s conference is “Context and Cultural Awareness for Multimodal Interaction”, to explore how context and cultural factors influence multimodal interaction systems, including their design, implementation, and evaluation. We welcome papers that address the integration of contextual understanding, such as environmental, social, and emotional factors, into multimodal interaction systems. We also encourage contributions that explore cultural considerations in the development and deployment of interactive technologies.

Topics of interest include, but are not limited to:

Affective computing and interaction
User-adaptive systems
Cognitive modelling and multimodal interaction
Context-aware modelling
Cross-cultural design and evaluation
Gesture, touch, and haptics
Healthcare, assistive technologies
Human communication dynamics
Human-robot/agent multimodal interaction
Human-centred AI and ethics
Interaction with a smart environment
Machine learning for multimodal interaction
Mobile and wearable multimodal systems
Multimodal behaviour generation
Multimodal datasets and validation
Multimodal dialogue modeling
Multimodal fusion and representation
Multimodal interactive applications
Novel multimodal datasets
Spoken/visual behaviours in social interaction
System components and multimodal platforms
Virtual/augmented reality and multimodal interaction

Commitment to ethical conduct is mandatory, and submissions must adhere to ethical standards, in particular when human-derived data are employed. Authors are encouraged to consult the ACM Code of Ethics and Professional Conduct (https://ethics.acm.org/).
Last updated by Dou Sun, 2026-03-11
Best Papers
Year | Best Paper
2025 | Speech-to-Joy: Self-Supervised Features for Enjoyment Prediction in Human–Robot Conversation
2025 | SpikEy: Preventing Drink Spiking using Wearables
2025 | Exploring the effects of force feedback on VR Keyboards with varying visual designs
2023 | EEG-based Cognitive Load Classification using Feature Masked Autoencoding and Emotion Transfer Learning
2023 | AQ-GT: a Temporally Aligned and Quantized GRU-Transformer for Co-Speech Gesture Synthesis
2021 | A Multimodal Dataset and Evaluation for Feature Estimators of Temporal Phases of Anxiety
2021 | Exploiting the Interplay between Social and Task Dimensions of Cohesion to Predict its Dynamics Leveraging Social Sciences
2021 | Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis
2020 | SmellControl: The Study of Sense of Agency in Smell
2020 | A Neural Architecture for Detecting User Confusion in Eye-tracking Data
2019 | The Dyslexperience: Use of Projection Mapping to Simulate Dyslexia
2019 | Unintended Bias in Misogyny Detection
2019 | Multimodal Analysis and Estimation of Intimate Self-Disclosure
2019 | Modeling Team-level Multimodal Dynamics during Multiparty Collaboration
2016 | Help Me if You Can: Towards Multiadaptive Interaction Platforms
2016 | Trust Me: Multimodal Signals of Trustworthiness
2016 | Adaptive Review for Mobile MOOC Learning via Implicit Physiological Signal Sensing
2016 | Visuotactile Integration for Depth Perception in Augmented Reality
2016 | Automatic Recognition of Self-reported and Perceived Emotion: Does Joint Modeling Help?
Related Conferences
Ratings (CCF/CORE/QUALIS) | Abbrev. | Full Name | Submission Date | Notification Date | Conference Date
c/b/a2 | ICMI | International Conference on Multimodal Interaction | 2026-04-13 | 2026-07-01 | 2026-10-05
b3 | INDIN | International Conference on Industrial Informatics | 2026-02-28 | 2026-04-15 | 2026-07-26
b3 | MMSys | ACM Multimedia Systems Conference | 2025-11-14 | 2026-01-09 | 2026-04-04
c | RV | International Conference on Runtime Verification | 2025-05-30 | 2025-07-11 | 2025-09-15
c/a2 | SMI | Shape Modeling International | 2025-04-14 | 2025-07-14 | 2025-10-29
c/a/b1 | INTERACT | International Conference on Human-Computer Interaction | 2025-02-17 | 2025-04-28 | 2025-09-08
a2 | HRI | International Conference on Human-Robot Interaction | 2024-09-23 | 2024-12-02 | 2025-03-04
c | ACHI | International Conference on Advances in Computer-Human Interactions | 2023-02-01 | 2023-02-28 | 2023-04-24
b2 | HCII | International Conference on Human-Computer Interaction | 2015-11-06 | 2015-12-04 | 2016-07-17
c | GI' | IEEE Global Internet Symposium | 2012-12-23 | 2013-01-24 | 2013-04-19
Related Journals
CCF | Full Name | Impact Factor | Publisher | ISSN
c | IEEE Transactions on Industrial Informatics | 11.7 | IEEE | 1551-3203
a | ACM Transactions on Computer-Human Interaction | 6.6 | ACM | 1073-0516
- | Ceramics International | 5.6 | Elsevier | 0272-8842
- | ACM Transactions on Human-Robot Interaction | 5.5 | ACM | 2573-9522
b | International Journal of Human-Computer Interaction | 4.9 | Taylor & Francis | 1044-7318
c | Journal of Biomedical Informatics | 4.5 | Elsevier | 1532-0464
c | Multimedia Systems | 3.1 | Springer | 0942-4962
- | International Journal of Multimedia Information Retrieval | 2.9 | Springer | 2192-6611
b | ACM Transactions on Applied Perception | 2.1 | ACM | 1544-3558
- | Interaction Studies | 0.900 | John Benjamins Publishing Company | 1572-0373