Journal Information
Robot Learning
https://www.elspub.com/journals/rl/home
Publisher: ELSP
ISSN: 2960-1436
Call For Papers
Robot Learning is a peer-reviewed open access journal that publishes original work in all areas of robot learning, covering both theoretical research and practical applications.

Topics of interest include but are not limited to the following:

    Learning-enhanced perception for robotics
    Learning-enhanced robot planning
    Learning-enhanced robot manipulation
    Learning-enhanced robot control
    Learning methods for human-robot coordination
    Learning methods for multi-robot cooperation or confrontation
    Learning methods for self-driving
    Learning-based robotic warehousing
    Learning-enhanced intelligent transportation
    Learning methods for bionic robotics and medical robotics
    Learning methods for UAVs and USVs
    Deep learning, imitation learning, and reinforcement learning for robotic systems
    Sim-to-Real transfer for robotic applications
    Datasets, benchmarks, and simulators for robot learning
    Applications of robot learning
Last updated by Dou Sun on 2025-11-28
Special Issues
Special Issue on Human-Robot Interaction and Human-Centered Robotics
Submission Date: 2025-12-31

Robots are becoming an integral part of human society and are being deployed across diverse scenarios where humans are present. From households (e.g., kitchens and bedrooms) to public spaces (e.g., airports and banks) and service settings (e.g., hospitals and elder-care facilities), robotics technology is increasingly being integrated into environments shaped by human needs. This trend is driving both academia and industry to push the boundaries of robotic technologies into challenging and dynamic working environments. The growing presence of robots in human-centric settings introduces exciting opportunities but also significant challenges, and ensuring safe and effective human-robot collaboration is paramount. For robots to coexist with humans in these environments, they must operate safely, interact naturally, and accomplish tasks efficiently, even in the presence of human disturbance or assistance. This special issue aims to collect high-quality research contributions addressing the challenges and opportunities in human-robot interaction (HRI) and human-centered robotics. We invite submissions that focus on theoretical advancements, novel algorithms, and practical applications in relevant domains.

Topics of interest include, but are not limited to:

    Learning algorithms: machine learning approaches, including imitation learning and reinforcement learning, specifically designed for human-centric tasks
    Reactive and predictive control: advanced control strategies that can deal with unpredictable or dynamic human behaviors
    Multi-modal perception: techniques for robots to interpret multi-modal sensory data (e.g., visual, auditory, tactile), including generative or foundation models
    Safety: ensuring the safety of both humans and robots during interactions in shared environments
    Healthcare services: assistive robot systems developed for medical settings, eldercare, and rehabilitation applications
    Physical and/or remote interaction: robots engaging with humans through physical forces or remotely via visual, auditory, or verbal communication
    Human intention understanding: inferring human goals, emotions, and intentions to enable seamless collaboration and effective interaction
    Review and tutorial papers: comprehensive reviews or tutorials discussing key topics in human-robot interaction and human-centered robotics
Last updated by Dou Sun on 2025-11-28
Special Issue on Human-in-the-Loop Robot Learning in the Era of Foundation Models: Challenges and Opportunities
Submission Date: 2025-12-31

This interdisciplinary special issue focuses on the latest advancements in human-in-the-loop robot learning, which integrates multi-modal human input (e.g., natural language, gestures, and haptic interaction) and online feedback (e.g., rewards, corrections, and preferences) to enhance robot performance, adaptability, and alignment with human intentions. Recent breakthroughs in foundation models, such as Large Language Models (LLMs), Vision-Language Models (VLMs), and Vision-Language-Action Models (VLAs), have provided robots with unprecedented perception and reasoning capabilities. However, effectively integrating these models into robotic systems remains an emerging and underexplored challenge. This special issue aims to gather high-quality research contributions that address the challenges and opportunities in synergistically combining foundation models and human-in-the-loop learning to advance robot learning through active and intuitive human participation. We invite submissions that explore the following critical themes: (1) leveraging foundation models for adaptive and generalizable robot learning in complex dynamic environments, (2) incorporating real-time human feedback to refine learning processes, and (3) designing frameworks for safe and trustworthy human-robot interaction. A small illustrative sketch of theme (2) is given after the topic list below.

Topics of interest include, but are not limited to:

    Human-AI collaboration for robot learning
    Human-AI hybrid intelligence
    Foundation models for robotics
    Transfer learning and fine-tuning of foundation models for robotic applications
    Knowledge representation and reasoning in robots
    Human feedback in robot learning
    Human-in-the-loop reinforcement learning
    Learning from demonstrations and corrections
    Interactive robot learning
    Multi-task robot learning
    Architectures and frameworks for human-in-the-loop learning
    Cognitive models for robot learning
    Adaptive human-robot interaction
    Safety and robustness in human-robot collaboration

We welcome original research articles, reviews, and case studies that contribute to the theoretical, algorithmic, and practical aspects of human-in-the-loop robot learning. Submissions should highlight novel methodologies, experimental validations, and real-world applications that advance the state of the art in this rapidly evolving field. Once a manuscript is received for this special issue, it will proceed directly to the review process.
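To make theme (2) above concrete, here is a minimal, purely illustrative sketch of human-in-the-loop reinforcement learning: a simulated human approval signal is blended into the reward of a tabular Q-learning agent on a toy five-state corridor. Every name and value in it (env_step, human_feedback, the shaping weight beta, the corridor itself) is an assumption made for illustration and does not come from this call.

    import random

    # Toy deterministic corridor: states 0..4, goal at state 4 (all hypothetical).
    N_STATES, GOAL = 5, 4
    ACTIONS = (-1, +1)  # step left / step right

    def env_step(state, action):
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        return nxt, reward, nxt == GOAL

    def human_feedback(state, action):
        # Stand-in for an online human signal (e.g., an approve/disapprove button):
        # this simulated "human" mildly approves moves toward the goal.
        return 0.2 if action == +1 else -0.2

    def train(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1, beta=1.0):
        q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
        for _ in range(episodes):
            s, done = 0, False
            while not done:
                a = (random.choice(ACTIONS) if random.random() < epsilon
                     else max(ACTIONS, key=lambda x: q[(s, x)]))
                s2, r_env, done = env_step(s, a)
                # Blend the human shaping signal into the environment reward.
                r = r_env + beta * human_feedback(s, a)
                target = r if done else r + gamma * max(q[(s2, x)] for x in ACTIONS)
                q[(s, a)] += alpha * (target - q[(s, a)])
                s = s2
        return q

    if __name__ == "__main__":
        q = train()
        # Greedy policy for the non-terminal states; expected: +1 (move right) everywhere.
        print({s: max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(GOAL)})

In practice, the form of the human signal (preferences, corrections, language) and how heavily it is weighted against the task reward are core design questions for submissions to this special issue; the sketch only marks where such feedback enters the learning update.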
Last updated by Dou Sun on 2025-11-28
Special Issue on Learning Based Robot Path and Task Planning
Submission Date: 2026-01-20

The path/task planning problem is one of the most fundamental problems in robotics. Path planning requires generating the shortest path for the robot from a given starting point to a target point while satisfying spatial constraints. Multiple factors must be considered, starting from understanding the task requirements, modeling the robot's dynamics and environment, and defining the target and any constraints; collision checking for the planned trajectory/path is also necessary. Task planning requires finding a sequence of primitive motion commands that solves a given task. On each robot, a task planner automatically converts the robot's world model and skill definitions into a planning problem, which is then solved to find the sequence of actions the robot should perform to complete its mission. Many traditional path/task planning methods have been developed for robotics, and recent advances in AI are having an increasing impact on path/task planning research. For example, large language models (LLMs) have been used to augment robotic path/task planning built on traditional methods such as A* and reinforcement learning (a minimal grid-based sketch of such a classical baseline is given after the topic list below). Because the real world is largely uncertain and dynamic, robotic path/task planning needs to adapt to uncertainty and change. This is especially important in safety-critical applications, e.g., robots operating in our living environments and field robots such as underwater robots operating in hazardous environments. The aim of this research topic is to cover recent advances and trends in path/task planning for robotics.

Areas to be covered in this research topic include, but are not limited to:

    Deep reinforcement learning for robotic path and task planning in simulation and on real robot platforms
    LLM-augmented robotic path and task planning
    Robotic path/task planning with adaptive world models
    Human-centered reinforcement learning, imitation learning, learning from demonstration, and learning from observation for robotic path and task planning
    Deep learning approaches for robotic motion planning
    Safe reinforcement learning for robotic path and task planning
    Learning-based task planning for multi-robot systems
    Other related topics
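For concreteness, the following is a minimal, textbook-style sketch of the kind of classical baseline mentioned above: A* search for the shortest collision-free path on a small occupancy grid. The 4-connected neighborhood, the Manhattan heuristic, and the example grid are illustrative assumptions rather than anything prescribed by this call.

    import heapq

    def astar(grid, start, goal):
        """Shortest 4-connected path on a 0/1 occupancy grid (1 = obstacle)."""
        rows, cols = len(grid), len(grid[0])

        def h(p):  # Manhattan-distance heuristic to the goal
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

        counter = 0  # tie-breaker so the heap never compares cells or parents
        open_set = [(h(start), 0, counter, start, None)]
        came_from, g_cost = {}, {start: 0}
        while open_set:
            _, g, _, cur, parent = heapq.heappop(open_set)
            if cur in came_from:  # already expanded via a cheaper route
                continue
            came_from[cur] = parent
            if cur == goal:  # walk the parent links back to the start
                path = [cur]
                while came_from[path[-1]] is not None:
                    path.append(came_from[path[-1]])
                return path[::-1]
            r, c = cur
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    ng = g + 1
                    if ng < g_cost.get(nxt, float("inf")):
                        g_cost[nxt] = ng
                        counter += 1
                        heapq.heappush(open_set, (ng + h(nxt), ng, counter, nxt, cur))
        return None  # no collision-free path exists

    if __name__ == "__main__":
        occupancy = [[0, 0, 0, 0],
                     [1, 1, 0, 1],
                     [0, 0, 0, 0]]
        print(astar(occupancy, start=(0, 0), goal=(2, 0)))

Learning-based planners within the scope of this research topic typically replace or augment parts of such a pipeline, e.g., learning the heuristic, the world model, or the action sequence directly from data.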
Last updated by Dou Sun on 2025-11-28
Related Journals
CCF | Full Name | Impact Factor | Publisher | ISSN
- | Robot Learning | - | ELSP | 2960-1436
b | Machine Learning | 4.300 | Springer | 0885-6125
- | International Journal of Robotics Research | 5.0 | SAGE | 0278-3649
- | Electronic Journal of e-Learning | - | Academic Publishing Limited | 1479-4403
- | Journal of Computer Assisted Learning | 5.100 | Wiley-Blackwell | 0266-4909
- | Robotica | 2.700 | Cambridge University Press | 0263-5747
- | ACM Transactions on Probabilistic Machine Learning | - | ACM | 0000-0000
b | IEEE Transactions on Robotics | 10.5 | IEEE | 1552-3098
- | Journal of Robotics | 1.400 | Hindawi | 1687-9600
- | Robotics | - | MDPI | 2218-6581