Journal Information
Robot Learning
https://www.elspub.com/journals/rl/home
Publisher: ELSP
ISSN: 2960-1436
Viewed: 1365
Tracked: 0
Call For Papers
Robot Learning is a peer-reviewed open access journal focused on publishing original works in all areas of robot learning, covering both theoretical research and practical applications.
Topics of interest include, but are not limited to, the following:
Learning-enhanced perception for robotics
Learning-enhanced robot planning
Learning-enhanced robot manipulation
Learning-enhanced robot control
Learning methods for human-robot coordination
Learning methods for multi-robot cooperation or confrontation
Learning methods for self-driving
Learning-based robotic warehousing
Learning-enhanced intelligent transportation
Learning methods for bionic robotics and medical robotics
Learning methods for UAVs and USVs
Deep learning, imitation learning and reinforcement learning for robotic systems
Sim-to-real transfer for robotic applications
Datasets, benchmarks and simulators for robot learning
Applications of robot learning
Last updated by Dou Sun on 2025-11-28
Special Issues
Special Issue on Learning Based Robot Path and Task Planning
Submission Date: 2026-01-20
The path/task planning problem is one of the most fundamental problems in robotics.
The path planning problem requires generating the shortest path for the robot from a given start point to a target point while satisfying spatial constraints. Path planning for robots involves multiple factors, from understanding the task requirements and modeling the robot's dynamics and environment to defining the goal and any constraints; collision checking for the planned trajectory/path must also be considered. The task planning problem requires finding a sequence of primitive motion commands that solves a given task. On each robot, a task planner automatically converts the robot's world model and skill definitions into a planning problem, which is then solved to find the sequence of actions the robot should perform to complete its mission.
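As a concrete reference point, here is a minimal sketch of the shortest-path formulation described above, assuming a 4-connected occupancy grid, a Manhattan-distance heuristic, and classical A* search; the grid, connectivity, and heuristic are illustrative choices only, not requirements of this special issue.

```python
# Illustrative A* path planner on a small occupancy grid (0 = free, 1 = obstacle).
# The map, connectivity, and heuristic below are assumptions for this sketch.
import heapq

def astar(grid, start, goal):
    """Return the shortest 4-connected path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # admissible Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start)]          # priority queue of (f = g + h, g, cell)
    came_from, best_g = {}, {start: 0}

    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:                        # reconstruct path back to the start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue                       # outside the workspace
            if grid[nxt[0]][nxt[1]] == 1:
                continue                       # occupied cell: spatial constraint
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                came_from[nxt] = cur
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt))
    return None                                # no collision-free path exists

if __name__ == "__main__":
    world = [[0, 0, 0, 0],
             [1, 1, 0, 1],
             [0, 0, 0, 0],
             [0, 1, 1, 0]]
    print(astar(world, (0, 0), (3, 3)))
```

Learning-based approaches of the kind solicited here would typically replace or augment parts of such a pipeline, e.g., the heuristic, the world model, or the search itself.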
Many traditional path/task planning methods have been developed for robotics. Recent advances in AI are having an increasing impact on robotics research in path/task planning. For example, large language models (LLMs) have been used to augment robotic path/task planning alongside traditional methods such as A* and reinforcement learning. Because the real world is largely uncertain and dynamic, robotic path/task planning needs to adapt to uncertainty and change. This is especially important in safety-critical applications, e.g., robots operating in our living environments and field robots such as underwater robots operating in hazardous environments. The aim of this research topic is to cover recent advances and trends in path/task planning for robotics. Areas to be covered in this research topic include, but are not limited to:
Deep reinforcement learning for robotic path and task planning in simulation and on real robot platforms
LLM-augmented robotic path and task planning
Robotic path/task planning with adaptive world models
Human-centered reinforcement learning, imitation learning, learning from demonstration, learning from observation for robotic path and task planning
Deep learning approaches for robotic motion planning
Safe reinforcement learning for robotic path and task planning
Learning-based task planning for multi-robot systems
Other related topics
Last updated by Dou Sun on 2025-11-28
Special Issue on Intelligent Vision-Driven Robotics
Submission Date: 2026-07-31
Special Issue Editors
Dr. Peng Zhou, Great Bay University, China
Prof. David Navarro-Alarcon, The Hong Kong Polytechnic University, China
About This Special Issue
Aims and Motivation
Robotics is converging on an intelligent, vision-driven paradigm where precise geometry, robust control, and data-driven adaptation co-exist and reinforce one another. Prof. Liu’s oeuvre exemplifies this synthesis—from early uncalibrated visual servoing and grasp theory to modern soft/surgical autonomy and large-scale SLAM—providing a unifying backbone for next-generation robots that are safe, dexterous, and reliable in unstructured, visually complex environments.
This invitation-only Special Issue honors Prof. Yunhui Liu’s enduring impact on intelligent, vision-driven robotics. His work—spanning uncalibrated visual servoing, grasping and fixturing theory, motion planning, soft and continuum manipulation, surgical robotics, large-scale SLAM and 3D vision, networked teleoperation, and learning-enabled autonomy—has consistently connected rigorous theory with deployable, closed-loop systems, closing the loop between sensing and action in real-world environments. The collection is authored by Prof. Liu’s friends, former students, and close collaborators, and reflects his profound influence on vision-centered robotic intelligence.
Submission Policy
Invitation-first. This Special Issue is primarily invitation-based; invited manuscripts will be reviewed on a rolling basis.
Inquiries welcome. If you haven’t received an invitation but believe your work is a strong fit for vision-driven robotics, you’re welcome to email the Guest Editors with a brief summary. Depending on space and scope, we may be able to extend additional invitations.
Authors are encouraged to add a brief note on the relationship between their submission and Prof. Liu's academic work (e.g., which ideas, methods, or perspectives served as motivation or inspiration), consistent with the article type.
Scope and Themes
We welcome contributions that tightly integrate visual perception (including 3D geometry) with control and learning to achieve robust, generalizable, and deployable autonomy.
Visual servoing and perception-driven control: Uncalibrated/model-free schemes; eye-in-hand and fixed-camera control; observers without visual velocity; nonholonomic/mobile visual control; task-oriented and invariant visual features.
Grasping, fixturing, and dexterous manipulation: Vision- and tactility-informed grasp analysis and fixture design; multimodal sensing; soft/variable-stiffness hands; compliant/origami-inspired grippers; in-hand and textile/cable manipulation with visual feedback; geometry-aware policies.
Deformable, soft, and continuum robots: Visual/FBG-based shape sensing and reconstruction; deformation and shape servoing; constrained-environment modeling and control; hybrid model–data methods for perception–control fusion.
Surgical robotics and medical applications: Vision-centric autonomy in MRI/OR-integrated systems; autonomous endoscopic view control; instrument/tissue perception and 3D reconstruction (stereo/NeRF/Gaussian splatting); integrated perception–planning–control for safe task autonomy.
SLAM, 3D vision, and geometric learning: Point/line/vanishing-point geometry; LiDAR/visual–inertial/edge-based SLAM; transparent/reflective/medical surface reconstruction; calibration and metrology; neural and geometric scene representations for control.
Networked and human-in-the-loop robotics: Internet-based teleoperation with haptics and QoS; cooperative teleoperation; AR/gaze-based interaction; shared autonomy with intent inference; distributed estimation and coordination for multi-robot systems.
Learning for vision-driven autonomy: Self-/weakly supervised visual representations for video and 3D; RL and imitation for manipulation, surgery, and locomotion with visual feedback; sim-to-real transfer; transformer/graph models coupling perception with planning and control; grounding policies in geometric priors.
Field and industrial robotics: Vision-centric construction and finishing; warehouse fleets and swarm logistics; autonomous forklifts/AGVs and tractor–trailer control; robust bin picking and assembly with multi-view/active perception; long-horizon, closed-loop deployments.
Article Types
Original research articles with strong theoretical and experimental validation (bench-top to clinical/field), emphasizing vision-in-the-loop autonomy.
System and integration papers demonstrating deployable, vision-driven, closed-loop performance in real applications.
Survey/tutorial papers synthesizing state of the art at the intersection of vision, learning, and control, with clear roadmaps for future research.
Benchmark/dataset papers that enable reproducibility and accelerate vision-based robotics, including protocols, metrics, code, and models.
Intended Audience
Researchers and practitioners in robotics, computer vision, control, and AI/ML for robotics; surgical/medical robotics; industrial and field automation; and human–robot interaction and teleoperation.
Dedication
It is an unforgettable memory and a great pleasure for many of us to have collaborated with Prof. Yunhui Liu—and, for some, to have worked under his mentorship. In deepest respect for his strong and inquiring mind, his enthusiasm for scientific inquiry, and his passion for education, we dedicate this Special Issue to him.
Last updated by Dou Sun on 2025-12-18
Related Journals
| CCF | Full Name | Impact Factor | Publisher | ISSN |
|---|---|---|---|---|
| | Smart Learning Environments | 12.1 | Springer | 2196-7091 |
| b | IEEE Transactions on Robotics | 10.5 | IEEE | 1552-3098 |
| | IEEE Robotics & Automation Magazine | 7.2 | IEEE | 1070-9932 |
| | BioData Mining | 6.1 | Springer | 1756-0381 |
| | ACM Transactions on Human-Robot Interaction | 5.5 | ACM | 2573-9522 |
| | International Journal of Robotics Research | 5.0 | SAGE | 0278-3649 |
| | Journal of Computer Assisted Learning | 4.6 | Wiley-Blackwell | 0266-4909 |
| b | Machine Learning | 2.9 | Springer | 0885-6125 |
| | Robotica | 2.7 | Cambridge University Press | 0263-5747 |
| | Journal of Robotics | 1.4 | Hindawi | 1687-9600 |
Related Conferences
| CCF | CORE | QUALIS | Short | Full Name | Submission | Notification | Conference |
|---|---|---|---|---|---|---|---|
| b | a* | a2 | COLT | Annual Conference on Learning Theory | 2026-02-04 | 2026-05-04 | 2026-06-29 |
| a | a* | a1 | ICML | International Conference on Machine Learning | 2026-01-23 | | 2026-07-06 |
| c | b | b1 | ALT | International Conference on Algorithmic Learning Theory | 2025-10-02 | 2025-12-18 | 2026-02-23 |
| | | b3 | ICWL | International Conference on Web-based Learning | 2025-09-30 | 2025-10-20 | 2025-11-30 |
| b | b | a1 | ICRA | International Conference on Robotics and Automation | 2025-09-15 | | 2026-06-01 |
| b | a* | b1 | WSDM | International Conference on Web Search and Data Mining | 2025-08-07 | 2025-10-23 | 2026-02-22 |
| | | a2 | HRI | International Conference on Human-Robot Interaction | 2024-09-23 | 2024-12-02 | 2025-03-04 |
| c | b | | ACML | Asian Conference on Machine Learning | 2024-06-26 | 2024-09-04 | 2024-12-05 |
| | | b4 | ICBL | International Conference on Blended Learning | 2017-02-28 | 2017-03-15 | 2017-06-27 |
| | a | | ISR | International Symposium on Robotics | | | 2018-06-20 |