Journal Information
Robot Learning
https://www.elspub.com/journals/rl/home
Publisher:
ELSP
ISSN:
2960-1436
Views:
1366
Tracking:
0
Call for Papers
Robot Learning is a peer-reviewed open access journal that publishes original work in all areas of robot learning, spanning both theoretical research and practical applications.

Topics of interest include but are not limited to the following:

    Learning-enhanced perception for robotics
    Learning-enhanced robot planning
    Learning-enhanced robot manipulation
    Learning-enhanced robot control
    Learning methods for human-robot coordination
    Learning methods for multi-robot cooperation or confrontation
    Learning methods for self-driving
    Learning-based robotic warehousing
    Learning-enhanced intelligent transportation
    Learning methods for bionic robotics and medical robotics
    Learning methods for UAVs and USVs
    Deep learning, imitation learning, and reinforcement learning for robotic systems
    Sim-to-real transfer for robotic applications
    Datasets, benchmarks, and simulators for robot learning
    Applications of robot learning
Last updated by Dou Sun on 2025-11-28
Special Issues
Special Issue on Learning Based Robot Path and Task Planning
Submission Date: 2026-01-20

The path/task planning problem is one of the most fundamental problems in robotics. Path planning requires generating the shortest path for a robot from a given starting point to a target point while satisfying spatial constraints. Multiple factors must be considered, from understanding the task requirements and modeling the robot's dynamics and environment to defining the goal and any constraints; collision detection for the planned trajectory/path is also necessary. Task planning requires finding a sequence of primitive motion commands that solves a given task: on each robot, a task planner automatically converts the robot's world model and skill definitions into a planning problem, which is then solved to find the sequence of actions the robot should perform to complete its mission. Many traditional path/task planning methods have been developed for robotics, and recent advances in AI are having an increasing impact on path/task planning research. For example, large language models (LLMs) have been used to augment robotic path/task planning alongside traditional methods such as A* and reinforcement learning (a minimal A* sketch is given after the topic list below). Because the real world is largely uncertain and dynamic, robotic path/task planning needs to adapt to uncertainty and change; this is especially important in safety-critical applications, e.g., robots operating in our living environments and field robots such as underwater robots operating in hazardous environments. The aim of this research topic is to cover recent advances and trends in path/task planning for robotics. Areas to be covered include, but are not limited to:

    Deep reinforcement learning for robotic path and task planning in simulation and on real robot platforms
    LLM-augmented robotic path and task planning
    Robotic path/task planning with adaptive world models
    Human-centered reinforcement learning, imitation learning, learning from demonstration, and learning from observation for robotic path and task planning
    Deep learning approaches for robotic motion planning
    Safe reinforcement learning for robotic path and task planning
    Learning-based task planning for multi-robot systems
    Other related topics
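As context for the traditional methods named above, the following is a minimal sketch of grid-based A* path planning in Python. The occupancy grid, 4-connected moves, Manhattan heuristic, and the function name astar are illustrative assumptions for this sketch, not material from the call.

    # Minimal grid-based A* path planner (illustrative sketch, not from this call).
    import heapq

    def astar(grid, start, goal):
        """Shortest 4-connected path on a 0/1 occupancy grid (1 = obstacle)."""
        rows, cols = len(grid), len(grid[0])
        heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
        open_set = [(heuristic(start), 0, start, None)]  # (f = g + h, g, cell, parent)
        parents, g_cost = {}, {start: 0}
        while open_set:
            _, g, cell, parent = heapq.heappop(open_set)
            if cell in parents:          # already expanded at an equal or lower cost
                continue
            parents[cell] = parent
            if cell == goal:             # reconstruct the path by walking parents back
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parents[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    ng = g + 1           # unit cost per move
                    if ng < g_cost.get((nr, nc), float("inf")):
                        g_cost[(nr, nc)] = ng
                        heapq.heappush(open_set, (ng + heuristic((nr, nc)), ng, (nr, nc), cell))
        return None                      # no collision-free path exists

    grid = [[0, 0, 0, 0],
            [0, 1, 1, 0],
            [0, 0, 0, 0]]
    print(astar(grid, (0, 0), (2, 3)))  # [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 3)]

The admissible heuristic is what separates A* from plain Dijkstra here: it steers expansion toward the goal without sacrificing optimality of the returned path.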
Last updated by Dou Sun on 2025-11-28
Special Issue on Intelligent Vision-Driven Robotics
Submission Date: 2026-07-31

Special Issue Editors
Dr. Peng Zhou, Great Bay University, China
Prof. David Navarro-Alarcon, The Hong Kong Polytechnic University, China

About This Special Issue

Aims and Motivation
Robotics is converging on an intelligent, vision-driven paradigm where precise geometry, robust control, and data-driven adaptation co-exist and reinforce one another. Prof. Liu's oeuvre exemplifies this synthesis, from early uncalibrated visual servoing and grasp theory to modern soft/surgical autonomy and large-scale SLAM, providing a unifying backbone for next-generation robots that are safe, dexterous, and reliable in unstructured, visually complex environments.

This invitation-only Special Issue honors Prof. Yunhui Liu's enduring impact on intelligent, vision-driven robotics. His work, spanning uncalibrated visual servoing, grasping and fixturing theory, motion planning, soft and continuum manipulation, surgical robotics, large-scale SLAM and 3D vision, networked teleoperation, and learning-enabled autonomy, has consistently connected rigorous theory with deployable, closed-loop systems, closing the loop between sensing and action in real-world environments. The collection is authored by Prof. Liu's friends, former students, and close collaborators, and reflects his profound influence on vision-centered robotic intelligence.

Submission Policy
    Invitation-first. This Special Issue is primarily invitation-based; invited manuscripts will be reviewed on a rolling basis.
    Inquiries welcome. If you haven't received an invitation but believe your work is a strong fit for vision-driven robotics, you're welcome to email the Guest Editors with a brief summary. Depending on space and scope, we may be able to extend additional invitations.
    Authors are encouraged to add a brief note on the relationship between their submission and Prof. Liu's academic work, e.g., which ideas, methods, or perspectives served as motivation or inspiration, consistent with the article type.

Scope and Themes
We welcome contributions that tightly integrate visual perception (including 3D geometry) with control and learning to achieve robust, generalizable, and deployable autonomy (a minimal image-based visual servoing sketch is given at the end of this call).
    Visual servoing and perception-driven control: Uncalibrated/model-free schemes; eye-in-hand and fixed-camera control; observers without visual velocity; nonholonomic/mobile visual control; task-oriented and invariant visual features.
    Grasping, fixturing, and dexterous manipulation: Vision- and tactility-informed grasp analysis and fixture design; multimodal sensing; soft/variable-stiffness hands; compliant/origami-inspired grippers; in-hand and textile/cable manipulation with visual feedback; geometry-aware policies.
    Deformable, soft, and continuum robots: Visual/FBG-based shape sensing and reconstruction; deformation and shape servoing; constrained-environment modeling and control; hybrid model–data methods for perception–control fusion.
    Surgical robotics and medical applications: Vision-centric autonomy in MRI/OR-integrated systems; autonomous endoscopic view control; instrument/tissue perception and 3D reconstruction (stereo/NeRF/Gaussian splatting); integrated perception–planning–control for safe task autonomy.
    SLAM, 3D vision, and geometric learning: Point/line/vanishing-point geometry; LiDAR/visual–inertial/edge-based SLAM; transparent/reflective/medical surface reconstruction; calibration and metrology; neural and geometric scene representations for control.
    Networked and human-in-the-loop robotics: Internet-based teleoperation with haptics and QoS; cooperative teleoperation; AR/gaze-based interaction; shared autonomy with intent inference; distributed estimation and coordination for multi-robot systems.
    Learning for vision-driven autonomy: Self-/weakly supervised visual representations for video and 3D; RL and imitation for manipulation, surgery, and locomotion with visual feedback; sim-to-real transfer; transformer/graph models coupling perception with planning and control; grounding policies in geometric priors.
    Field and industrial robotics: Vision-centric construction and finishing; warehouse fleets and swarm logistics; autonomous forklifts/AGVs and tractor–trailer control; robust bin picking and assembly with multi-view/active perception; long-horizon, closed-loop deployments.

Article Types
    Original research articles with strong theoretical and experimental validation (bench-top to clinical/field), emphasizing vision-in-the-loop autonomy.
    System and integration papers demonstrating deployable, vision-driven, closed-loop performance in real applications.
    Survey/tutorial papers synthesizing the state of the art at the intersection of vision, learning, and control, with clear roadmaps for future research.
    Benchmark/dataset papers that enable reproducibility and accelerate vision-based robotics, including protocols, metrics, code, and models.

Intended Audience
Researchers and practitioners in robotics, computer vision, control, and AI/ML for robotics; surgical/medical robotics; industrial and field automation; and human-robot interaction and teleoperation.

Dedication
It is an unforgettable memory and a great pleasure for many of us to have collaborated with Prof. Yunhui Liu and, for some, to have worked under his mentorship. In deepest respect for his strong and inquiring mind, his enthusiasm for scientific inquiry, and his passion for education, we dedicate this Special Issue to him.
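As background for the visual servoing theme above, here is a minimal sketch of classical image-based visual servoing (IBVS) for point features, written in Python with NumPy. The feature values, depths, control gain, and function names are illustrative assumptions; the control law itself is the standard textbook form v_c = -lambda * pinv(L) * (s - s*).

    # Minimal image-based visual servoing (IBVS) sketch for point features.
    # Classical law: v_c = -gain * pinv(L) @ (s - s*), with L the interaction
    # (image Jacobian) matrix; points, depths Z, and the gain are assumptions.
    import numpy as np

    def interaction_matrix(points, Z):
        """Stack the 2x6 interaction matrix rows of each normalized point (x, y)."""
        rows = []
        for (x, y), z in zip(points, Z):
            rows.append([-1.0 / z, 0.0, x / z, x * y, -(1.0 + x * x), y])
            rows.append([0.0, -1.0 / z, y / z, 1.0 + y * y, -x * y, -x])
        return np.array(rows)

    def ibvs_velocity(s, s_star, Z, gain=0.5):
        """Camera twist (vx, vy, vz, wx, wy, wz) driving features s toward s_star."""
        error = (s - s_star).reshape(-1)           # stacked feature error
        L = interaction_matrix(s, Z)               # evaluated at the current features
        return -gain * np.linalg.pinv(L) @ error   # least-squares control law

    # Toy example: four current points (a square), desired points, assumed depths (m).
    s = np.array([[0.10, 0.10], [-0.10, 0.10], [-0.10, -0.10], [0.10, -0.10]])
    s_star = 0.5 * s                # desired image: the same square, half the size
    Z = np.full(4, 1.0)             # assumed constant depth of 1 m for every point
    print(ibvs_velocity(s, s_star, Z))

By the symmetry of this toy configuration, the computed twist is essentially a pure translation along the optical axis, the motion that shrinks the observed square toward the desired one; uncalibrated schemes of the kind pioneered by Prof. Liu estimate the interaction matrix online instead of assuming known depths and intrinsics.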
Last updated by Dou Sun on 2025-12-18
Related Journals
CCF   Full Name                                      Impact Factor   Publisher                    ISSN
      Smart Learning Environments                    12.1            Springer                     2196-7091
b     IEEE Transactions on Robotics                  10.5            IEEE                         1552-3098
      IEEE Robotics & Automation Magazine            7.2             IEEE                         1070-9932
      BioData Mining                                 6.1             Springer                     1756-0381
      ACM Transactions on Human-Robot Interaction    5.5             ACM                          2573-9522
      International Journal of Robotics Research     5.0             SAGE                         0278-3649
      Journal of Computer Assisted Learning          4.6             Wiley-Blackwell              0266-4909
b     Machine Learning                               2.9             Springer                     0885-6125
      Robotica                                       2.7             Cambridge University Press   0263-5747
      Journal of Robotics                            1.400           Hindawi                      1687-9600
Related Conferences
CCF   CORE   QUALIS   Abbrev.   Full Name                                                  Submission Date   Notification Date   Conference Date
b     a*     a2       COLT      Annual Conference on Learning Theory                       2026-02-04        2026-05-04          2026-06-29
a     a*     a1       ICML      International Conference on Machine Learning               2026-01-23                            2026-07-06
c     b      b1       ALT       International Conference on Algorithmic Learning Theory    2025-10-02        2025-12-18          2026-02-23
             b3       ICWL      International Conference on Web-based Learning             2025-09-30        2025-10-20          2025-11-30
b     b      a1       ICRA      International Conference on Robotics and Automation        2025-09-15                            2026-06-01
b     a*     b1       WSDM      International Conference on Web Search and Data Mining     2025-08-07        2025-10-23          2026-02-22
             a2       HRI       International Conference on Human-Robot Interaction        2024-09-23        2024-12-02          2025-03-04
c     b               ACML      Asian Conference on Machine Learning                       2024-06-26        2024-09-04          2024-12-05
             b4       ICBL      International Conference on Blended Learning               2017-02-28        2017-03-15          2017-06-27
      a               ISR       International Symposium on Robotics                                                              2018-06-20