Author: Dr. Stefan Dörr
Autonomy: Game-changer or gimmick?
The concept of “autonomy” in the context of mobile robots has sparked a contentious debate. At the heart of the discussion are two terms: Automated Guided Vehicles (AGVs) and Autonomous Mobile Robots (AMRs). AGVs are associated with the long-established technology of line-guided driverless transport vehicles, while AMRs are equipped with new, primarily software-based capabilities and are considered the latest generation of vehicles. Supporters of AMRs claim that these advanced features have the potential to revolutionize how mobile robots are used and operated, while opponents argue that they are merely a superficial novelty and add little value in highly organized and structured industrial environments. So, what lies at the crux of this debate?
What is ‘autonomy’?
To avoid any philosophical confusion, it’s essential to establish a technical definition of autonomy. According to the US National Institute of Standards and Technology (NIST), autonomy is when “a robot completes its assigned mission within a specific scope, without human intervention, while adapting to environmental and operational conditions.” While this definition acknowledges the context and key features, it implies that autonomy is a binary state, i.e., the robot is either autonomous or not. A more suitable approach is to consider the concept of an “autonomy level,” which encompasses the various capabilities and features that increase the robot’s independence from predefined conditions, peripheral equipment, and infrastructure. It also allows for a more nuanced understanding of the distinction between AGVs and AMRs. Rather than categorizing them as separate entities, we can place them on a continuous spectrum from AGV to AMR, with increasing levels of autonomy.
Do we need autonomy?
Now that we share a common understanding of what the term autonomy means, we need to address the question of whether it is required or useful. Our definition above essentially states that certain capabilities or features increase the autonomy level and thereby help the robot cope with unforeseen situations and changes. So let’s look at a few examples and examine their implications.
1. Contour or natural feature-based localization describes approaches for localizing the robot without adjusting the environment or adding artificial markers to it, using the robot’s onboard sensors (typically lidars or cameras) and SLAM (Simultaneous Localization and Mapping)-based methods, i.e., the map is generated using SLAM. Clearly, these approaches reduce the dependency on specific peripherals (mostly the markers) and thereby increase the robot’s ability to localize robustly in environments where markers might be occluded, damaged, or removed. This makes the robots more flexible and versatile with regard to the environments they can operate in. The major advantage, however, lies in the reduced commissioning and maintenance effort, as there is no need to equip the environment with markers. Last but not least, it also reduces the skills needed to set up the system, enabling quickly trained staff to take over the commissioning instead of highly skilled experts.
2. Live SLAM extends contour-based localization: the localization system not only generates the map with a SLAM component during commissioning but also keeps updating it during operation. Typical industrial environments are non-static and change over time, like a warehouse where pallets and boxes are constantly moved around. A map generated once during commissioning therefore continually diverges from the actual environment, leading to a degradation of the localization system. With a Live SLAM approach, the changes in the environment seen by the robot’s sensors are tracked and the underlying map is updated, making localization less prone to failure in changing environments (a minimal sketch of map-based localization with such a live map update follows after this list). Autonomy clearly increases because one part of the robotic system (here, the localization system) is now able to cope with changes in the environment, which again broadens the range of environments the robot can be used in and reduces the maintenance effort of manual remapping cycles.
3. Dynamic path planning and obstacle avoidance is the feature most commonly associated with autonomy and AMRs: the capability to plan around obstacles detected at runtime with the robot’s onboard sensors. Obviously, and this is also the main argument of AMR enthusiasts, it increases the robot’s ability to operate in dynamic, more ‘chaotic’ environments and to react to unforeseen situations. Opponents might argue, on the other hand, that such situations should not happen in the first place and rather point to organizational issues in the warehouse or factory, where nothing should stand in the robot’s way. Certainly, both parties have a point. In the end, it mostly depends on the application and the intralogistics process whether this feature helps or merely compensates for a symptom of an underlying issue (a badly organized environment). My argument in favor of the feature therefore goes in a different direction. The ability to dynamically optimize the path reduces the initial effort of defining routes and paths, which can be complex considering the robot’s kinematic constraints and shape, the dimensions of its safety fields, and the environment. With dynamic path planning capabilities, we can still tell the robot which route to take without having to precisely define the trajectory. We can even tell it to what extent it is allowed to deviate from the given route, thereby configuring the autonomy level of path execution (the planning sketch after this list illustrates such a deviation corridor). This significantly reduces commissioning times and makes it easy to adjust routes and the layout, even without expert skills.
4. Perception-based docking: The primary function of AGVs and AMRs is load transportation, often involving docking or attachment to infrastructure such as conveyor belts, transfer stations, or loads like pallets, boxes, or carts. Contemporary solutions differ significantly in how these docking maneuvers are configured or taught. Traditional approaches “hard-code” the position of the dock, pallet, or cart in the map and guide the robot to that position without sensing. This requires high precision and leaves no room for variation in position. In contrast, perception-based methods utilize the robot’s onboard sensors to detect the relevant objects and compute relative positions for precise docking maneuvers based on that information (the last sketch below illustrates this relative pose computation). This approach tolerates a certain variation in the object’s position, making the robot more robust against slight placement deviations. It also offers greater flexibility and potentially greater efficiency in process design: humans or other machines involved in the process can be less precise when placing a cart or pallet, making the process faster and cheaper.
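To make the mechanism behind items 1 and 2 more tangible, here is a deliberately simplified Python sketch (not the implementation of any particular product): it scores candidate poses by how well the lidar endpoints overlap the occupied cells of a 2D grid map, and then reinforces the map at the observed endpoints so that the map follows changes in the environment. The Grid class, the scoring rule, and the update increment are all illustrative assumptions.

```python
import math
import numpy as np

class Grid:
    """Illustrative 2D occupancy grid in log-odds form (0 = unknown)."""
    def __init__(self, size=200, resolution=0.05):
        self.res = resolution                   # metres per cell
        self.log_odds = np.zeros((size, size))

    def to_cell(self, x, y):
        return int(round(x / self.res)), int(round(y / self.res))

    def in_bounds(self, i, j):
        return 0 <= i < self.log_odds.shape[0] and 0 <= j < self.log_odds.shape[1]

    def occupied(self, x, y):
        i, j = self.to_cell(x, y)
        return self.in_bounds(i, j) and self.log_odds[i, j] > 0.5

def transform(scan_xy, pose):
    """Move lidar endpoints from the robot frame into the map frame."""
    x, y, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in scan_xy]

def localize(grid, scan_xy, candidate_poses):
    """Contour-based localization in its crudest form: pick the candidate
    (x, y, theta) whose transformed scan endpoints hit the most occupied cells."""
    def score(pose):
        return sum(grid.occupied(mx, my) for mx, my in transform(scan_xy, pose))
    return max(candidate_poses, key=score)

def update_map(grid, scan_xy, pose, hit_increment=0.4):
    """Live-SLAM-style update: add occupancy evidence at the observed endpoints
    so the map keeps tracking changes in the environment."""
    for mx, my in transform(scan_xy, pose):
        i, j = grid.to_cell(mx, my)
        if grid.in_bounds(i, j):
            grid.log_odds[i, j] += hit_increment
```

A real system would of course use a proper scan matcher or particle filter, clear free space along each beam, and handle loop closure; the sketch only shows why marker-free localization and an up-to-date map go hand in hand.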
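The “allowed deviation from the given route” in item 3 can be pictured as a corridor constraint on the planner. The sketch below uses a plain breadth-first search on a grid (rather than a production-grade planner) and only expands cells that are free and lie within a configurable distance of the reference route; all names, including plan_in_corridor and max_deviation, are illustrative.

```python
from collections import deque

def plan_in_corridor(occupied, route, start, goal, max_deviation=2):
    """Breadth-first search that only expands free grid cells lying within
    max_deviation cells (Manhattan distance) of the reference route, so the
    robot may bypass an obstacle without leaving its corridor."""
    def near_route(cell):
        return any(abs(cell[0] - r[0]) + abs(cell[1] - r[1]) <= max_deviation
                   for r in route)

    frontier, came_from = deque([start]), {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            break
        x, y = current
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in came_from or nxt in occupied or not near_route(nxt):
                continue
            came_from[nxt] = current
            frontier.append(nxt)

    if goal not in came_from:
        return None  # no path inside the allowed corridor
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]

# Example: a blocked cell on the nominal route forces a small detour.
route = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
print(plan_in_corridor(occupied={(2, 0)}, route=route,
                       start=(0, 0), goal=(4, 0), max_deviation=1))
```

Setting max_deviation to zero makes the robot stop in front of an obstacle, AGV-style; increasing it grants the robot more freedom to resolve the situation on its own.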
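Finally, the relative pose computation in item 4 can be sketched as follows, assuming the perception stack already reports the pallet’s position and orientation in the robot’s frame; the Pose2D type, the standoff distance, and the frame conventions are illustrative assumptions, not a vendor API.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float      # metres, expressed in the robot frame
    y: float      # metres, expressed in the robot frame
    theta: float  # radians, direction the pallet's open face points

def docking_targets(pallet: Pose2D, standoff: float = 0.6):
    """Derive a pre-dock pose in front of the pallet and the final dock pose
    from the *measured* pallet pose, not a position hard-coded in the map."""
    dx, dy = math.cos(pallet.theta), math.sin(pallet.theta)
    pre_dock = Pose2D(pallet.x + standoff * dx,   # standoff metres in front of the face
                      pallet.y + standoff * dy,
                      pallet.theta + math.pi)     # robot faces the pallet
    dock = Pose2D(pallet.x, pallet.y, pallet.theta + math.pi)
    return pre_dock, dock

# A pallet detected roughly 2 m ahead, slightly to the left, rotated by 10 degrees:
pre, final = docking_targets(Pose2D(2.0, 0.3, math.radians(190)))
print(pre, final)
```

Because both targets are derived from the measured pallet pose, a pallet placed a few centimetres or degrees off its nominal spot simply shifts the targets accordingly instead of breaking the maneuver.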
It is important to note that these four features are just examples, and more are in the pipeline thanks to recent advances in machine learning, computer vision, and robotics. Such capabilities generally increase the autonomy level of the robotic system, adding value in terms of versatility, applicability, flexibility, reduced commissioning and maintenance costs, and reduced expertise required to operate the robot. The significance and usefulness of individual features depend on the specific application. Having these capabilities onboard, with an easy way to turn them on or off and configure them for the application’s needs, gives operators the full range of possibilities (the configuration sketch below illustrates the idea). In other words, an AMR can always operate as an AGV, but the reverse is not true. This is the great value of autonomy and AMRs. It opens up new possibilities (with more to come) for deploying and operating mobile robots and puts the operator back in the driver’s seat to take advantage of them. Autonomy for robots thus brings autonomy back to the operators, who can more easily set up, modify, and run the overall system. Does this sound like a gimmick?
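To give the “turn them on or off and configure them” idea a concrete shape, here is a purely illustrative sketch of per-application feature profiles; the keys and values do not correspond to any real product’s configuration schema.

```python
# Purely illustrative feature profiles; none of these keys come from a real
# product configuration. The point is that the same robot can run with every
# adaptive capability enabled (AMR-style) or disabled (AGV-style).
amr_profile = {
    "localization": {"mode": "natural_features", "live_slam": True},
    "path_planning": {"dynamic": True, "max_route_deviation_m": 0.5},
    "docking": {"perception_based": True, "position_tolerance_m": 0.10},
}

agv_profile = {
    "localization": {"mode": "natural_features", "live_slam": False},
    "path_planning": {"dynamic": False, "max_route_deviation_m": 0.0},
    "docking": {"perception_based": False, "position_tolerance_m": 0.0},
}
```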
TL;DR
Instead of defining autonomy, and likewise AGV vs. AMR, as a binary state, it is more suitable to talk about an autonomy level that certain features raise, such as natural feature localization, Live SLAM, dynamic path planning, and more. The actual value of these capabilities lies in more versatile and flexible usage as well as in reduced commissioning and maintenance effort. However, whether a certain feature actually adds value or rather disturbs the overall process depends on the application. The real game-changing value of autonomy is having the option to activate and configure the individual features in a simple and intuitive way.
Visit www.node-robotics.com for more information.