

NODE T-Talk with Abhilash Nand Kumar

Meet: Abhilash Nand Kumar

Abhilash started working as an application engineer for NODE in 2021. He has always been drawn to the idea of developing systems that operate and respond based on logical principles. During his time as a bachelor's student, he enrolled in an online course that introduced him to the fundamentals of Robotics and Computer Vision. This passion led him to pursue a master's degree in Robotics and complete several internships. After graduation, he joined our team here at NODE, where he continues to apply and expand his knowledge in this exciting field.


Hi Abhilash, tell us about your daily work. What do you do at NODE? 

As a team lead for Computer Vision, my day is generally divided between writing code and overseeing the development efforts of our team. I spend a significant amount of time working on our core navigation modules, which are responsible for enabling robots to navigate unknown environments. This involves writing code in ROS and C++, as well as coordinating with other members of the team to ensure that we are meeting our goals and deadlines.


What are some of the most important skills you use in your work as a Software Engineer, and how did you develop them? 

One of my most important skills is problem-solving. I have developed it by practicing critical thinking, analyzing complex issues, and brainstorming creative solutions. Furthermore, I seek feedback from my team members and learn from their approaches.

In addition to that, I think collaboration is an essential skill for making great products. I try to work on it by taking part in cross-functional projects, building relationships with stakeholders, and practicing effective teamwork.

Lastly, I stay up to date by reading research papers, following discussions online, and looking at Robotics communities outside AMRs, such as Autonomous Driving, to draw inspiration from them.


As you mentioned before, you have been a team lead since February 2022.
What do you think are the most important skills for this position?

As a team lead in computer vision, there are several skills I rely on to manage our team effectively and ensure successful project outcomes.

The first is the technical expertise I provide for my team, which allows me to offer guidance and support when team members encounter challenges.

Equally important are communication and collaboration, which I need to manage my team: setting goals, delegating tasks, and providing feedback and guidance. This includes clearly communicating project requirements and ensuring that team members are aware of their responsibilities and deadlines. On the other hand, I also provide updates to stakeholders, work with other teams, and make sure that the project stays on track and that all deliverables are met.


What technical challenges do you face in your job?

One of the main challenges we face is coming up with solutions that are not specific to a particular use case but are generalizable and reusable. While SLAM can be effective in static environments, the real world at our customer sites is dynamic, with environments and objects constantly changing. This presents new challenges, such as dynamic objects, occlusion, computational complexity, and sensor noise.

In dynamic environments, objects are constantly moving, and this causes problems for SLAM: if an object moves, it can introduce artifacts into the map, which in turn lead to positioning inaccuracies. Moving objects can also occlude other objects, making them invisible to the robot's sensors. This results in missing data, which can lead to localization and mapping errors.
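To make this concrete, here is a minimal, hypothetical sketch (not our production code) of one common mitigation: comparing two consecutive range scans taken from roughly the same pose and flagging beams whose range jumped sharply, since such returns likely belong to a moving object and should be kept out of the map update. The function name and threshold are illustrative.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// Flags beams whose range changed sharply between two consecutive scans
// taken from (approximately) the same pose; such returns likely belong
// to a moving object and should be excluded from map updates.
std::vector<bool> flagDynamicPoints(const std::vector<float>& prevRanges,
                                    const std::vector<float>& currRanges,
                                    float jumpThreshold /* meters */) {
    std::vector<bool> dynamic(currRanges.size(), false);
    for (std::size_t i = 0; i < currRanges.size() && i < prevRanges.size(); ++i) {
        if (std::fabs(currRanges[i] - prevRanges[i]) > jumpThreshold) {
            dynamic[i] = true;
        }
    }
    return dynamic;
}

int main() {
    std::vector<float> prev = {2.0f, 2.0f, 5.0f, 5.0f};
    std::vector<float> curr = {2.0f, 3.5f, 5.0f, 5.1f};  // beam 1 jumped: a mover
    std::vector<bool> mask = flagDynamicPoints(prev, curr, 0.5f);
    for (std::size_t i = 0; i < mask.size(); ++i)
        std::printf("beam %zu: %s\n", i, mask[i] ? "dynamic" : "static");
    return 0;
}
```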

To support a broad range of customer robots with varying compute capabilities, our algorithms must run as efficiently as possible while maintaining centimeter-level precision. This is the challenge of computational complexity.

The final challenge is sensor noise, as we strive to provide a plug-and-play solution that can accommodate a broad range of sensors, regardless of their type or noise level. These sensors are susceptible to measurement errors and noise, which can lead to incorrect data association and to misalignment between the estimated robot position and the map.
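As a rough illustration of one simple noise-suppression step, the hypothetical sketch below runs a three-tap median filter over a range scan before it is used for data association: a single-beam spike is rejected while genuine structure is preserved. This is just one of many possible techniques, not a description of our actual pipeline.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

// Replaces each interior range with the median of itself and its two
// neighbors, suppressing single-beam impulse noise before data association.
std::vector<float> medianFilter3(const std::vector<float>& ranges) {
    if (ranges.size() < 3) return ranges;
    std::vector<float> out(ranges);
    for (std::size_t i = 1; i + 1 < ranges.size(); ++i) {
        float w[3] = {ranges[i - 1], ranges[i], ranges[i + 1]};
        std::sort(w, w + 3);
        out[i] = w[1];  // the median rejects a lone outlier
    }
    return out;
}

int main() {
    std::vector<float> noisy = {2.00f, 2.01f, 9.50f, 2.02f, 2.03f};  // spike at beam 2
    for (float r : medianFilter3(noisy)) std::printf("%.2f ", r);
    std::printf("\n");
    return 0;
}
```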


Can you explain in simple terms what the modules included in our Robot Autonomy Skills do, and how they help robots navigate unknown environments?

The first step in deploying an AMR in an environment is to generate a map representation of that environment. By building a map, a robot can better understand where it is with respect to its surroundings and plan its motion accordingly. Localization refers to the ability of a robot to determine its position and orientation relative to a known map of its environment. It is important because without knowing where it is, a robot cannot accurately plan its movements or carry out tasks reliably. This is particularly important in situations where the environment is constantly changing, such as in a warehouse or manufacturing plant.

Our module supports a lifelong SLAM approach, which continuously updates the map of the environment as the robot drives through it. Together, localization and mapping are the prerequisites that enable a robot to navigate through unknown spaces, avoid obstacles, and perform tasks more efficiently and with greater precision.
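For readers who want to see the core idea in code, here is a deliberately stripped-down, hypothetical sketch of occupancy-grid mapping: given a known robot pose, each lidar beam endpoint raises the occupancy belief (stored as log-odds) of the cell it hits. A real SLAM system would also lower the belief of cells along the beam and estimate the pose itself; the names and update constant here are illustrative.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// A minimal occupancy grid: each cell stores a log-odds occupancy belief.
struct Grid {
    int width, height;
    float resolution;            // meters per cell
    std::vector<float> logOdds;  // 0 = unknown, >0 occupied, <0 free

    Grid(int w, int h, float res)
        : width(w), height(h), resolution(res), logOdds(w * h, 0.0f) {}

    // Raise the occupancy belief of the cell containing a beam endpoint.
    void markHit(float x, float y) {
        int cx = static_cast<int>(x / resolution);
        int cy = static_cast<int>(y / resolution);
        if (cx < 0 || cy < 0 || cx >= width || cy >= height) return;
        logOdds[cy * width + cx] += 0.9f;  // illustrative evidence weight
    }
};

int main() {
    Grid grid(100, 100, 0.05f);              // a 5 m x 5 m map at 5 cm cells
    float rx = 2.5f, ry = 2.5f, yaw = 0.0f;  // known robot pose (x, y, heading)
    float range = 1.2f, bearing = 0.3f;      // one beam from a lidar scan
    grid.markHit(rx + range * std::cos(yaw + bearing),
                 ry + range * std::sin(yaw + bearing));
    std::printf("cells stored: %zu\n", grid.logOdds.size());
    return 0;
}
```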


What new features is your team currently working on to make robot detection and docking easier?

Detection and docking are key functions for mobile robots, allowing them to navigate autonomously to a point of interest for cargo pickup or drop-off, or to dock with a charging station. These functions are usually designed for specific use cases on a robot-by-robot basis. We are currently developing several features to make them more widely applicable, so that our customers can detect and dock their robots to objects that were previously considered too complex or irregularly shaped. Overall, these features will greatly improve the versatility of mobile robots, allowing them to be used in a wider range of applications and environments.
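As a toy example of what "detection" can mean here, the hypothetical sketch below searches a 2D range scan for a contiguous cluster of returns whose apparent width matches a known docking target, reporting its range and bearing. Real targets, feature models, and matching logic are far richer; every name and constant below is illustrative.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Detection {
    bool found;
    float range;    // meters to the cluster center
    float bearing;  // radians relative to the scan frame
};

// Scans for a contiguous cluster of returns whose apparent width matches
// the known docking target. ranges[i] is the return at bearing
// angleMin + i * angleStep (radians).
Detection findDockTarget(const std::vector<float>& ranges, float angleMin,
                         float angleStep, float targetWidth, float tolerance) {
    std::size_t start = 0;
    for (std::size_t i = 1; i <= ranges.size(); ++i) {
        // A range discontinuity (or the end of the scan) closes a cluster.
        bool closeCluster = (i == ranges.size()) ||
                            std::fabs(ranges[i] - ranges[i - 1]) > 0.2f;
        if (!closeCluster) continue;
        std::size_t n = i - start;
        if (n >= 2) {
            float meanRange = 0.0f;
            for (std::size_t j = start; j < i; ++j) meanRange += ranges[j];
            meanRange /= static_cast<float>(n);
            float width = meanRange * angleStep * (n - 1);  // arc ~ chord
            if (std::fabs(width - targetWidth) < tolerance) {
                float bearing = angleMin + angleStep * (start + (n - 1) * 0.5f);
                return {true, meanRange, bearing};
            }
        }
        start = i;
    }
    return {false, 0.0f, 0.0f};
}

int main() {
    // A wall at 4 m with a 0.5 m-wide dock plate at 1 m in the middle.
    std::vector<float> scan(90, 4.0f);
    for (int i = 30; i < 60; ++i) scan[i] = 1.0f;
    Detection d = findDockTarget(scan, -0.785f, 0.0175f, 0.5f, 0.15f);
    std::printf("found=%d  range=%.2f m  bearing=%.3f rad\n",
                d.found, d.range, d.bearing);
    return 0;
}
```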


What are some of the benefits of developing collaborative map sharing and supporting new types of sensors, and are there challenges for your team?

One area where our customers can benefit greatly is by allowing robots to share information about their surroundings with each other. Real-time map sharing among multiple robots operating in the same environment can be useful in a variety of scenarios, such as in large warehouses or factories where multiple robots need to navigate and collaborate to complete tasks. This approach can improve operational efficiency by allowing multiple robots to work together without interfering with each other, as well as enhance map accuracy, especially in large and complex environments where a single robot may not be able to capture all the necessary data. 
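One way to picture collaborative map sharing: if two robots maintain occupancy grids in a common map frame, their beliefs can be fused cell by cell, since independent evidence simply adds in log-odds form. The sketch below is a hypothetical toy that ignores the hard parts (aligning the map frames, transporting the data, and resolving conflicts over time).

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Fuses two log-odds occupancy grids defined in the same map frame.
// Independent evidence adds in log-odds; disagreement cancels toward 0
// ("unknown").
std::vector<float> mergeLogOddsMaps(const std::vector<float>& a,
                                    const std::vector<float>& b) {
    std::vector<float> merged(a.size(), 0.0f);
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
        merged[i] = a[i] + b[i];
    }
    return merged;
}

int main() {
    std::vector<float> robotA = {0.0f, 1.2f, -0.5f};  // unknown, occupied, free
    std::vector<float> robotB = {0.8f, 1.0f, 0.5f};
    for (float v : mergeLogOddsMaps(robotA, robotB)) std::printf("%.1f ", v);
    std::printf("\n");
    return 0;
}
```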

We are actively working towards integrating data from different types of sensors, such as cameras, lidars, and IMUs, to overcome some of the limitations of 2D lidar SLAM and provide a more complete and accurate map of the environment. One of the main advantages of multi-sensor SLAM is that it can reduce the impact of sensor noise on the SLAM algorithm: by using multiple sensors that provide complementary information, the algorithm can filter out noise and obtain more accurate measurements. Another advantage is improved robustness in dynamic environments. By combining sensors that capture different types of motion, such as a lidar that can detect moving objects and an IMU that measures the robot's own motion, the algorithm can better track the robot's position and map the environment, even in the presence of moving objects.

However, multi-sensor SLAM also poses some challenges. One is the complexity of integrating data from multiple sensors, which requires careful calibration and synchronization of the sensor data. Another is the computational cost of the algorithm, as data fusion and filtering techniques can be computationally intensive.
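To illustrate the complementary-information idea in the simplest possible form, here is a hypothetical sketch of a one-dimensional complementary filter: a fast but drifting IMU yaw-rate integral is periodically pulled toward slower, absolute headings derived from lidar scan matching. Production fusion would estimate the full pose (for example with an EKF) and handle angle wrap-around; all names and gains here are illustrative.

```cpp
#include <cstdio>

// A one-dimensional complementary filter for heading: integrate the IMU
// yaw rate at high frequency, then nudge the estimate toward an absolute
// heading from lidar scan matching whenever one arrives.
class HeadingFilter {
public:
    explicit HeadingFilter(float gain) : gain_(gain) {}

    // Called at IMU rate: dead-reckon by integrating the measured yaw rate.
    void predict(float yawRate, float dt) { heading_ += yawRate * dt; }

    // Called at lidar rate: pull the estimate toward the absolute heading.
    // (Angle wrap-around is ignored to keep the sketch short.)
    void correct(float lidarHeading) {
        heading_ += gain_ * (lidarHeading - heading_);
    }

    float heading() const { return heading_; }

private:
    float heading_ = 0.0f;
    float gain_;  // in (0, 1]: how strongly lidar corrections are trusted
};

int main() {
    HeadingFilter filter(0.2f);
    for (int i = 0; i < 10; ++i) filter.predict(0.11f, 0.01f);  // biased gyro
    filter.correct(0.010f);  // scan matching says we barely turned
    std::printf("fused heading: %.4f rad\n", filter.heading());
    return 0;
}
```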


What are some of the most exciting trends or developments in computer vision and mapping technology that you see emerging in the near future?

In terms of emerging trends, I am particularly excited about the increasing number of AMR manufacturers who are now installing cameras and lidars on their robots. This means that we potentially have access to a wider variety of sensor data as input and can develop more robust algorithms around them. Although commercially manufactured AMRs with GPUs are not yet widespread, another area that excites me is the increased application of deep learning in perception and SLAM. I believe that in the medium to long term, these technologies will revolutionize the field and open new possibilities for the development of intelligent robots and autonomous systems. 


What advice would you give to someone who is interested in pursuing a career in Robotics?

If you’re thinking about going into Robotics, I’d say start by building your own projects. Even small ones can help you learn and improve your skills. Nowadays, you can find lots of resources online to get started, like tutorials and open-source software and hardware. As this field is still developing, there is always something new to discover and explore. Therefore, maintaining a curious mindset and being an active learner can be extremely beneficial.


_
The interview was conducted by Nadine Trommeshauser.