Stories

NODE T-Talk #01 with Christopher Zündorf

Meet: Christopher Zündorf

Christopher Zündorf has been Head of UI/UX at NODE since March 2021. Before joining NODE, he studied business informatics in Konstanz and gained experience at a range of companies – mid-sized firms such as Vector Informatik as well as corporations such as SAP, 1&1 and IBM. Startups, however, have always appealed to him. He likes the idea of helping to shape a company from the ground up and leaving the comfort zone of a larger employer. That’s why NODE, a young startup in an innovative industry, fit the bill.

Christopher and his team take care of developing the front-end application for NODE.OS. The goal is to visualize sensor data and the positions of individual robots as well as entire fleets of autonomous mobile robots (AMRs), and to make them controllable via the front end. In principle, this guides the user through the entire workflow – from setting up AMRs, creating maps and transferring them to other robots, to monitoring and generating missions.


Hi Christopher. Tell us about your daily work. What do you do at NODE? 

First of all, I lead the team of the UI/UX department. We develop the web front end for NODE.OS using TypeScript and Angular to visualize the robot data and send commands to the AMRs. In this role, I’m not only acting as a “people manager”, but I’m also doing some development myself. When my teammates have questions, we often solve the problems together in screen-sharing sessions. Our squad model distributes responsibility for issues across teams. For example, with the Live Environment Server or the Rosbag Player, we work closely with the Operations Chapter to find solutions.
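
To make this a bit more tangible, here is a minimal sketch of what sending a command to an AMR from an Angular front end could look like. The service name, endpoint and payload shape are all invented for this article, not NODE.OS internals:

```typescript
// Hypothetical sketch only: a small Angular service that posts a "move"
// command for one AMR to a fleet-management backend. The endpoint and
// payload shape are assumptions, not actual NODE.OS code.
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

export interface MoveCommand {
  robotId: string;
  // Target pose on the map: position in meters, heading in radians.
  target: { x: number; y: number; theta: number };
}

@Injectable({ providedIn: 'root' })
export class RobotCommandService {
  constructor(private http: HttpClient) {}

  // Send the command; the caller subscribes and reacts to success/failure.
  sendMove(cmd: MoveCommand): Observable<void> {
    return this.http.post<void>(`/api/robots/${cmd.robotId}/move`, cmd);
  }
}
```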


What are challenges you face in your job? 

In my team, almost 100% of the work is done remotely, and some team members work part-time. Under these conditions, it is definitely a challenge to distribute the necessary information properly. But we treat each other with great respect, which is why remote collaboration works amazingly well.


Is there a project from the past that has particularly helped you in your tasks at NODE? 

Yes, there is actually one. What has helped me the most in my role at NODE Robotics is in fact a private project: a 3D adventure framework based on web technologies with a pixel-art design. The knowledge I gained there quickly proved useful for the front-end UX of NODE.OS.

_

The direction of his later profession was clear to him early on. Even in his school days, he enjoyed designing websites. While still a student, he briefly switched to Java backend development, but was quickly drawn back to web technologies and mobile web development. Zündorf worked on UI frameworks for SAP and 1&1 for several years. Knowing how both sides of HTTP network communication work – along with SQL or MongoDB – is very helpful, which is why he has switched “sides” repeatedly throughout his career, mastering backend development in Java, PHP and JavaScript.

_


How do you build a user interface from scratch? 

We act according to the principle: “Make it work, make it awesome”. The approach is to create prototypes in source code rather than tinkering with concepts forever in Adobe XD. Starting from the prototypes, we then iterate with the user to determine the direction in which things should go. There are basic questions that should be considered: “How many clicks are needed to do XY?” It is also important to know the specific customer and their ideas. In which environment will the software be used? What does the customer want to achieve with it? Based on this knowledge, you can also formulate very good expectations in unit and visual tests. From a flow perspective, we often connect the UI to the backend/fleet management first and specify that interface. Then we create basic interaction elements like buttons and equip them with functions. We finish by formatting and polishing the UI once the basic functionality has been implemented and validated with unit tests. We also often run multiple rounds to keep improving individual dialogs and features. Currently, the UI team is working hard on improving the front-end landing page – and that actually takes days and weeks.
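
As a sketch of the “equip basic elements with functions first, polish later” step: here is what a bare-bones mission button could look like before any visual work. The component and the RobotCommandService it calls are the invented examples from above:

```typescript
// Illustrative only: an unpolished button that triggers a mission, wired up
// before any styling. Builds on the hypothetical RobotCommandService above.
import { Component } from '@angular/core';
import { RobotCommandService } from './robot-command.service';

@Component({
  selector: 'app-send-mission-button',
  template: `<button (click)="send()" [disabled]="busy">Send mission</button>`,
})
export class SendMissionButtonComponent {
  busy = false;

  constructor(private commands: RobotCommandService) {}

  send(): void {
    this.busy = true;
    // Hard-coded target pose for the prototype stage; a real dialog would
    // let the user pick the robot and destination.
    this.commands
      .sendMove({ robotId: 'amr-1', target: { x: 2.5, y: 1.0, theta: 0 } })
      .subscribe({
        complete: () => (this.busy = false),
        error: () => (this.busy = false),
      });
  }
}
```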


What were the major milestones that significantly improved the NODE UI? 

Major milestones were, for example, the complete representation of robot positions and movement as well as paths and zones. The correct connection to the fleet management and the 3D view of the AMR fleet were also of great importance. Last but not least, there is the implementation of the Rosbag Player for analyzing the calculated navigation paths. The current next step is the connection to the Live Environment Server to enable the exchange of map data between AMRs and the fleet management.


What tools/programs do you use and why?

Among my frequently used tools are Visual Studio Code for general software development, Inkscape for creating SVGs and graphics for the front end, and Notion for organizing product ideas and notes. Blender, which we use to create 3D models of the production scenario, is also worth mentioning.


Isn't it unusual to work with 3D models as a front-end designer?

Fortunately, not anymore. For the past few years, browsers have been able to do many things that previously only desktop applications could. WebGL and 3D rendering are no longer a problem, and of course you should take advantage of these possibilities. Fortunately, we have a very talented student who takes care of creating and editing the 3D models in Blender. I then take care of the integration into the web application.
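
For the curious, here is a minimal sketch of what such an integration can look like in the browser, assuming three.js and a Blender-exported glTF file. The library choice and the asset path are assumptions; the actual NODE.OS renderer is not described here:

```typescript
// Minimal sketch: render a Blender-exported glTF robot model with three.js.
// The asset path 'assets/amr.glb' is illustrative.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
camera.position.set(3, 3, 3);
camera.lookAt(0, 0, 0);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// Simple lighting so the model is visible.
scene.add(new THREE.AmbientLight(0xffffff, 0.8));

// Load the model that was created/edited in Blender.
new GLTFLoader().load('assets/amr.glb', (gltf) => scene.add(gltf.scene));

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```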


Where do you get the inspiration for the design, and how do you keep up to date?

Nowadays, there is no getting around YouTube. It offers many sources of inspiration, such as Tesla’s Full Self-Driving or Boston Dynamics’ robots. But you can also draw inspiration from video games; production planning and strategy games are very valuable in this respect. Our fleet management monitoring in particular is very reminiscent of a strategy game and, ideally, should be just as much fun. Daily discussions with colleagues also provide room for new ideas. It’s great how many ideas arise in conversation – unfortunately, there is still a lack of time and staff to implement them all.


Is there a difference in the approach to creating a user interface for mobile or desktop applications?

In general, the mobile-first approach is very beneficial here. If something works on a small screen, it will automatically work on the larger desktop. It is helpful to assume from the beginning that the application will not have at least 1024×768 pixels available. This automatically produces requirements for prioritizing the user interface: What do we hide? What must always remain visible? The less space is available on the screen, the more difficult the design of the user interface becomes.
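
A minimal sketch of such prioritization, assuming Angular CDK’s BreakpointObserver – the breakpoint value, the component, and what counts as “secondary” are illustrative:

```typescript
// Hedged sketch: hide secondary panels on narrow screens while the robot
// list stays visible. Breakpoint and component names are invented.
import { Component, inject } from '@angular/core';
import { CommonModule } from '@angular/common';
import { BreakpointObserver } from '@angular/cdk/layout';
import { map } from 'rxjs/operators';

@Component({
  selector: 'app-fleet-layout',
  standalone: true,
  imports: [CommonModule],
  template: `
    <!-- Must always remain visible. -->
    <nav>Robot list</nav>
    <!-- Hidden on small screens; the data stays reachable elsewhere. -->
    <aside *ngIf="!(compact$ | async)">Sensor details panel</aside>
  `,
})
export class FleetLayoutComponent {
  private breakpoints = inject(BreakpointObserver);

  // True when the viewport is narrower than a typical tablet width.
  compact$ = this.breakpoints
    .observe('(max-width: 768px)')
    .pipe(map((state) => state.matches));
}
```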


What kind of ideas could be interesting for the front-end development in the future?

Actually, I can imagine the user interface without the graphical part: many features could be implemented via voice control and audio output, at least as far as creating orders is concerned. For the representation of sensor data and robot positions, however, graphical display remains the means of choice – it makes no sense to read out a robot’s position like an audiobook. The situation is quite different in the other direction. To control a robot, one does not necessarily need a graphical user interface; it can also be controlled by speech. Speech is the first and most natural form of communication. Anyone can say, “Robot 1, please move to the high rack.” Systems like Siri and Amazon Echo have shown how simple and natural this form of user interface can be. Basically, the less knowledge and effort required to give the robot a command, the better. The possibilities of enabling control via cell phones and smartwatches are also interesting. In this way, the smartwatch could become the command center for an entire AMR fleet. That certainly sounds exciting.
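
As a thought experiment in code: the browser’s Web Speech API (here via Chrome’s prefixed webkitSpeechRecognition) could turn such an utterance into a command. The grammar and the dispatch below are entirely invented:

```typescript
// Sketch only: listen for "Robot <n> ... move to <target>" and dispatch it.
// webkitSpeechRecognition is Chrome's prefixed Web Speech API entry point.
declare const webkitSpeechRecognition: new () => any;

const recognition = new webkitSpeechRecognition();
recognition.lang = 'en-US';

recognition.onresult = (event: any) => {
  const utterance: string = event.results[0][0].transcript;
  // Naive parsing for illustration; a real system needs a proper grammar.
  const match = utterance.match(/robot (\d+).*?move to (.+)/i);
  if (match) {
    const [, robotId, target] = match;
    console.log(`Dispatching robot ${robotId} to "${target}"`);
    // Here one would call the same command service the buttons use.
  }
};

recognition.start();
```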


What is the value of UI/UX for a company entering the market with a purely digital product?

Even though our software is “only” a digital product, it has something very special about it: it controls and moves objects in physical reality. That is a big responsibility in terms of the quality and reliability of the software. The robots themselves are, of course, “intrinsically safe” – that is, they have built-in safety systems that prevent accidents. However, I would argue that many people approach the subject of robotics with great skepticism. Various science fiction movies show this very clearly; everyone knows the films in which robots with artificial intelligence take over the world. Many people are fascinated by robotics, others are afraid of it. Therefore, the software used to control them must build trust in the system and convey a sense of control. The quality standard of the software must be correspondingly high. Errors in the visualization of a robot’s position would destroy that trust: if a robot suddenly jumps five meters on the overview map, the user concludes that the software doesn’t know what it’s actually doing. That’s why we attach great importance to making the visualization and control as error-free as possible. In the area of unit tests, we have a high level of code coverage, and for visual tests we will continue to expand the test suite. The software must be fast and reliable, and operating the AMRs must be intuitive and child’s play. The motto “this will work somehow” should not be our benchmark.
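
To show how such expectations can be pinned down, here is a hedged Jasmine spec (Angular’s default test runner) for a hypothetical worldToScreen() helper – exactly the kind of function whose correctness keeps a robot from “jumping” on the overview map:

```typescript
// Hypothetical helper: project a world position (meters) to map pixels.
// Screen y grows downward, world y grows upward, hence the sign flip.
function worldToScreen(x: number, y: number, pxPerMeter: number) {
  return { px: x * pxPerMeter, py: -y * pxPerMeter };
}

describe('worldToScreen', () => {
  it('maps a 1 m step to exactly pxPerMeter pixels', () => {
    const a = worldToScreen(0, 0, 50);
    const b = worldToScreen(1, 0, 50);
    expect(b.px - a.px).toBe(50);
    expect(b.py).toBe(a.py);
  });

  it('flips the y axis so "up" in the world is "up" on screen', () => {
    expect(worldToScreen(0, 2, 50).py).toBe(-100);
  });
});
```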


What do you think is the next major innovation in UI/UX?

The strategy game example I mentioned earlier shows how our software should behave in terms of visualization and control of AMRs: the same responsiveness, the same control patterns, the same intuitive usability. We should consider every keystroke and every extra click an effort for the user. Sending a job to a robot should require no more than three clicks. Operating the application must be pleasant for the user, and the visualization very appealing, like a 3D game. The user should be able to immediately recognize the objects from reality without much effort.


Speaking of gamification, can you give an example of good UI/UX and one of bad?

I appreciate the Material Design of Android and the Android ecosystem in general. The fact that Android is now running in cars and on TVs is great because the basic design and operating patterns are already familiar: if you know how to operate your smartphone, you’ll know how to operate the infotainment of a Volvo in the future. Uber, for example, also offers a really outstanding user experience – not just the application, but the entire experience. Let’s say you’re in Stuttgart and you need an Uber. You install the app, open it, select the vehicle and destination, and in a minute or two the Uber is ready. The fare is automatically charged to your credit card. Mobility simply means two taps on the screen. That’s really impressive. An example of poor UX, in my opinion, is the ticket machine at the train station. It is barely responsive, slow, has far too many buttons, and requires a lot of interaction. The user is already stressed, and then has to deal with bad software on top of that. You can tell that the software was not developed with the specific use case in mind.


Last question. What do you think will happen first: a million robo-taxis from Tesla or 10,000 robots equipped with NODE’s software?

Definitely 10,000 installations of NODE Robotics software.


_
The interview was conducted by Susanne Gottschaller and Joshua Balz.