Rainer Koch

Office hours:
by appointment

Rainer Koch

M. Eng. Dipl.-Ing.

Teaching areas

Lab course mENT1 (materials available on the intranet from Prof. Dr. A. Kremser)

 

Publications
 
2017:
  • Rainer Koch, Stefan May, and Andreas Nüchter: Effective distinction of transparent and specular reflective objects in point clouds of a multi-echo laser scanner. In ICAR - 18th International Conference on Advanced Robotics, 2017 (accepted)

    Abstract: A favoured sensor for mapping is a 3D laser scanner, since it offers a wide scanning range, precise measurements, and is usable indoors and outdoors. Hence, a mapping module delivers detailed, high-resolution maps which make safe navigation possible. Difficulties result from transparent and specular reflective objects, which cause erroneous and dubious measurements. At such objects, depending on the incident angle, measurements result from the object surface, from an object behind the transparent surface, or from an object mirrored with respect to the reflective surface. This paper describes an enhanced Pre-Filter-Module to distinguish between these cases. Two experiments demonstrate the usability and show that for single scans the identification of the mentioned objects in 3D is possible. The first experiment was made in an empty room with a mirror, the second in a stairway containing a glass door. Further, the results show that a discrimination between a specular reflective and a transparent object is possible. Especially for transparent objects, the detected size is restricted by the incident angle. That is why future work concentrates on implementing a post-filter module. Gained experience shows that collecting the data of multiple scans and post-processing them as soon as the object has been bypassed will improve the map.
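
    The following minimal sketch illustrates the general idea of flagging such measurements with a multi-echo scanner (it is not the Pre-Filter-Module of the paper; the function name and the threshold are illustrative): beams whose echoes disagree beyond a tolerance are marked as influenced by a transparent or specular reflective surface, and the closest echo is kept as a candidate surface point.

        ECHO_DISAGREEMENT = 0.05  # metres; illustrative threshold

        def flag_suspicious_beams(echoes_per_beam):
            """echoes_per_beam: list of per-beam lists of range readings in metres.
            Returns (candidate_surface_ranges, suspicious_flags)."""
            surfaces, suspicious = [], []
            for echoes in echoes_per_beam:
                valid = [e for e in echoes if e > 0.0]         # drop empty echo slots
                if not valid:
                    surfaces.append(float('nan'))
                    suspicious.append(False)
                    continue
                spread = max(valid) - min(valid)               # echoes agree on solid, opaque surfaces
                surfaces.append(min(valid))                    # closest echo = candidate surface point
                suspicious.append(spread > ECHO_DISAGREEMENT)  # disagreement hints at glass or a mirror
            return surfaces, suspicious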


  • Rainer Koch, Stefan May, Lena Böttcher, Maximilian Jahrsdörfer, Johannes Maier, Malte Trommer, and Andreas Nüchter: Out-of-lab calibration of a rotating 2D scanner for 3D mapping. Proc. SPIE 10332, Videometrics, Range Imaging, and Applications XIV, 1033207, 06/2017 (accepted)

    Abstract: Mapping is an essential task in mobile robotics. To fulfil advanced navigation and manipulation tasks, a 3D representation of the environment is required. Applying stereo cameras or Time-of-Flight (ToF) cameras is one way to achieve this. Unfortunately, they suffer from drawbacks which make it difficult to map properly. Therefore, costly 3D laser scanners are used. An inexpensive way to build a 3D representation is to use a 2D laser scanner and rotate the scan plane around an additional axis. A 3D point cloud acquired with such a custom device consists of multiple 2D line scans. The pose of each line scan needs to be determined to generate a 3D point cloud. The pose consists of the encoder feedback as well as parameters resulting from a calibration. Using external sensor systems is a common method to determine these calibration parameters, but it is costly and difficult when the robot needs to be calibrated outside the lab. Thus, this work presents a calibration method for a rotating 2D laser scanner. It uses a hardware setup to determine the required calibration parameters; this setup is light, small, and easy to transport, so an out-of-lab calibration is possible. The hardware components of the 3D scanner system are a HOKUYO UTM-30LX-EW 2D laser scanner, a Dynamixel servo motor, and a control unit. The calibration system consists of a hemisphere with a circular plate mounted in its interior. The algorithm is provided with a dataset of a single rotation of the laser scanner. To achieve a proper calibration result, the scanner needs to be located in the middle of the hemisphere. By means of geometric formulas the algorithm determines the individual deviations of the placed laser scanner; to minimize errors, it solves the formulas in an iterative process. To verify the algorithm, the laser scanner was mounted differently and the scanner position and the rotation axis were modified. In doing so, every deviation was compared with the algorithm results. Several measurement settings were tested repeatedly. Additionally, the length deviation of the laser scanner is determined, as it has an increased influence on the deviations during the measurement.
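
    For illustration, a simplified sketch of how such a rotating 2D scanner yields a 3D point cloud once calibration parameters are known (the two-parameter model and all names are assumptions for this sketch, not the paper's calibration procedure):

        import numpy as np

        def rot_z(angle):
            c, s = np.cos(angle), np.sin(angle)
            return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

        def rot_x(angle):
            c, s = np.cos(angle), np.sin(angle)
            return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

        def assemble_cloud(line_scans, servo_angles, lever_arm, tilt):
            """line_scans: list of (N, 3) arrays of points in the 2D scanner frame.
            servo_angles: encoder angle (rad) of the extra axis for each line scan.
            lever_arm, tilt: assumed calibration parameters (offset of the scanner
            from the rotation axis and misalignment of the scan plane)."""
            parts = []
            R_cal = rot_x(tilt)                                # scan-plane misalignment
            for scan, phi in zip(line_scans, servo_angles):
                in_axis_frame = scan @ R_cal.T + lever_arm     # scanner frame -> rotation-axis frame
                parts.append(in_axis_frame @ rot_z(phi).T)     # apply encoder rotation
            return np.vstack(parts)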


  • Rainer Koch, Stefan May, and Andreas Nüchter: Detection and Purging of Specular Reflective and Transparent Object Influences in 3D Range Measurements. In 6th ISPRS International Workshop 3D-ARCH 2017: "3D Virtual Reconstruction and Visualization of Complex Architectures", Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., Nafplio, Greece, March 2017 (accepted)

    Abstract: 3D laser scanners are favoured sensors for mapping in mobile service robotics for indoor and outdoor applications, since they deliver precise measurements over a wide scanning range. The resulting maps are detailed because of their high resolution. Based on these maps, robots navigate through rough terrain and fulfil advanced manipulation and inspection tasks. In case of specular reflective and transparent objects, e.g., mirrors, windows, or shiny metals, the laser measurements get corrupted. Based on the type of object and the incident angle of the incoming laser beam, three results are possible: a measurement point on the object plane, a measurement behind the object plane, or a measurement of a reflected object. It is important to detect such situations in order to handle these corrupted points. This paper describes why it is difficult to distinguish between specular reflective and transparent surfaces. It presents a 3D-Reflection-Pre-Filter approach to identify specular reflective and transparent objects in point clouds of a multi-echo laser scanner. Furthermore, it filters the point clouds from influences of such objects and extracts the object properties for further investigation. Based on an Iterative-Closest-Point algorithm, reflective objects are identified. Object surfaces and points behind surfaces are masked according to their location. Finally, the processed point cloud is forwarded to a mapping module, and the object surface corners and the type of the surface are broadcast. Four experiments demonstrate the usability of the 3D-Reflection-Pre-Filter: the first in an empty room containing a mirror, the second in a stairway containing a glass door, the third in an empty room containing two mirrors, and the fourth in an office room containing a mirror. This paper demonstrates that for single scans the detection of specular reflective and transparent objects in 3D is possible, and that it is more reliable in 3D than in 2D. Nevertheless, collecting the data of multiple scans and post-filtering them as soon as the object has been bypassed should be pursued. This is why future work concentrates on implementing a post-filter module. Besides, the aim is to improve the discrimination between specular reflective and transparent objects.
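
    The masking step described above can be pictured with a short illustrative sketch (not the 3D-Reflection-Pre-Filter implementation; names and the tolerance are made up): given a detected surface plane, points close to the plane are labelled as the object surface, and points behind it, as seen from the scanner, are labelled as corrupted (mirrored or seen through the surface).

        import numpy as np

        def mask_against_plane(points, plane_normal, plane_point, on_plane_tol=0.03):
            """points: (N, 3) array in the scanner frame (scanner at the origin).
            plane_normal, plane_point: detected surface plane, normal pointing
            towards the scanner. Returns one label per point."""
            n = plane_normal / np.linalg.norm(plane_normal)
            signed_dist = (points - plane_point) @ n          # > 0: scanner side, < 0: behind the plane
            labels = np.full(len(points), 'valid', dtype=object)
            labels[np.abs(signed_dist) <= on_plane_tol] = 'on_surface'
            labels[signed_dist < -on_plane_tol] = 'behind'    # mirrored or seen-through points
            return labels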


  • Rainer Koch, Stefan May, Andreas Nüchter, and Patrick Murrmann: Identification of Transparent and Specular Reflective Material in Laser Scans to Discriminate Affected Measurements for Faultless Robotic SLAM. Journal of Robotics and Autonomous Systems (DOI: 10.1016/j.robot.2016.10.014) (accepted)

    Abstract: Mapping with laser scanners is the state-of-the-art method applied in service, industrial, medical, and rescue robotics. Although a lot of research has been done, maps still suffer from interferences caused by transparent and specular reflective objects. Glass, mirrors, and shiny or translucent surfaces cause erroneous measurements depending on the incident angle of the laser beam. In past experiments the Mirror Detector approach was implemented to determine such measurements with a multi-echo laser scanner; recognition is based on differences in the recorded measurements with regard to the distances of the echoes. This paper describes the research carried out to distinguish between reflective and transparent objects. The implemented Mirror Detector was specifically modified for the recognition of said objects, for which four experiments were conducted: one experiment to show the map of the original Mirror Detector; two experiments to investigate intensity characteristics based on angle, distance, and material; and one experiment to show an applied discrimination with the extended version of the Mirror Detector, the Reflection Classifier approach. To verify the results, a comparison with existing models was performed. This study showed that shiny metals, like aluminium, provide significant characteristics, while mirrors are characterized by a mixed model of glass and shiny metal. Transparent objects turned out to be challenging because their appearance in the sensor data strongly depends on the background. Nevertheless, these experiments show that discrimination of transparent and reflective materials based on the reflected intensity is possible and feasible.
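
    A hedged sketch of the intensity-based discrimination idea (the reference curves below are invented for illustration; the paper derives material characteristics experimentally): a measurement's reflected intensity at a given incident angle is compared against per-material reference models, and the closest model wins.

        import math

        def classify_material(intensity, incident_angle, reference_models):
            """intensity: recorded echo intensity; incident_angle: radians.
            reference_models: dict mapping a material name to a function
            angle -> expected intensity (obtained from calibration measurements).
            Returns the material whose model best explains the measurement."""
            best_name, best_err = None, float('inf')
            for name, model in reference_models.items():
                err = abs(model(incident_angle) - intensity)
                if err < best_err:
                    best_name, best_err = name, err
            return best_name

        # Hypothetical usage with made-up reference curves:
        models = {
            'shiny_metal': lambda a: 12000.0 * math.cos(a) ** 4,  # strong, strongly angle-dependent return
            'glass':       lambda a: 1500.0 * math.cos(a),        # weak return, background-dependent
        }
        print(classify_material(9000.0, 0.2, models))             # -> 'shiny_metal'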


2016:
  • Philipp Koch, Stefan May, Michael Schmidpeter, Markus Kühn, Christian Pfitzner, Christian Merkl, Rainer Koch, Martin Fees, Jon Martin, Daniel Ammon, and Andreas Nüchter: Multi-Robot Localization and Mapping Based on Signed Distance Functions. Journal of Intelligent & Robotic Systems (DOI: 10.1007/s10846-016-0375-7) (accepted)

    Abstract: This publication describes a 2D Simultaneous Localization and Mapping approach applicable to multiple mobile robots. The presented strategy uses data of 2D LIDAR sensors to build a dynamic representation based on Signed Distance Functions. Novelties of the approach are a joint map built in parallel instead of occasional merging of smaller maps, and a drift-limited localization which requires no loop-closure detection. A multi-threaded software architecture performs registration and data integration in parallel, allowing for drift-reduced pose estimation of multiple robots. Experiments demonstrate the application to single- and multi-robot mapping using simulated data, publicly accessible recorded data, two actual robots operating in a comparably large area, as well as a deployment of these units at the RoboCup Rescue League.
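
    The core of a signed-distance-function grid can be sketched as follows (a simplified, single-cell illustration under assumed parameters, not the published multi-robot implementation): each grid cell near a measured beam stores a truncated signed distance to the surface, updated as a running weighted average over all integrated scans.

        def update_tsdf_cell(cell, cell_range, measured_range, truncation=0.3):
            """cell: dict holding the running 'tsdf' value and its 'weight'.
            cell_range: distance from the sensor to the cell centre along the beam.
            measured_range: range returned by the LIDAR for that beam."""
            sdf = measured_range - cell_range                 # > 0 in front of the surface, < 0 behind it
            if sdf < -truncation:
                return                                        # far behind the surface: leave the cell untouched
            tsdf = max(min(sdf, truncation), -truncation) / truncation
            w = cell.get('weight', 0.0)
            cell['tsdf'] = (cell.get('tsdf', 0.0) * w + tsdf) / (w + 1.0)
            cell['weight'] = w + 1.0

        # Hypothetical usage: integrate one beam reading into two cells along the beam.
        cells = [{'range': 1.8}, {'range': 2.05}]
        for c in cells:
            update_tsdf_cell(c, c['range'], measured_range=2.0)
        print(cells)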


2015:
  • Rainer Koch, Stefan May, Philipp Koch, Markus Kühn, and Andreas Nüchter: Detection of Specular Reflections in Range Measurements for Faultless Robotic SLAM. In Luís Paulo Reis, António Paulo Moreira, Pedro U. Lima, Luis Montano, and Victor Muñoz-Martínez, editors, Robot 2015: Second Iberian Robotics Conference, volume 417 of Advances in Intelligent Systems and Computing, pages 133–145. Springer International Publishing, 2016 (accepted)

    Abstract: Laser scanners are state-of-the-art devices used for mapping in service, industry, medical, and rescue robotics. Although a lot of work has been done in laser-based SLAM, maps still suffer from interferences caused by objects like glass, mirrors, and shiny or translucent surfaces. Depending on the surface's reflectivity, a laser beam is deflected such that the returned measurements provide wrong distance data; at certain positions phantom-like objects appear. This paper describes a specular reflectance detection approach applicable to the emerging technology of multi-echo laser scanners in order to identify and filter reflective objects. Two filter stages are implemented. The first filter reduces errors in the current scan on the fly. A second filter evaluates a set of laser scans and is triggered as soon as a reflective surface has been passed. This makes the reflective surface detection more robust and is used to refine the registered map. Experiments demonstrate the detection and elimination of reflection errors and show that localization and mapping in environments containing mirrors and large glass fronts is improved.


  • Markus Kühn, Stefan May, and Rainer Koch: Benchmarking the Pose Accuracy of Different SLAM Approaches for Rescue Robotics. In ARC, 2015 (accepted)

    Abstract: Simultaneous Localization and Mapping (SLAM) is essential for a mobile robot: localizing itself and obtaining information about the environment enables the robot to interact with it. For this reason, different approaches to SLAM are used in the robotics community. In the RoboCup Rescue Challenge most teams use the Hector SLAM or GMapping approach. Since it is essential to obtain accurate estimates of the robot's position and the surrounding environment, the aim of this paper is to compare those approaches with the TSD SLAM approach developed over the last years at the Nuremberg Institute of Technology (NIT). Finally, this evaluates the quality of our SLAM approach in comparison to other state-of-the-art approaches.
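
    One common way to benchmark pose accuracy, shown here as an illustrative sketch (the paper's exact evaluation protocol may differ), is the root-mean-square error between estimated and ground-truth positions after associating the poses, e.g. by timestamp.

        import math

        def rmse_position_error(estimated, ground_truth):
            """estimated, ground_truth: lists of (x, y) positions, already associated
            pairwise (e.g. by timestamp). Returns the RMS translational error."""
            assert len(estimated) == len(ground_truth) and estimated
            sq = [(xe - xg) ** 2 + (ye - yg) ** 2
                  for (xe, ye), (xg, yg) in zip(estimated, ground_truth)]
            return math.sqrt(sum(sq) / len(sq))

        # Hypothetical usage comparing an estimated trajectory against ground truth:
        gt  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.1)]
        est = [(0.0, 0.1), (1.1, 0.0), (2.0, 0.3)]
        print(rmse_position_error(est, gt))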


  • Philipp Koch, Stefan May, Michael Schmidpeter, Markus Kühn, Jon Martin, Christian Pfitzner, Christian Merkl, Martin Fees, Rainer Koch, and Andreas Nüchter: Multi-Robot Localization and Mapping Based on Signed Distance Functions. In Autonomous Robot Systems and Competitions (ICARSC), 2015 IEEE International Conference on, 03/2015 2015 (accepted)

    Abstract: This publication describes a 2D Simultaneous Localization and Mapping approach applicable to multiple mobile robots. The presented strategy uses data of 2D LIDAR sensors to build a dynamic representation based on Signed Distance Functions. A multi-threaded software architecture performs registration and data integration in parallel, allowing for drift-reduced pose estimation of multiple robots. Experiments are provided demonstrating the application with single and multiple robot mapping using simulated data, publicly accessible recorded data, as well as two actual robots operating in a comparably large area.


2014:
  • Stefan May, Rainer Koch, Andreas Nüchter, Christian Pfitzner, Philipp Koch, and Christian Merkl: A Generalized 2D and 3D Multi-Sensor Data Integration Approach Based on Signed Distance Functions for Multi-Modal Robotic Mapping. In Vision, Modeling and Visualization, 10/2014 (accepted)

    Abstract: This paper describes a data fusion approach for 3D sensors exploiting the assets of the signed distance function. The object-oriented model is described as well as the algorithm design. We developed a framework respecting different modalities for multi-sensor fusion, 3D mapping, and object localization. This approach is suitable for industrial applications that require contact-less object localization, like bin picking. In experiments we demonstrate 3D mapping as well as sensor fusion of a structured light sensor with a Time-of-Flight (ToF) camera.
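
    A minimal sketch of fusing two depth sensors through a signed distance function (illustrative only; the framework described in the paper is object-oriented and models full sensor modalities): each sensor contributes a signed-distance estimate for a voxel together with a confidence weight, and the voxel stores the weighted combination.

        def fuse_voxel(contributions):
            """contributions: list of (signed_distance, weight) pairs, one per sensor
            observation of the same voxel (e.g. structured light and ToF camera).
            Returns the fused signed distance and the accumulated weight."""
            total_w = sum(w for _, w in contributions)
            if total_w == 0.0:
                return None, 0.0
            fused = sum(v * w for v, w in contributions) / total_w
            return fused, total_w

        # Hypothetical usage: a confident structured-light reading and a noisier ToF reading.
        print(fuse_voxel([(0.02, 1.0), (0.06, 0.3)]))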


  • Christian Pfitzner, Wolfgang Antal, Peter Hess, Stefan May, Christian Merkl, Philipp Koch, Rainer Koch, and Max Wagner: 3D Multi-Sensor Data Fusion for Object Localization in Industrial Applications. In ISR/Robotik 2014; 41st International Symposium on Robotics; Proceedings of, pages 1–6, June 2014 (accepted)

    Abstract: This paper describes a data fusion approach for 3D sensors exploiting the assets of the signed distance function. The object-oriented model is described as well as the algorithm design. We developed a framework respecting different modalities for multi-sensor fusion, 3D mapping, and object localization. This approach is suitable for industrial applications that require contact-less object localization, like bin picking. In experiments we demonstrate 3D mapping as well as sensor fusion of a structured light sensor with a Time-of-Flight (ToF) camera.


2012:
  • Stefan May, Rainer Koch, Robert Scherlipp, and Andreas Nüchter: Robust Registration of Narrow-Field-of-View Range Images. In Proceedings of the 10th International IFAC Symposium on Robot Control (SYROCO '12), Dubrovnik, Croatia, September 2012 (accepted)

    Abstract: This paper focuses on range image registration for robot localization and environment mapping. It extends the well-known Iterative Closest Point (ICP) algorithm in order to deal with erroneous measurements. The treatment of measurement errors originating from external lighting, occlusions, or limitations in the measurement range is only rudimentary in the literature. In this context we present a non-parametric extension to the ICP algorithm that is derived directly from measurement modalities of sensors in projective space. We show how aspects from reverse calibration can be embedded in search-tree-based approaches. Experiments demonstrate the applicability to range sensors like the Kinect device, Time-of-Flight cameras, and 3D laser rangefinders. As a result, the image registration becomes faster and more robust.
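
    The reverse-calibration aspect can be sketched roughly as follows (a simplified illustration assuming a pinhole sensor model; not the paper's implementation): instead of querying a search tree, each model point is projected into the image plane of the current range image, and the measurement falling on that pixel is taken as its correspondence candidate.

        import numpy as np

        def projective_correspondences(model_points, depth_image, fx, fy, cx, cy):
            """model_points: (N, 3) points already expressed in the current sensor frame.
            depth_image: (H, W) range image of the current frame (0 = invalid pixel).
            fx, fy, cx, cy: assumed pinhole intrinsics. Returns pairs of
            (model point index, (row, col)) for points hitting a valid pixel."""
            h, w = depth_image.shape
            pairs = []
            for i, (x, y, z) in enumerate(model_points):
                if z <= 0.0:
                    continue                                  # behind the sensor
                u = int(round(fx * x / z + cx))               # project with the pinhole model
                v = int(round(fy * y / z + cy))
                if 0 <= v < h and 0 <= u < w and depth_image[v, u] > 0.0:
                    pairs.append((i, (v, u)))                 # pixel hit: use as correspondence candidate
            return pairs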