Autonomy and Perception

Autonomy and perception are two key aspects of modern robotics, and our work spans both. On the perception side, we apply vision and radar technologies to the development of autonomous robots; on the autonomy side, we use classical and modern control methods as well as reinforcement learning.

Research topics

Sample Efficient RL


Improving RL sample efficiency with two new tools: Episodic Noise and Difficulty Manager.


Safe Reinforcement Learning

An investigation of Safe RL algorithm performance on a realistic industrial robot (a Driveable Vertical Mast Lift).

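Many of the Safe RL algorithms typically compared in such studies (e.g. Lagrangian variants of PPO) enforce a safety constraint through dual ascent on a Lagrange multiplier. A minimal illustrative sketch of that update, not tied to this project's specific algorithms:

```python
def lagrangian_step(lmbda, cost_return, cost_limit, lr=0.01):
    """One dual-ascent update of the Lagrange multiplier used by many
    Lagrangian Safe RL methods: the multiplier grows while the measured
    cost return exceeds the allowed limit (raising the penalty on unsafe
    behaviour) and shrinks back towards zero otherwise.
    """
    return max(0.0, lmbda + lr * (cost_return - cost_limit))
```

During training, the policy is optimised on reward minus `lmbda` times cost, so a growing multiplier steadily trades task performance for constraint satisfaction.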

Point2Depth: Radar Point Cloud to Depth Image


Point2Depth is a contrastive-learning-based technique for translating mmWave radar point clouds into depth images.

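To illustrate the contrastive objective behind this kind of cross-modal alignment, here is a generic symmetric InfoNCE loss in NumPy (a sketch, not the actual Point2Depth implementation): embeddings of the radar point cloud and the depth image from the same scene are pulled together, mismatched pairs pushed apart.

```python
import numpy as np

def info_nce(radar_emb, depth_emb, temperature=0.1):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    radar_emb, depth_emb: (B, D) L2-normalised feature vectors, where
    row i of each matrix comes from the same scene (the positive pair).
    """
    logits = radar_emb @ depth_emb.T / temperature  # (B, B) similarities
    labels = np.arange(len(logits))                 # positives on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)        # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average over both directions: radar->depth and depth->radar
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimising such a loss aligns the two feature spaces, after which a decoder can generate a depth image from the radar-side embedding.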

GT-MilliNoise: Graph transformer for point-wise denoising of indoor millimetre-wave point clouds


GT-MilliNoise is a learning-based denoiser for indoor millimetre-wave radar point clouds. We also released MilliNoise, a dataset of mmWave radar point clouds in which each point is labelled with sub-millimetre accuracy.

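For context, the classical alternative to a learned denoiser is statistical outlier removal, which drops points whose average distance to their nearest neighbours is anomalously large. A minimal NumPy sketch of that baseline (illustrative only, not the GT-MilliNoise method):

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=1.0):
    """Classical point-cloud denoising baseline: remove points whose mean
    distance to their k nearest neighbours exceeds the cloud-wide mean by
    more than std_ratio standard deviations.

    points: (N, 3) array; returns the filtered (M, 3) array, M <= N.
    """
    # full pairwise distance matrix (fine for the small, sparse clouds
    # typical of mmWave radar frames)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)  # skip column 0 (distance to self)
    keep = mean_knn < mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]
```

A learned, point-wise classifier like GT-MilliNoise targets the same decision (keep vs. discard each point) but can exploit features beyond local geometry.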

APEIRON: a Multimodal Drone Dataset Bridging Perception and Network Data


APEIRON is a multimodal aerial dataset bridging the gap between perception and network data in outdoor environments, fostering multidisciplinary research.
