Various research video demos with links to available open access manuscripts, open source software and datasets.

Robotic Grasping on Monocular Images

Issue: robotic grasping of previously unseen objects.

Approach: representing annotated grasps as a continuous Gaussian grasp map achieves a higher success rate on a simulated robotic grasping benchmark when training a neural network based generative grasping model.
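
As an illustration, the sketch below renders annotated grasp centres as a continuous Gaussian quality map rather than a binary region; the function name, sigma value and per-pixel maximum convention are assumptions for illustration, not the exact formulation of the cited paper.

```python
import numpy as np

def gaussian_grasp_map(shape, centres, sigma=2.0):
    """Render annotated grasp centres as a continuous grasp-quality map.

    Each grasp centre contributes a 2D Gaussian peaking at 1.0, in place
    of a binary grasp region. Hypothetical sketch only: names, sigma and
    map conventions are assumptions, not the cited paper's formulation.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    q = np.zeros(shape, dtype=np.float64)
    for (cy, cx) in centres:
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
        q = np.maximum(q, g)  # keep the strongest grasp evidence per pixel
    return q

# Example: a 64x64 quality map with two annotated grasp centres.
quality = gaussian_grasp_map((64, 64), [(20, 20), (40, 50)], sigma=3.0)
```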

Application: improved grasp success rates when tested on the same simulated robot arm, by avoiding collisions with the object, achieving 87.94% accuracy.

Direct transfer to a real robotic arm, with high inference speed and high grasp success rates, without the need for transfer learning.

2 results

2022

[prew22grasping] Evaluating Gaussian Grasp Maps for Generative Grasping Models (W. Prew, T.P. Breckon, M.J.R. Bordewich, U. Beierholm), In Proc. Int. Joint Conf. Neural Networks, IEEE, pp. 1-9, 2022. Keywords: robotic manipulation, robot grasping, depth images. [bibtex] [pdf] [doi] [arxiv] [demo] [software]

2020

[prew20grasping] Improving Robotic Grasping on Monocular Images Via Multi-Task Learning and Positional Loss (W. Prew, T.P. Breckon, M.J.R. Bordewich, U. Beierholm), In Proc. Int. Conf. Pattern Recognition, IEEE, pp. 9843-9850, 2020. Keywords: robotic manipulation, robot grasping, depth images. [bibtex] [pdf] [doi] [arxiv] [talk] [poster]

Brain-Computer Interface for Real-time Humanoid Robot Navigation

Issue: variable position and size SSVEP stimuli for a real-time tele-operation BCI application.

Approach: variable position and size SSVEP stimuli, derived from real-time object detection pixel regions within the live video stream from a tele-operated humanoid robot traversing a natural environment, with CNN architectures for both scene object detection and dry-EEG bio-signal decoding.
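
For illustration, a minimal 1D-CNN decoder for windows of multi-channel dry-EEG might look as follows; the layer sizes, electrode count and number of stimulus classes are assumptions, not the architecture used in the cited work.

```python
import torch
import torch.nn as nn

class SSVEPDecoder(nn.Module):
    """Minimal 1D-CNN sketch for decoding SSVEP classes from dry-EEG.

    Illustrative only: 8 electrodes, 4 stimulus classes and all layer
    sizes are assumptions, not the cited paper's architecture.
    """
    def __init__(self, eeg_channels=8, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(eeg_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # pool over the time window
        )
        self.classify = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, channels, window)
        f = self.features(x).squeeze(-1)
        return self.classify(f)           # logits over stimulus classes

# Example: decode one 256-sample window of 8-channel EEG.
logits = SSVEPDecoder()(torch.randn(1, 8, 256))
```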

Application: Demonstrable real-time BCI teleoperation of a humanoid robot, based on the use of naturally occurring in-scene stimuli.

Successful use of a novel variable SSVEP BCI (varying: pixel pattern + region size/shape).

CNN based real-time decoding of dry-EEG bio-signals for interactive BCI applications.

1 result

2019

[aznan19navigation] Using Variable Natural Environment Brain-Computer Interface Stimuli for Real-time Humanoid Robot Navigation (N.K.N. Aznan, J. Connolly, N. Al Moubayed, T.P. Breckon), In Proc. Int. Conf. Robotics and Automation, IEEE, pp. 4889-4895, 2019. Keywords: ssvep, brain computer interface, bci, cnn, neural networks, convolutional neural networks, deep learning, dry-eeg, robot guidance. [bibtex] [pdf] [doi] [arxiv] [demo] [poster]

Dense Gradient-based Features (DeGraF)

Issue: computationally efficient extraction of dense gradient-based features, based on the use of localized intensity-weighted centroids within the image.

Approach: whilst prior work concentrates on sparse feature derivations or computationally expensive dense scene sensing, we show that Dense Gradient-based Features (DeGraF) can be derived via initial multi-scale division-of-Gaussians preprocessing, weighted centroid gradient calculation and either local saliency (DeGraF-α) or signal-to-noise inspired (DeGraF-β) final-stage filtering.
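
The core centroid-gradient step might be sketched as below; this hypothetical fragment omits the multi-scale division-of-Gaussians preprocessing and both final-stage filtering variants, and the cell size and output layout are assumptions.

```python
import numpy as np

def degraf_gradients(img, cell=8):
    """Per-cell gradient vectors from intensity-weighted centroids.

    For each cell, the gradient is taken as the offset of the
    intensity-weighted centroid from the geometric cell centre; a sketch
    of the core idea only, omitting preprocessing and filtering stages.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:cell, 0:cell].astype(np.float64)
    grads = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            patch = img[y:y + cell, x:x + cell].astype(np.float64)
            total = patch.sum() or 1.0
            cy = (ys * patch).sum() / total   # intensity-weighted centroid
            cx = (xs * patch).sum() / total
            mid = (cell - 1) / 2.0            # geometric cell centre
            grads.append((y + mid, x + mid, cy - mid, cx - mid))
    return np.array(grads)                    # rows: (py, px, dy, dx)
```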

Application: DeGraF is shown to perform admirably against the state of the art in terms of feature density, computational efficiency and feature stability.

Our approach is evaluated under a range of environmental conditions typical of automotive sensing applications with strong feature density requirements.

1 result

2016

[katramados16degraf] Dense Gradient-based Features (DeGraF) for Computationally Efficient and Invariant Feature Extraction in Real-time Applications (I. Katramados, T.P. Breckon), In Proc. Int. Conf. on Image Processing, IEEE, pp. 300-304, 2016. Keywords: dense features, feature invariance, feature points, intensity weighted centroids, automotive vision. [bibtex] [pdf] [doi] [demo]

Multi-Modal Target Detection for Autonomous Wide Area Search and Surveillance

Issue: the realization of a real-time methodology for the automated detection of people and vehicles using combined visible-band (EO), thermal-band (IR) and radar sensing from a deployed network of multiple autonomous platforms (ground and aerial).

Approach: a range of automatic classification approaches is proposed, driven by underlying machine learning techniques, facilitating the automatic detection of either target type with cross-modal target confirmation.
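
A simple form of cross-modal confirmation can be sketched as follows, boosting the confidence of a detection in one modality when a co-registered detection overlaps it in another; the overlap threshold and confidence model are illustrative assumptions, not the paper's scheme.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def cross_modal_confirm(eo_dets, ir_dets, overlap=0.3, boost=0.2):
    """Raise confidence of EO detections confirmed by co-registered IR.

    Hypothetical sketch of the cross-modal confirmation idea; thresholds
    and confidence handling are assumptions. eo_dets / ir_dets: lists of
    (box, confidence) pairs in a common image frame.
    """
    out = []
    for box, conf in eo_dets:
        if any(iou(box, b) >= overlap for b, _ in ir_dets):
            conf = min(1.0, conf + boost)   # confirmed in both modalities
        out.append((box, conf))
    return out
```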

Application: generalised wide area search and surveillance is a common-place tasking for multi-sensor-equipped autonomous systems. Here we present a key supporting topic to this task: the automatic interpretation, fusion and reporting of detected targets from multi-modal sensor information received from multiple autonomous platforms deployed for wide-area environment search.

This facilitates real-time target detection, reported with varying levels of confidence, using information from both multiple sensors and multiple sensor platforms to provide environment-wide situational awareness.

Extended results present both people and vehicle detection under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Episodic target detection, evaluated over a number of wide-area environment search and reporting tasks, generally exceeds 90% for the targets considered here.

1 result

2013

[breckon13autonomous] Multi-Modal Target Detection for Autonomous Wide Area Search and Surveillance (T.P. Breckon, A. Gaszczak, J. Han, M.L. Eichner, S.E. Barnes), In Proc. SPIE Emerging Technologies in Security and Defence: Unmanned Sensor Systems, SPIE, Volume 8899, No. 01, pp. 1-19, 2013. Keywords: autonomous robots, grand challenge, wide area search, search and rescue, UAV, infrared, thermal. [bibtex] [pdf] [doi] [demo]

Cross-Spectral Visual Simultaneous Localization And Mapping (SLAM)

Issue: the classic problem of robot navigation via visual simultaneous localization and mapping (SLAM), but introducing the concept of dual optical and thermal (cross-spectral) sensing with the addition of sensor handover from one to the other.

Approach: we use a novel combination of two primary sensors: co-registered optical and thermal cameras. Mobile robot navigation is driven by two simultaneous camera images from the environment over which feature points are extracted and matched between successive frames. A bearing-only visual SLAM approach is then implemented using successive feature point observations to identify and track environment landmarks using an extended Kalman filter (EKF).
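
As a sketch of the landmark update at the heart of such a filter, the fragment below performs a single bearing-only EKF update of a 2D landmark from a known robot position; the full filter in the paper also estimates the six-degree-of-freedom robot state, so this is an illustrative simplification.

```python
import numpy as np

def ekf_bearing_update(mu, P, z, robot_xy, R=0.01):
    """One EKF update of a 2D landmark from a single bearing observation.

    mu: landmark mean, np.array (2,); P: 2x2 covariance; z: measured
    bearing (rad) from a known robot position robot_xy; R: bearing noise
    variance. Minimal sketch of the bearing-only update in EKF SLAM.
    """
    dx, dy = mu[0] - robot_xy[0], mu[1] - robot_xy[1]
    q = dx * dx + dy * dy
    z_hat = np.arctan2(dy, dx)                  # predicted bearing
    H = np.array([[-dy / q, dx / q]])           # measurement Jacobian
    innov = (z - z_hat + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T / S                             # Kalman gain, shape (2, 1)
    mu = mu + (K * innov).ravel()
    P = (np.eye(2) - K @ H) @ P
    return mu, P
```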

Application: Six-degree-of-freedom mobile robot and environment landmark positions are managed by the EKF approach illustrated using optical, thermal and combined optical/thermal features in addition to handover from one sensor to another. Sensor handover is primarily targeted at a continuous SLAM operation during varying illumination conditions (e.g., changing from night to day).

The final methodology is tested in outdoor environments with variation in light conditions and robot trajectories. The results illustrate that the additional use of a thermal sensor improves the accuracy of landmark detection and that sensor handover is viable for solving the SLAM problem using this sensor combination.

1 result

2013

[magnabosco13slam] Cross-Spectral Visual Simultaneous Localization And Mapping (SLAM) with Sensor Handover (M. Magnabosco, T.P. Breckon), In Robotics and Autonomous Systems, Volume 63, No. 2, pp. 195-208, 2013. Keywords: cross-spectral SLAM, sensor handover, self-localisation and mapping, thermal imagery, multi-modal SLAM, optical thermal SLAM. [bibtex] [pdf] [doi] [demo]

Video Stabilization for Tele-operated Robots

Issue: Within tele-operated mobile robotics, the resulting video imagery frequently suffers from vibration artefacts compromising the accuracy, longevity and security of operation.

Approach: Without prior knowledge of the robot ego-motion (vibration characteristics) we develop a novel four stage filtering approach to identify robust Local Motion Vectors (LMV) for Global Motion Vector (GMV) estimation in successive video frames whilst preserving the required real-time responsiveness for tele-operation.
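
As a simplified stand-in for that four-stage filter, the sketch below rejects outlier LMVs with a median-absolute-deviation test and takes the median of the survivors as the GMV; the test and threshold are assumptions for illustration, not the paper's method.

```python
import numpy as np

def global_motion_vector(lmvs, mad_k=3.0):
    """Robust GMV estimate from per-block local motion vectors (LMVs).

    A simplified stand-in for the paper's four-stage LMV filtering:
    outliers are rejected by a median-absolute-deviation test and the
    GMV is the median of the surviving vectors.
    lmvs: (N, 2) array of per-block (dx, dy) estimates between frames.
    """
    lmvs = np.asarray(lmvs, dtype=np.float64)
    med = np.median(lmvs, axis=0)
    mad = np.median(np.abs(lmvs - med), axis=0) + 1e-9
    keep = np.all(np.abs(lmvs - med) <= mad_k * mad, axis=1)
    if not keep.any():              # degenerate case: fall back to median
        return med
    return np.median(lmvs[keep], axis=0)  # (dx, dy) to compensate per frame
```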

Application: We aim to automatically remove these unwanted visual effects using a novel real-time video stabilization approach.

Prior work for hand-held and vehicle mounted cameras is ill-suited to the high-frequency, large magnitude (10-15% of image size) vibration encountered on the short wheelbase, non-suspended robotic platforms typically deployed for such tasks.

Experimental results show that our method provides both significant qualitative visual improvement and a quantitative reduction in measurable video image displacement.

1 result

2013

[chereau13stablization] Robust Motion Filtering as an Enabler to Video Stabilization for a Tele-operated Mobile Robot (R. Chereau, T.P. Breckon), In Proc. SPIE Electro-Optical Remote Sensing, Photonic Technologies, and Applications VII, SPIE, Volume 8897, No. 01, pp. 1-17, 2013. Keywords: tele-operation, EOD, stablization, motion vector filtering, vibration. [bibtex] [pdf] [doi] [demo]

MoD Grand Challenge - Real-time Object Detection

The Challenge: “Create a system with a high degree of autonomy that can detect, identify, monitor and report a comprehensive range of military threats in an urban environment” - UK Ministry of Defence (2008).

A competitive challenge set by the UK Ministry of Defence, with 16 teams drawn from UK academia and industry competing in a finale at Copehill Down Village (a Fighting In Built Up Areas training facility).

Approach: Sensing & Autonomous Tactical Urban Reconnaissance Network (SATURN) - a connected sensing network of an autonomous High Level Unmanned Aerial Vehicle (HLUAV), Micro Aerial Vehicle (MAV) and Unmanned Ground Vehicle (UGV) providing optical/thermal imagery for automated target detection and subsequent platform tasking.

Sensing is provided from optical image, thermal image and radar feeds from the UGV, optical imagery from the MAV and optical/thermal imagery from the HLUAV.

Winners of the R.J. Mitchell Trophy: the Stellar Team, comprising Cranfield University, Selex Galileo, Marshall SDG, TRW Conekt, Blue Bear Systems Research and Stellar Research.

2 results

2013

[breckon13autonomous] Multi-Modal Target Detection for Autonomous Wide Area Search and Surveillance (T.P. Breckon, A. Gaszczak, J. Han, M.L. Eichner, S.E. Barnes), In Proc. SPIE Emerging Technologies in Security and Defence: Unmanned Sensor Systems, SPIE, Volume 8899, No. 01, pp. 1-19, 2013. Keywords: autonomous robots, grand challenge, wide area search, search and rescue, UAV, infrared, thermal. [bibtex] [pdf] [doi] [demo]

2009

[wahren09uavgc] Development of a Two-Tier Unmanned Air System for the MoD Grand Challenge (K. Wahren, I. Cowling, Y. Patel, P. Smith, T.P. Breckon), In Proc. 24th Int. Conf. on Unmanned Air Vehicle Systems, pp. 13.1-13.9, 2009. Keywords: threat detection, sniper detection, MoD grand challenge, vehicle detection, thermal person detection, UAV aerial image classification, thermal image processing, path detection, robot guidance, terrain classification, road following. [bibtex] [demo]

Traversable Pathway Detection

Issue: Traversable surface determination and obstacle detection in unstructured environments for autonomous vehicle (robot) navigation.

Approach: real-time extraction of image features using colour and texture segmentation, in combination with temporal memory modelling initialised from an a priori "safe zone", against which the environment is compared to produce a traversability map of the immediate surroundings. In the resulting traversability map, lighting and water artefacts are eliminated, including shadows, reflections and water prints.
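
The safe-zone comparison at the heart of this idea might be sketched as below, fitting a Gaussian colour model to an assumed-traversable region in front of the vehicle and testing every pixel by Mahalanobis distance; the safe-zone geometry, threshold and single colour space are assumptions, omitting the paper's colour-space fusion and temporal memory modelling.

```python
import numpy as np

def traversability_map(img_hsv, thresh=3.0):
    """Classify pixels against a colour model of an a priori "safe zone".

    Sketch of the core idea only: a Gaussian colour model is fitted to
    the image region directly in front of the vehicle (assumed
    traversable) and every pixel is tested by Mahalanobis distance.
    """
    h, w, _ = img_hsv.shape
    # Hypothetical safe zone: bottom 20% of the image, central 40% width.
    safe = img_hsv[int(0.8 * h):, int(0.3 * w):int(0.7 * w)]
    safe = safe.reshape(-1, 3).astype(np.float64)
    mean = safe.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(safe, rowvar=False) + 1e-6 * np.eye(3))
    d = img_hsv.reshape(-1, 3).astype(np.float64) - mean
    m2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)   # squared Mahalanobis
    return (m2 < thresh ** 2).reshape(h, w)        # True = traversable
```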

Application: We present a real-time approach for traversable surface detection using a low-cost monocular camera mounted on an autonomous vehicle.

The performance of this approach is extensively evaluated over varying terrain and environmental conditions. The results show a mean accuracy of 97% over this comprehensive test set (shown in the video with non-traversable regions illustrated as a red overlay on the right-hand version of the image).

1 result

2009

[katramados09travsurface] Real-Time Traversable Surface Detection by Colour Space Fusion and Temporal Analysis (I. Katramados, S. Crumpler, T.P. Breckon), In Proc. Int. Conf. on Computer Vision Systems, Springer, Volume 5815, pp. 265-274, 2009. Keywords: path detection, robot guidance, Traversable pathway, terrain classification, road following, robotic navigation. [bibtex] [pdf] [doi] [demo] [dataset] [poster]