Various research video demos with links to available open access manuscripts, open source software and datasets.

Unmanned Aerial Vehicle Visual Detection and Tracking

Issue: the automated detection and tracking of UAV is a fundamental task in aerial security systems.

Approach: Common technologies for UAV detection include visible-band and thermal infrared imaging, radio frequency and radar. Recent advances in deep neural networks (DNN) for image-based object detection open the possibility of using visual information for this detection and tracking task.

Application: These detection architectures can be implemented as backbones for visual tracking systems, thereby enabling persistent tracking of UAV incursions.

To date, no comprehensive performance benchmark exists that applies DNN to visible-band imagery for UAV detection and tracking. To this end, three datasets with varied environmental conditions for UAV detection and tracking, comprising a total of 241 videos (331,486 images), are assessed using four detection architectures and three tracking frameworks.

The best performing detector architecture obtains an mAP of 98.6% and the best performing tracking framework obtains a MOTA of 98.7%. Cross-modality evaluation is carried out between the visible and infrared spectra, achieving a maximal 82.8% mAP on visible images when training in the infrared modality.

These results provide the first public multi-approach benchmark for state-of-the-art deep learning-based methods and give insight into which detection and tracking architectures are effective in the UAV domain.
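Benchmarks of this kind typically match predicted bounding boxes to ground truth via intersection-over-union (IoU), the measure underlying the mAP figures above. A minimal sketch of the IoU computation (illustrative only, not the benchmark's evaluation code):

```python
def iou(box_a, box_b):
    # boxes given as (x1, y1, x2, y2); returns intersection-over-union in [0, 1]
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is typically counted as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold (commonly 0.5), from which precision-recall curves and mAP follow.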

2 results

2022

[organisciak22uav-reid] UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery (D. Organisciak, M. Poyser, A. Alsehaim, B.K.S. Isaac-Medina, S. Hu, T.P. Breckon, H.P.H. Shum), In Proc. Int. Conf. on Computer Vision Theory and Applications, IEEE, 2022. (to appear) Keywords: drone detection, aerial reidentification, Re-ID, UAV, UAS, tracking. [bibtex] [pdf] [software] [more information]

2021

[isaac21uav] Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark (B.K.S. Isaac-Medina, M. Poyser, D. Organisciak, C.G. Willcocks, T.P. Breckon, H.P.H. Shum), In Proc. Int. Conf. on Computer Vision Workshops, IEEE, pp. 1223-1232, 2021. (Workshop on Detection and Tracking of Unmanned Aerial Vehicle in the Wild) Keywords: drone detection, uav detection, unmanned aerial vehicles, aerial object detection, deep learning, convolutional neural networks, object detection, small object detection, tracking, thermal, infrared. [bibtex] [pdf] [doi] [arxiv] [demo] [software] [more information]

Salient Object Detection in Aerial Video from Drones

Issue: effective drone deployment for search and rescue tasks involves the manual review of large amounts of video imagery for generic objects of interest.

Approach: we propose a deep model using a visual saliency approach to automatically analyse and detect anomalies in aerial surveillance video.

Application: Unmanned Aerial Vehicles (UAV, "drones") can be used to great effect for wide-area searches such as search and rescue operations. UAV enable search and rescue teams to cover large areas more effectively and efficiently.

Our current Temporal Contextual Saliency (TeCS) model is based on the state-of-the-art in visual saliency detection using deep convolutional neural networks (CNN), with temporal information retained via a convolutional Long Short-term Memory (LSTM) layer, in order to consider both local and scene context.

With the addition of temporal reasoning, our model achieves significantly improved results on a benchmark dataset compared to the state-of-the-art in saliency detection and our earlier work in the field.
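The TeCS model retains temporal information via a convolutional LSTM layer; as a much simpler stand-in for that idea, the sketch below carries a per-frame saliency map forward in time via an exponential moving average (purely illustrative — this is not the paper's architecture):

```python
def smooth_saliency(frames, alpha=0.7):
    # frames: list of 2D saliency maps (lists of lists of floats in [0, 1]).
    # An exponential moving average over time -- a crude stand-in for the
    # convolutional LSTM state used in the actual TeCS model.
    state = None
    out = []
    for frame in frames:
        if state is None:
            state = [row[:] for row in frame]
        else:
            state = [[alpha * s + (1 - alpha) * f for s, f in zip(srow, frow)]
                     for srow, frow in zip(state, frame)]
        out.append([row[:] for row in state])
    return out
```

The key point both share: the saliency estimate at each frame depends on the preceding frames, suppressing one-frame flicker that a purely per-frame detector would produce.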

2 results

2021

[gokstorp21saliency] Temporal and Non-Temporal Contextual Saliency Analysis for Generalized Wide-Area Search within Unmanned Aerial Vehicle (UAV) Video (S. Gökstorp, T.P. Breckon), In The Visual Computer, Springer, 2021. (to appear) Keywords: UAV, drone, saliency, search and rescue, SAR operations, wide-area search, video saliency. [bibtex] [pdf] [doi] [demo] [software]

2010

[sokalski10uavsalient] Automatic Salient Object Detection in UAV Imagery (J. Sokalski, T.P. Breckon, I. Cowling), In Proc. 25th Int. Conf. on Unmanned Air Vehicle Systems, pp. 11.1-11.12, 2010. Keywords: salient objects, image saliency, salient object search, UAV. [bibtex] [pdf] [poster]

Large-Scale Swarming Drone Operability

Issue: to explore the technical feasibility and military utility of a swarm of up to 20 small UAV operating collaboratively.

Approach: Following 2 earlier phases, a £2.5 million contract was awarded to an industry team led by Blue Bear Systems Research including Plextek DTS, IQHQ, Airbus and Durham University as the culmination of the Defence Science and Technology Laboratory (DSTL) "Many Drones Make Light Work" competition.

Application: A swarm of 20 drones completed the largest collaborative, military focused evaluation of swarming uncrewed aerial vehicles (UAV) in the UK.

The swarm consisted of 5 different types and sizes of fixed wing drones, with different operational capabilities, together with 6 different payload types, flying representative tasks at RAF Spadeadam in Cumbria. Three operators in the Blue Bear Mobile Command and Control System (MCCS) managed the entire swarm whilst simultaneously handling different, collaborative payload analysis tasks.

The UAV flew simultaneous Beyond Visual Line Of Sight (BVLOS) cooperative tasks, with Blue Bear collaborative autonomy ensuring they all contributed to overall mission goals. Throughout the 2 weeks of trials, more than 220 sorties were undertaken with Durham University providing payload support based on related prior work in the field.
[Text acknowledgment: DSTL Press Release - Published: 28 January 2021].

4 results

2020

[gaus20transfer] Visible to Infrared Transfer Learning as a Paradigm for Accessible Real-time Object Detection and Classification in Infrared Imagery (Y.F.A. Gaus, N. Bhowmik, B.K.S. Isaac-Medina, T.P. Breckon), In Proc. Conf. Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies, SPIE, Volume 11542, pp. 13-27, 2020. Keywords: far infrared, transfer learning, thermal imaging, people detection, vehicle detection. [bibtex] [pdf] [doi] [demo]

2016

[kundegorski16vehicle] Real-time Classification of Vehicle Types within Infra-red Imagery (M.E. Kundegorski, S. Akcay, G. Payen de La Garanderie, T.P. Breckon), In Proc. SPIE Optics and Photonics for Counterterrorism, Crime Fighting and Defence, SPIE, Volume 9995, pp. 1-16, 2016. Keywords: vehicle sub-category classification, thermal target tracking, bag of visual words, histogram of oriented gradient, convolutional neural network, sensor networks, passive target positioning, vehicle localization. [bibtex] [pdf] [doi] [demo]

2013

[breckon13autonomous] Multi-Modal Target Detection for Autonomous Wide Area Search and Surveillance (T.P. Breckon, A. Gaszczak, J. Han, M.L. Eichner, S.E. Barnes), In Proc. SPIE Emerging Technologies in Security and Defence: Unmanned Sensor Systems, SPIE, Volume 8899, No. 01, pp. 1-19, 2013. Keywords: autonomous robots, grand challenge, wide area search, search and rescue, UAV, infrared, thermal. [bibtex] [pdf] [doi] [demo]

2011

[gaszczak11uavpeople] Real-time People and Vehicle Detection from UAV Imagery (A. Gaszczak, T.P. Breckon, J. Han), In Proc. SPIE Conference Intelligent Robots and Computer Vision XXVIII: Algorithms and Techniques, Volume 7878, No. 78780B, 2011. Keywords: UAV image analysis, people detection, aerial image analysis, infrared, thermal. [bibtex] [pdf] [doi] [demo] [poster]

Autonomous End-to-End Drone Flight in Unstructured Environments

Issue: Increased growth in global Unmanned Aerial Vehicle (UAV, drone) usage has expanded applications into wide-area search and surveillance operations in unstructured outdoor environments, within which fully autonomous flight is challenging.

Approach: we propose an end-to-end multi-task regression-based learning approach capable of defining flight commands for navigation and exploration in unstructured environments based solely on closed-loop visual guidance from an onboard camera, regardless of the presence of structured scene features or additional sensors (i.e. GPS).

Application: Evaluation is performed using a software-in-the-loop pipeline which allows for a detailed evaluation against state-of-the-art techniques.

Extensive experiments demonstrate that this approach excels in performing dense exploration within the required search perimeter, is capable of covering wider search regions, and generalises to previously unseen and unexplored environments.
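A multi-task regression loss of this kind is commonly formed as a weighted sum of per-task errors over the predicted flight-command outputs. A minimal sketch of that combination (the task names and weights here are illustrative assumptions, not the values used in the paper):

```python
def multi_task_loss(pred, target, weights=(1.0, 1.0, 1.0)):
    # pred/target: per-task flight-command regression outputs, e.g. a
    # (roll, pitch, yaw) style triple (hypothetical task breakdown).
    # Returns the weighted sum of per-task squared errors.
    return sum(w * (p - t) ** 2 for w, p, t in zip(weights, pred, target))
```

Training then minimises this combined objective so a single network learns all command outputs jointly, rather than training one regressor per task.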

1 result

2019

[pearson19multi-task] Multi-Task Regression-based Learning for Autonomous Unmanned Aerial Vehicle Flight Control within Unstructured Outdoor Environments (B.G. Maciel-Pearson, S. Akcay, A. Atapour-Abarghouei, C. Holder, T.P. Breckon), In Robotics and Automation Letters, IEEE, Volume 4, No. 4, pp. 4116-4123, 2019. Keywords: autonomous flight, deep learning, drones, regressive flight control, machine learning flight controller, simulation. [bibtex] [pdf] [doi] [arxiv] [demo] [software] [dataset]

Deep Neural Network Trail Navigation for Drones

Issue: Autonomous flight within a forest canopy represents a key challenge for generalised scene understanding on-board a future Unmanned Aerial Vehicle (UAV) platform.

Approach: this work presents an optimised deep neural network architecture, capable of state-of-the-art performance across varying resolution aerial UAV imagery, that improves forest trail detection for UAV guidance even when using significantly lower resolution images that are representative of low-cost search and rescue capable UAV platforms.

Application: we present an approach for automatic trail navigation within such an unstructured environment that successfully generalises across differing image resolutions - allowing UAV with varying sensor payload capabilities to operate equally in such challenging environmental conditions.

1 result

2018

[pearson18forest] Extending Deep Neural Network Trail Navigation for Unmanned Aerial Vehicle Operation within the Forest Canopy (B.G. Maciel-Pearson, P. Carbonneau, T.P. Breckon), In Proc. Towards Autonomous Robotic Systems Conference, Springer, pp. 147-158, 2018. Keywords: drone, deep learning, convolutional neural network, robot guidance, flight guidance, unmanned aerial vehicle, unmanned aerial system, monocular, pathway detection. [bibtex] [pdf] [doi] [demo] [dataset]

Autonomous Drone Flight in Cluttered Environments

Issue: Autonomous flight within a forest canopy represents a key challenge for generalised scene understanding on-board a future Unmanned Aerial Vehicle (UAV) platform.

Approach: We present an optimised deep neural network architecture, capable of state-of-the-art performance across varying resolution aerial UAV imagery, that provides forest trail detection for UAV guidance even when using significantly lower resolution images that are representative of low-cost search and rescue capable UAV platforms.

Application: Our approach successfully generalises across differing image resolutions - allowing UAV with varying sensor payload capabilities to operate equally in such challenging environmental conditions.

1 result

2017

[pearson17forest] An Optimised Deep Neural Network Approach for Forest Trail Navigation for UAV Operation within the Forest Canopy (B.G. Maciel-Pearson, T.P. Breckon), In Proc. Conf. on Robotics and Autonomous Systems - Robots that Work Among Us Workshop, UK Robotics and Autonomous Systems Network, pp. 1-3, 2017. Keywords: drone, deep learning, convolutional neural network, robot guidance, flight guidance, unmanned aerial vehicle, unmanned aerial system, monocular, pathway detection. [bibtex] [pdf] [demo] [poster] [more information]

Object Detection in Aerial UAV Imagery

Issue: UAV (drones) are increasingly being investigated for search, rescue and surveillance operations both on land and at sea. However, a key remaining problem is the manual analysis of the resulting video footage transmitted back from the UAV platform for "objects of interest" in the search operation.

Approach: Here we present a novel approach for the real-time automatic detection of people in thermal imagery and vehicles in colour imagery, using multiple trained classifiers under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection.

Application: Automated detection allows the autonomous search coverage of large areas using multiple platforms with minimal operator intervention, acting as a set of remote eyes capable of searching a given area from above.
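When multiple trained classifiers fire over the same image region, overlapping candidate detections are commonly merged by non-maximum suppression (NMS) - a standard post-processing step sketched below, and not necessarily the exact scheme used in these papers:

```python
def nms(detections, iou_threshold=0.5):
    # detections: list of (score, (x1, y1, x2, y2)) tuples. Keep boxes in
    # descending score order, dropping any box that overlaps an already
    # kept box by more than iou_threshold.
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0
    kept = []
    for score, box in sorted(detections, reverse=True):
        if all(iou(box, k[1]) < iou_threshold for k in kept):
            kept.append((score, box))
    return kept
```

This keeps at most one detection per physical object, which is one ingredient in keeping the false positive rate low when several classifiers respond to the same target.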

3 results

2013

[breckon13autonomous] Multi-Modal Target Detection for Autonomous Wide Area Search and Surveillance (T.P. Breckon, A. Gaszczak, J. Han, M.L. Eichner, S.E. Barnes), In Proc. SPIE Emerging Technologies in Security and Defence: Unmanned Sensor Systems, SPIE, Volume 8899, No. 01, pp. 1-19, 2013. Keywords: autonomous robots, grand challenge, wide area search, search and rescue, UAV, infrared, thermal. [bibtex] [pdf] [doi] [demo]

2011

[gaszczak11uavpeople] Real-time People and Vehicle Detection from UAV Imagery (A. Gaszczak, T.P. Breckon, J. Han), In Proc. SPIE Conference Intelligent Robots and Computer Vision XXVIII: Algorithms and Techniques, Volume 7878, No. 78780B, 2011. Keywords: UAV image analysis, people detection, aerial image analysis, infrared, thermal. [bibtex] [pdf] [doi] [demo] [poster]

2009

[breckon09uavvehicles] Autonomous Real-time Vehicle Detection from a Medium-Level UAV (T.P. Breckon, S.E. Barnes, M.L. Eichner, K. Wahren), In Proc. 24th Int. Conf. on Unmanned Air Vehicle Systems, pp. 29.1-29.9, 2009. Keywords: vehicle detection, UAV image analysis. [bibtex] [pdf] [demo]

Real-time Video Mosaicking

Issue: Improving situational awareness of the viewer by addressing aperture limitations, level of scene detail and information contextualization.

Approach: Feature-point-based image alignment, combining a state-of-the-art feature point detector with a robust statistical selection methodology.

Application: Image alignment performed by on-line frame-wise and global bundle adjustment, supported by hardware-accelerated visualization with quality enhancements and explicit task parallelism on modern CPU hardware.

Mosaic constructed solely from the input video with no additional camera meta-data.

Performance supported by novel use of a frame sieve to avoid high data redundancy and realisation of real-time inter-frame blending.
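The frame sieve idea - discard frames that contribute little new image content - can be sketched as follows, using per-frame horizontal translations as a hypothetical input from the alignment stage (this is an illustration of the concept, not the paper's implementation):

```python
def sieve_frames(offsets, max_overlap=0.8, frame_width=640):
    # offsets: per-frame horizontal camera translation in pixels (assumed
    # output of the feature-based alignment stage). Keep a frame only once
    # its overlap with the last kept frame drops below max_overlap, so the
    # mosaic is not flooded with near-duplicate imagery.
    kept = [0]
    for i, dx in enumerate(offsets[1:], start=1):
        shift = abs(dx - offsets[kept[-1]])
        overlap = max(0.0, 1.0 - shift / frame_width)
        if overlap < max_overlap:
            kept.append(i)
    return kept
```

With the defaults above, a frame is kept only after the camera has moved more than 20% of a frame width since the last kept frame, which bounds the redundancy fed into alignment and blending.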

2 results

2015

[breszcz15mosaic] Real-time Construction and Visualization of Drift-Free Video Mosaics from Unconstrained Camera Motion (M. Breszcz, T.P. Breckon), In IET J. Engineering, IET, Volume 2015, No. 16, pp. 1-12, 2015. Keywords: mosaic, mosaicking, mosiacing, real-time, visualization, graphics acceleration, blending. [bibtex] [pdf] [doi] [demo] [poster]

2011

[breszcz11uavmosaic] Real-time Mosaicing from Unconstrained Video Imagery for UAV Applications (M. Breszcz, T.P. Breckon, I. Cowling), In Proc. 26th Int. Conf. on Unmanned Air Vehicle Systems, pp. 32.1-32.8, 2011. Keywords: mosaic, mosaicking, mosiacing, real-time, visualization, UAV, in-flight. [bibtex] [pdf] [demo]