Various research video demos with links to available open access manuscripts, open source software and datasets.
Issue: lossy compression can cause a severe decline in the performance of deep Convolutional Neural Network (CNN) architectures, even when only mild compression is applied and the resulting compressed imagery is visually identical to the original.
Approach: we quantitatively evaluate the effect that increasing levels of lossy compression have upon the performance of characteristically diverse object detection architectures, with respect to the varying sizes of objects present in the dataset.
Application: the impact of lossy compression is more extreme at higher compression levels (15, 10, 5) across all CNN architectures considered.
However, re-training the CNN models on lossy compressed imagery notably ameliorates performance for all CNN models, with an average improvement of ∼76% (at the highest compression level, 5). Additionally, we demonstrate the relative sensitivity of differing object sizes (tiny, small, medium, large) with respect to the compression level.
|[bhowmik22compression]||Lost in Compression: the Impact of Lossy Image Compression on Variable Size Object Detection within Infrared Imagery, In Proc. Computer Vision and Pattern Recognition Workshops, IEEE, pp. 369-378, 2022. (Workshop on Perception Beyond the Visible Spectrum, to appear) Keywords: object detection, thermal, infrared, lossy image compression, compression artefacts.|
|[poyser20compression]||On the Impact of Lossy Image and Video Compression on the Performance of Deep Convolutional Neural Network Architectures, In Proc. Int. Conf. on Pattern Recognition, IEEE, pp. 2830-2837, 2020. Keywords: data compression, jpeg, mpeg, compression artefacts, CNN, deep learning, lossy compression.|
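The compression levels studied above can be reproduced with any standard JPEG encoder. The following is a minimal sketch (not the papers' evaluation code) that round-trips an image through lossy JPEG at decreasing quality levels and measures how far the decoded pixels drift from the original; the image and the error metric here are illustrative stand-ins.

```python
# Sketch: re-encode an image at decreasing JPEG quality levels and
# measure pixel deviation from the original (illustrative only).
import io

import numpy as np
from PIL import Image

def compress(image: Image.Image, quality: int) -> Image.Image:
    """Round-trip an image through lossy JPEG at the given quality (1-95)."""
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")

def mean_abs_error(a: Image.Image, b: Image.Image) -> float:
    """Mean absolute per-pixel difference between two images."""
    return float(np.mean(np.abs(np.asarray(a, dtype=np.float32)
                                - np.asarray(b, dtype=np.float32))))

if __name__ == "__main__":
    # Synthetic stand-in for a dataset image.
    original = Image.fromarray(
        (np.random.rand(64, 64, 3) * 255).astype(np.uint8), "RGB")
    # Lower quality -> larger deviation from the original pixels,
    # mirroring the compression levels (15, 10, 5) studied above.
    for quality in (95, 15, 10, 5):
        print(quality, mean_abs_error(original, compress(original, quality)))
```

Re-running a trained detector over imagery produced this way, level by level, is the basic shape of the evaluation described above.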
Issue: non-temporal real-time fire detection via reduced complexity deep CNN architectures.
Approach: experimentally define reduced-complexity deep convolutional neural network (CNN) architectures based on leading reference architectures.
Application: suited to embedded detection devices for edge computation.
Considers real-time performance for two discrete fire detection tasks: full-frame fire detection and in-frame localisation via superpixels.
Performance achieved up to 95% detection accuracy for full-frame fire detection and 97% accuracy for in-frame superpixel localisation, with processing at 40 fps (full-frame) or 18 fps (localisation).
|[thompson20fire]||Efficient and Compact Convolutional Neural Network Architectures for Non-temporal Real-time Fire Detection, In Proc. Int. Conf. on Machine Learning Applications, IEEE, pp. 136-141, 2020. Keywords: fire detection, CNN, deep-learning real-time, neural architecture search, nas, automl, non-temporal.|
|[samarth19fire]||Experimental Exploration of Compact Convolutional Neural Network Architectures for Non-temporal Real-time Fire Detection, In Proc. Int. Conf. on Machine Learning Applications, IEEE, pp. 653-658, 2019. Keywords: fire detection, CNN, deep-learning real-time, non-temporal.|
|[dunnings18fire]||Experimentally Defined Convolutional Neural Network Architecture Variants for Non-temporal Real-time Fire Detection, In Proc. Int. Conf. on Image Processing, IEEE, pp. 1558-1562, 2018. Keywords: fire detection, CNN, deep-learning real-time, non-temporal.|
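To illustrate what "reduced complexity" means for full-frame, non-temporal fire detection, here is a minimal PyTorch sketch of a compact binary fire/no-fire classifier. It is not one of the architectures from the papers above; the layer sizes are hypothetical, chosen only to show the shallow-and-narrow design point suited to edge devices.

```python
# Illustrative compact CNN for full-frame fire / no-fire classification
# (hypothetical layer sizes; not the architectures from the papers above).
import torch
import torch.nn as nn

class CompactFireNet(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            # Few filters and an early stride keep the compute budget small.
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(),
            # Global pooling avoids large fully-connected layers.
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # fire / no-fire logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = CompactFireNet()
    logits = model(torch.randn(1, 3, 224, 224))
    print(logits.shape)
```

The same classifier applied per superpixel, rather than per frame, is the basic shape of the in-frame localisation task.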
Issue: Real-time derivation of highly detailed visual saliency maps.
Approach: Division of Gaussians (DIVoG), comprising three distinct steps: (1) bottom-up Gaussian pyramid construction; (2) top-down Gaussian pyramid construction using the output of step 1; (3) element-wise division of the input image by the output of step 2.
Application: real-time performance, with DIVoG running 6x faster for 3-channel images and 16x faster for single-channel images than prior methods.
In contrast to other approaches DIVoG is not dependent on colour information (steps 1-3).
Overall, this approach (DIVoG) produces highly detailed real-time saliency maps at a fraction of the computational cost of the methodologies in earlier work. A wide range of real-time applications can benefit from this approach, including object detection and classification.
|[breckon13patent]||Image Processing (Real-time Visual Saliency by Division of Gaussians), Patent, Assignee: Cranfield University, WIPO, No. WO2013034878A2, 2013. (Filed: 2011-09-09, Published: 2013-03-14)|
|[katramados11salient]||Real-time Visual Saliency by Division of Gaussians, In Proc. Int. Conf. on Image Processing, IEEE, pp. 1741-1744, 2011. Keywords: salient, saliency, DoG.|
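The three DIVoG steps can be sketched as follows. This is a simplified reading of the pipeline as summarised above (pyramid depth, blur sigma, and the final ratio-to-saliency mapping are assumptions), not the patented implementation.

```python
# Sketch of the three DIVoG steps described above (simplified; the
# pyramid depth, sigma, and ratio mapping here are assumptions).
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def divog_saliency(image: np.ndarray, levels: int = 3) -> np.ndarray:
    img = image.astype(np.float64) + 1.0  # offset avoids division by zero
    # Step 1: bottom-up Gaussian pyramid (blur, then subsample).
    down = img
    for _ in range(levels):
        down = gaussian_filter(down, sigma=1.0)[::2, ::2]
    # Step 2: top-down pyramid, upsampling back to the input resolution.
    up = down
    for _ in range(levels):
        up = gaussian_filter(zoom(up, 2.0, order=1), sigma=1.0)
    up = up[: img.shape[0], : img.shape[1]]
    # Step 3: element-wise division; ratios far from 1 mark fine detail
    # that the coarse reconstruction failed to preserve.
    ratio = np.minimum(img / up, up / img)
    return 1.0 - ratio  # higher value => more salient

if __name__ == "__main__":
    gray = (np.random.rand(128, 128) * 255).astype(np.uint8)
    print(divog_saliency(gray).shape)
```

Because every step is a per-pixel blur, resample, or division, the cost is a small constant number of passes over the image, which is what makes the real-time performance claim above plausible on a single channel.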
Issue: Prior real-time fire detection work relies upon a combination of basic colour spectroscopy and temporal information with no explicit use of texture features.
Approach: (1) candidate fire region extraction via colour spectroscopy; followed by (2) per-region classification using a colour-texture feature vector extracted from the candidate regions isolated in step 1.
Application: the approach is shown to operate successfully over a range of environmental and combustion conditions. Maximal performance of 98% detection is achieved at 12 fps on (small) 360x288 resolution imagery.
The use of colour-texture features as an input to a trained classifier outperforms simple spectroscopy and is independent of temporal feature information. Two-stage spectroscopy-based region isolation followed by colour-texture feature classification limits the required texture statistic computation, facilitating use within real-time bounds.
|[chenebert11fire]||A Non-temporal Texture Driven Approach to Real-time Fire Detection, In Proc. Int. Conf. on Image Processing, IEEE, pp. 1781-1784, 2011. Keywords: fire detection, texture, real-time, non-temporal.|
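The two-stage pipeline above can be sketched as follows. The colour thresholds and region statistics here are illustrative stand-ins, not the paper's actual spectroscopy rules or texture features; the point is the structure: a cheap colour mask first, then feature extraction only over the surviving candidate regions.

```python
# Sketch of the two-stage pipeline: (1) crude colour spectroscopy to
# isolate candidates, (2) colour-texture statistics per region that a
# trained classifier would consume. Thresholds/features are illustrative.
import numpy as np

def candidate_fire_mask(rgb: np.ndarray) -> np.ndarray:
    """Stage 1: keep bright, red-dominant pixels as fire candidates."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 180) & (r > g) & (g > b)

def region_features(rgb: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stage 2 input: colour mean/spread statistics over a candidate region."""
    pixels = rgb[mask].astype(np.float32)
    if pixels.size == 0:
        return np.zeros(6, dtype=np.float32)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

if __name__ == "__main__":
    frame = np.zeros((32, 32, 3), dtype=np.uint8)
    frame[8:16, 8:16] = (255, 120, 20)  # synthetic flame-coloured patch
    mask = candidate_fire_mask(frame)
    print(int(mask.sum()), region_features(frame, mask).shape)
```

Restricting the (more expensive) texture computation to masked regions is what keeps the overall pipeline within the real-time bounds noted above.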
Issue: The automatic detection and classification of biological cells within histological tissue sample imagery is an important step in the development of automated microscopy procedures for the support of medical diagnosis and research. This task is primarily related to high-throughput cell screening and analysis microscopy applications.
Approach: our work has concentrated upon the cell detection problem in isolation, where we employ a novel combination of Laplace-edge features and a Support Vector Machine (SVM) classifier to achieve accurate and reliable cell detection.
Application: this feature-driven approach (trained on data at a single scale and single stain type) illustrates remarkable generalisation over a range of microscopy image scales and stain types (see above). The SVM is trained using a cross-validation based grid search within the SVM parameter space, with optimal results (~92% correct detection) achieved using an RBF-type SVM kernel.
|[han12cell]||The Application of Support Vector Machine Classification to Detect Cell Nuclei for Automated Microscopy, In Machine Vision and Applications, Springer, Volume 23, No. 1, pp. 15-24, 2012. Keywords: cell nuclei detection, automated microscopy, support vector machines.|
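The classifier training step described above (RBF-kernel SVM tuned by cross-validated grid search) can be sketched with scikit-learn. The feature vectors below are synthetic stand-ins for the Laplace-edge features, and the parameter grid is an assumption; only the overall tuning procedure matches the description.

```python
# Sketch: RBF-kernel SVM tuned by cross-validated grid search over
# (C, gamma), as in the detection approach above. The features are
# synthetic stand-ins for Laplace-edge vectors.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical per-window feature vectors: class 1 = "nucleus present",
# class 0 = background, made linearly-ish separable for the demo.
X = np.vstack([rng.normal(0.0, 1.0, (100, 8)),
               rng.normal(2.0, 1.0, (100, 8))])
y = np.array([0] * 100 + [1] * 100)

# Grid search within the SVM parameter space using cross-validation.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.1]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

The selected `(C, gamma)` pair is then used to train the final detector on the full training set, the standard practice this tuning loop assumes.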