Various research video demos with links to available open access manuscripts, open source software and datasets.

U3DS$^3$: Unsupervised 3D Semantic Scene Segmentation

Issue: Contemporary point cloud segmentation approaches rely on supervised learning, requiring richly annotated 3D data that is both time-consuming and challenging to obtain.

Approach: We present U3DS$^3$ as a step towards completely unsupervised point cloud segmentation for any holistic 3D scenes.

Application: U3DS$^3$ provides a generalized unsupervised segmentation method for both objects and background across both indoor and outdoor static 3D point clouds, with no requirement for model pre-training, leveraging only the inherent information of the point cloud to achieve full 3D scene segmentation.

The initial step of our proposed approach involves generating superpoints based on the geometric characteristics of each scene. Subsequently, it undergoes a learning process through a spatial clustering-based methodology, followed by iterative training using pseudo-labels generated in accordance with the cluster centroids, together with dual invariance and equivariance representation learning.
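
The clustering-and-pseudo-label step above can be sketched in miniature. The following is a hypothetical NumPy sketch of generic centroid-based pseudo-labelling, not the authors' implementation (which additionally trains a network against the pseudo-labels with invariance/equivariance losses):

```python
import numpy as np

def iterative_pseudo_labels(features, n_clusters=4, n_iters=5, seed=0):
    """Illustrative sketch: cluster per-point features, assign each point a
    pseudo-label from its nearest centroid, then update centroids; in the
    full method a segmentation network is trained on these labels."""
    rng = np.random.default_rng(seed)
    # initialise centroids from randomly chosen points
    centroids = features[rng.choice(len(features), n_clusters, replace=False)]
    for _ in range(n_iters):
        # pseudo-labels: nearest centroid per point (L2 distance)
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # update centroids from the current assignment (skip empty clusters)
        for k in range(n_clusters):
            if (labels == k).any():
                centroids[k] = features[labels == k].mean(axis=0)
    return labels, centroids
```

In the real pipeline the "points" would be superpoint-level features rather than raw coordinates, and the loop would alternate with network training.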

1 result


[liu24U3DS] U3DS$^{3}$: Unsupervised 3D Semantic Scene Segmentation (J. Liu, Z. Yu, T.P. Breckon, H.P.H. Shum), In Proc. Winter Conference on Applications of Computer Vision, IEEE, pp. 3759-3768, 2024. Keywords: scene segmentation, 3D point cloud, semantic scene understanding, 3D segmentation. [bibtex] [pdf] [arxiv] [demo] [poster]

Synthesising 3D Computed Tomography from 2D X-ray

Issue: Generating 3D images of complex objects conditionally from a few 2D views is a difficult synthesis problem, compounded by issues such as domain gap and geometric misalignment.

Approach: A simple and novel 2D-to-3D synthesis approach based on conditional diffusion with vector-quantized codes.

Application: We generate the 3D codes for CT images conditioned on previously generated 3D codes and the entire codebook of two 2D X-ray views.

Qualitative and quantitative results demonstrate state-of-the-art performance over specialized methods across varied evaluation criteria, including fidelity metrics such as density and coverage, and distortion metrics, for two datasets of complex volumetric imagery found in real-world scenarios.

1 result


[corona23vqcdt] Unaligned 2D to 3D Translation with Conditional Vector-Quantized Code Diffusion using Transformers (A. Corona-Figueroa, S. Bond-Taylor, N. Bhowmik, Y.F.A. Gaus, T.P. Breckon, H.P.H. Shum, C.G. Willcocks), In Proc. Int. Conf. Computer Vision, IEEE/CVF, pp. 14585-14594, 2023. Keywords: X-ray to CT translation, baggage security, X-ray security. [bibtex] [pdf] [doi] [arxiv] [demo] [software] [poster] [more information]

Multiple Interactive Hand Recovery in 3D from Video

Issue: Reconstructing two hands from monocular RGB images is challenging due to frequent occlusion and mutual confusion.

Approach: Our Attention Collaboration-based Regressor (ACR) makes the first attempt to reconstruct hands in arbitrary scenarios.

Application: Our method significantly outperforms the best interacting-hand approaches, while yielding performance comparable with state-of-the-art single-hand methods.

Qualitative results on in-the-wild and hand-object interaction datasets and web images/videos further demonstrate the effectiveness of our approach for arbitrary hand reconstruction.

1 result


[yu23hands] ACR: Attention Collaboration-based Regressor for Arbitrary Two-Hand Reconstruction (Z. Yu, S. Huang, C. Fang, T.P. Breckon, J. Wang), In Proc. Computer Vision and Pattern Recognition, IEEE/CVF, pp. 12955-12964, 2023. Keywords: hand tracking, hand interaction, dual-hand, 3D hand tracking, 3D hand detection. [bibtex] [pdf] [doi] [arxiv] [demo] [software] [poster] [more information]

Exact-NeRF: Precise Volumetric Parameterization for Neural Radiance Fields

Issue: Neural Radiance Fields (NeRF) can synthesize novel scene views with great accuracy. However, inherent to their underlying formulation, the sampling of points along a ray with zero width may result in ambiguous representations that lead to rendering artifacts such as aliasing in the final scene.

Approach: We explore the use of an exact approach for calculating the Integrated Positional Encoding (IPE) by using a pyramid-based integral formulation instead of an approximated conical-based one.
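
For context, the conical-frustum IPE that this work replaces is usually stated (in the wider mip-NeRF literature; reproduced here from general background, not from this paper) as a positional encoding attenuated by a Gaussian fit to the frustum:

```latex
% Approximate IPE over a conical frustum modelled as a Gaussian (\mu, \Sigma):
\gamma(\mu, \Sigma) = \Big\{
  \sin\!\big(2^{\ell}\mu\big)\, e^{-2^{2\ell-1}\operatorname{diag}(\Sigma)},\;
  \cos\!\big(2^{\ell}\mu\big)\, e^{-2^{2\ell-1}\operatorname{diag}(\Sigma)}
\Big\}_{\ell=0}^{L-1}
% Exact-NeRF instead evaluates the underlying mean encoding in closed form,
%   \gamma^{*}(V) = \tfrac{1}{|V|} \int_{V} \gamma(\mathbf{x})\, d\mathbf{x},
% over a pyramidal frustum V, avoiding the Gaussian approximation.
```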

Application: Our exploratory work illustrates that such an exact formulation, Exact-NeRF, provides a natural extension to more challenging scenarios, such as unbounded scenes, without further modification.

Our contribution aims both to address the hitherto unexplored issue of frustum approximation in earlier NeRF work and to provide insight into the potential consideration of analytical solutions in future NeRF extensions.

1 result


[isaac23exact] Exact-NeRF: An Exploration of a Precise Volumetric Parameterization for Neural Radiance Fields (B.K.S. Isaac-Medina, C.G. Willcocks, T.P. Breckon), In Proc. Computer Vision and Pattern Recognition, IEEE/CVF, pp. 66-75, 2023. Keywords: NeRF, neural scene representation, radiance fields, Exact-NeRF. [bibtex] [pdf] [doi] [arxiv] [demo] [software] [poster] [more information]

Depth Filling within Constrained RGB-D Image Completion

Issue: The problem of hole filling in depth images, obtained from either active or stereo sensing, for the purposes of depth image completion in an exemplar-based framework.

Approach: Using both color (RGB) and depth (D) information available from a commonplace RGB-D image, the proposed method explicitly modifies the patch prioritization term used for target patch ordering to facilitate improved propagation of complex texture and linear structures within depth completion.
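
The patch prioritization being modified follows the exemplar-based inpainting tradition of Criminisi et al., where (stated from general background, not from this paper) the priority of a patch $\Psi_p$ centred on a fill-front pixel $p$ is the product of a confidence term and a data term:

```latex
P(p) = C(p)\, D(p), \qquad
C(p) = \frac{\sum_{q \in \Psi_p \cap (I - \Omega)} C(q)}{|\Psi_p|}, \qquad
D(p) = \frac{\lvert \nabla I_p^{\perp} \cdot \mathbf{n}_p \rvert}{\alpha}
```

where $\Omega$ is the missing region, $\mathbf{n}_p$ the fill-front normal and $\alpha$ a normalisation constant; the proposed method adapts this ordering to favour the propagation of depth structure.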

Application: Most existing exemplar-based inpainting techniques, designed for color image completion, do not perform well on depth information where object boundaries are obstructed or surrounded by missing regions.

Furthermore, the query space in the source region is constrained to increase the efficiency of the approach compared to other exemplar-driven methods.

Evaluations demonstrate the efficacy of the proposed method compared to other contemporary completion techniques.

1 result


[abarghouei18patch] Extended Patch Prioritization For Depth Hole Filling Within Constrained Exemplar-Based RGB-D Image Completion (A. Atapour-Abarghouei, T.P. Breckon), In Proc. Int. Conf. Image Analysis and Recognition, Springer, pp. 306-314, 2018. (Best Paper Award) Keywords: depth filling, RGB-D, surface relief, hole filling, surface completion, 3D texture, depth completion, depth map, disparity hole filling. [bibtex] [pdf] [doi] [demo]

Fourier Based 3D Hole Filling in RGB-D Imagery

Issue: The problem of hole filling in RGB-D (color and depth) images, obtained from either active or stereo-based sensing.

Approach: Depth completion is performed independently on the low-frequency depth information (surface shape) and the high-frequency depth detail (relief), by way of a Fourier space transform and classical Butterworth high/low-pass filtering. High-frequency detail is then filled using a texture synthesis method, whilst the low-frequency shape information is inpainted using structural inpainting.
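
The frequency split described above can be sketched as follows (a minimal NumPy sketch of the general Fourier/Butterworth decomposition idea, not the paper's code; `cutoff` and `order` are illustrative parameters):

```python
import numpy as np

def butterworth_split(depth, cutoff=0.1, order=2):
    """Separate a depth map into low-frequency shape and high-frequency
    relief via a 2D FFT and a Butterworth low-pass response."""
    h, w = depth.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    r = np.sqrt(fx**2 + fy**2)                            # radial frequency
    lowpass = 1.0 / (1.0 + (r / cutoff) ** (2 * order))   # Butterworth response
    F = np.fft.fft2(depth)
    low = np.real(np.fft.ifft2(F * lowpass))              # surface shape
    high = depth - low                                    # surface relief detail
    return low, high
```

Each component would then be completed separately, structural inpainting for `low` and texture synthesis for `high`, before recombination by simple addition.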

Application: To improve the overall depth relief (D) and edge detail accuracy, color information (RGB) is also used to constrain the sampling process within high frequency component completion.

Experimental results demonstrate the efficacy of the proposed method, outperforming prior work for generalized depth filling in the presence of high-frequency surface relief detail.

1 result


[abarghouei16filling] Back to Butterworth - a Fourier Basis for 3D Surface Relief Hole Filling within RGB-D Imagery (A. Atapour-Abarghouei, G. Payen de La Garanderie, T.P. Breckon), In Proc. Int. Conf. on Pattern Recognition, IEEE, pp. 2813-2818, 2016. Keywords: depth filling, RGB-D, surface relief, Fourier, DFT, hole filling, surface completion, frequency domain, 3D texture, depth completion, query expansion, depth map, texture synthesis, disparity hole filling, Butterworth filtering. [bibtex] [pdf] [doi] [demo]

Cross-spectral Stereo Imaging

Issue: Robust scene depth recovery from cross-spectral (i.e. optical/thermal) stereo imagery.

Approach: Unsigned HOG descriptors are efficiently computed and L2 normalized. Pixel matching is then performed using L1 distance comparison.

Strong optimization approaches provide improved depth, with dynamic programming (DP) and semi-global matching (SGM) providing usable results within reasonable computational bounds.
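
The descriptor-and-cost step can be sketched in NumPy (an illustrative simplification, not the paper's implementation; bin count and window size are assumed values):

```python
import numpy as np

def dense_hog(img, n_bins=6, window=2):
    """Per-pixel unsigned gradient orientation histograms, accumulated over
    a local window and L2-normalised (a sketch of dense HOG-style features)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned: [0, pi)
    bins = np.minimum((ori / np.pi * n_bins).astype(int), n_bins - 1)
    h, w = img.shape
    desc = np.zeros((h, w, n_bins))
    desc[np.arange(h)[:, None], np.arange(w)[None, :], bins] = mag
    # accumulate histograms over a (2*window+1)^2 neighbourhood (zero padded)
    p = np.pad(desc, ((window, window), (window, window), (0, 0)))
    acc = np.zeros_like(desc)
    for dy in range(2 * window + 1):
        for dx in range(2 * window + 1):
            acc += p[dy:dy + h, dx:dx + w]
    norm = np.linalg.norm(acc, axis=-1, keepdims=True)  # L2 normalisation
    return acc / np.maximum(norm, 1e-8)

def l1_matching_cost(desc_a, desc_b):
    """Per-pixel L1 distance between corresponding descriptors."""
    return np.abs(desc_a - desc_b).sum(axis=-1)
```

In a stereo pipeline this cost would be evaluated across candidate disparities and fed to the DP/SGM optimizer.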

Application: Full scene depth recovery comparable in quality to that of standard optical stereo is recovered from the same scene.

This has significant applications in autonomous surveillance where dual optical thermal sensors are commonly deployed for a range of tasks to provide robust day/night sensing.

1 result


[pinggera12crossspectral] On Cross-Spectral Stereo Matching using Dense Gradient Features (P. Pinggera, T.P. Breckon, H. Bischof), In Proc. British Machine Vision Conference, BMVA, pp. 526.1-526.12, 2012. Keywords: stereo vision, thermal, multimodal stereo, thermal stereo, IR stereo, optical thermal stereo. [bibtex] [pdf] [doi] [demo] [poster]

3D Surface Completion

Issue: The main problem with real-world 3D capture is that common capture techniques are only 2.5D in nature, such that back-facing and occluded object portions cannot be realised from a single capture. We present a novel method of automated 3D completion to facilitate the full 3D realisation of a given 3D scene from a single 2.5D uni-directional capture.

Approach: Initial 3D shape fitting allows the recovery of the underlying surface geometry of the scene. Next, an adaptation of non-parametric sampling from 2D image processing allows the realistic completion of localised 3D surface texture (relief) to produce an overall realistic automated completion. This allows the completion of 3D scenes whilst avoiding the need for costly and laborious multi-directional 3D capture.
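
The non-parametric sampling idea can be illustrated with a 2D analogue (the papers operate on 3D vertex neighbourhoods; this Efros-Leung style sketch is illustrative only, and patch size is an assumed parameter):

```python
import numpy as np

def nonparametric_fill(source, target, mask, patch=3):
    """Fill each missing pixel by copying from the source location whose
    neighbourhood best matches the known part of the target neighbourhood
    under a masked L2 distance."""
    r = patch // 2
    out = np.pad(target.astype(float), r)
    known = np.pad(~mask, r)                       # padding counts as unknown
    src = source.astype(float)
    sh, sw = src.shape
    centres = [(y, x) for y in range(r, sh - r) for x in range(r, sw - r)]
    patches = np.array([src[y - r:y + r + 1, x - r:x + r + 1] for y, x in centres])
    for y, x in zip(*np.where(mask)):
        y, x = y + r, x + r                        # shift into padded frame
        nb = out[y - r:y + r + 1, x - r:x + r + 1]
        kn = known[y - r:y + r + 1, x - r:x + r + 1]
        # masked L2 distance over the known part of the neighbourhood
        d = (((patches - nb) ** 2) * kn).sum(axis=(1, 2))
        cy, cx = centres[int(d.argmin())]
        out[y, x] = src[cy, cx]
        known[y, x] = True                         # filled pixels become known
    return out[r:-r, r:-r]
```

The 3D method replaces pixel neighbourhoods with local vertex geometry and grows the completion from the hole boundary rather than in raster order.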

Application: Realistic 3D surface completions, visually and statistically similar to the original, are achievable on both natural and man-made surfaces. Recent advances include the use of hierarchical surfaces to overcome the rear joining artifacts apparent in some examples.

Further advances in our 3D completion technique include an extension combining 3D surface shape and colour (tree bark and circuit board examples).

4 results


[breckon12completion] A Hierarchical Extension to 3D Non-parametric Surface Relief Completion (T.P. Breckon, R.B. Fisher), In Pattern Recognition, Elsevier, Volume 45, pp. 172-185, 2012. Keywords: relief completion, amodal completion, volume completion, visual completion, perceptual completion, surface completion. [bibtex] [pdf] [doi] [demo]


[breckon08completion] 3D Surface Relief Completion Via Non-parametric Techniques (T.P. Breckon, R.B. Fisher), In IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE, Volume 30, No. 12, pp. 2249-2255, 2008. Keywords: 3D relief completion, amodal completion, volume completion, visual completion, perceptual completion, visual propagation, 3D surface completion. [bibtex] [pdf] [doi] [demo]


[breckon05colour] Plausible 3D Colour Surface Completion using Non-parametric Techniques (T.P. Breckon, R.B. Fisher), In Proc. Mathematics of Surfaces XI, Springer-Verlag, Volume 3604, pp. 102-120, 2005. Keywords: 3D colour completion, 3D colour synthesis, texture synthesis on surfaces, colour relief synthesis, displacement synthesis, context-based completion. [bibtex] [pdf] [doi] [demo]


[breckon05nonparametric] Non-parametric 3D Surface Completion (T.P. Breckon, R.B. Fisher), In Proc. Fifth Int. Conf. on 3D Digital Imaging and Modeling, IEEE, pp. 573-580, 2005. Keywords: visual completion, model completion, contextual completion, 3D completion, context-based completion, range data, 3D mesh, occlusion resolution, displacement mapping, 3D texture synthesis, geometric texture synthesis, 3D relief synthesis, surface completion. [bibtex] [pdf] [doi] [demo] [poster]

Adding 3D Mesh Detail from 2D Textures

Issue: Many common low-cost 3D capture approaches produce surface models with limited 3D detail despite the additional capture of a detailed 2D colour texture map.

Approach: To enhance the level of 3D surface relief detail using additional surface detail propagated from analysis of the corresponding 2D colour texture map.

Application: The resulting 2D colour texture map for the surface often contains considerably more localised surface relief detail than the actual underlying 3D shape mesh.

The key edge information present in the texture map can be extracted via a two-stage process of Laplacian of Gaussian (LoG) edge enhancement and subsequent edge detail extraction.

The resulting 2D edge information is then transposed as a displacement map onto the 3D surface, resulting in increased 3D relief detail in the underlying 3D shape mesh.

1 result


[desile08meshdetail] 3D Colour Mesh Detail Enhancement Driven from 2D Texture Edge Information (Q. Desile, T.P. Breckon), In Proc. 5th European Conference on Visual Media Production, IET, pp. SP-4, 2008. Keywords: geometric texture synthesis, texture transfer, 3D texture, surface relief, 3D mesh, surface texture, bump mapping by example, displacement mapping, displacement synthesis. [bibtex] [pdf] [doi] [demo] [poster]

Considering Video as a Volume

Issue: We can stack individual video frames to create a volumetric representation akin to medical imaging (below, right).

Approach: In this approach we use techniques from medical imaging to create a novel visualisation of conventional video data.

Application: Combining colour mapping and varying opacity in relation to greyscale intensity facilitates a unique 3D visualisation, in this case of isolated exhaust gases correlated in space (x, y) and time (t) from the 1969 Apollo 11 launch (video).

If we consider a video of a conventional house fire we can create a true colour volume with added opacity to allow the visualisation of the true temporal acceleration of the smoke as it fills the room.

This technique shows a novel volumetric way to visualise complex temporal change in video sequences.
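
The core construction is simple to sketch (an illustrative NumPy sketch of the stacking and intensity-to-opacity idea; a real renderer would map the volume through a configurable transfer function):

```python
import numpy as np

def frames_to_volume(frames):
    """Stack greyscale video frames into an (x, y, t)-indexed volume and
    derive a per-voxel opacity from normalised intensity."""
    vol = np.stack([np.asarray(f, dtype=float) for f in frames], axis=-1)
    opacity = vol / max(vol.max(), 1e-8)     # brighter voxels -> more opaque
    return vol, opacity
```

The resulting volume can then be passed to any volumetric renderer in place of a medical scan.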

1 result


[flitton07volume] Considering Video as a Volume (G.T. Flitton, T.P. Breckon), In Proc. 4th European Conference on Visual Media Production, IET, pp. II-7, 2007. Keywords: volumetric video, 3D video, space-time video, spatio-temporal video. [bibtex] [pdf] [doi] [demo] [poster]

Very low cost Virtual Reality on standard PC hardware

Issue: Traditional VR approaches are beyond the reach of many potential users (e.g. schools) due to the prohibitive cost of the associated specialist equipment required (as of 2011).

Approach: By combining the established multi-view separation approach of anaglyph stereo with the available redundant graphics capacity in contemporary PC hardware we facilitate the real-time, interactive projective display of 3D content (i.e. VR display) without the use of any expensive specialist equipment (as of 2011).

Application: Despite the quality of state-of-the-art systems, a reasonable VR effect can still be achieved on a conventional PC display by computing both left and right stereo views in separate graphics rendering pipelines and combining them into a single display, with spatial view separation carried out using red/green spectral colour filtering. The viewer can then use standard, low-cost (non-active) "red/green" glasses to view the resulting display, enabling "VR on the cheap" to be realised.

1 result


[breckon11vr] Realizing Perceptive Virtual Reality Imaging Applications on Conventional PC Hardware (T.P. Breckon, K.W. Jenkins, P. Sonkoly), In Imaging Science Journal, Maney, Volume 59, No. 1, pp. 1-7, 2011. Keywords: virtual reality, projective image display, 3D anaglyph stereo, low cost. [bibtex] [pdf] [doi] [demo]

Geometric Texturing by Example

Issue: Realistic geometric texturing of high-frequency detail on 3D surfaces remains difficult and time-consuming.

Approach: Automatic extraction and transfer of 3D geometric texture from captured real-world surfaces to artificial surfaces using a combination of geometric surface fitting and 3D non-parametric sampling.

Application: Featureless surfaces can be readily generated from modern 3D authoring tools, but generating realistic and detailed surface relief presents a more difficult problem.

Increasingly available and accurate 3D capture tools such as laser range scanning and stereo camera rigs allow the rapid capture of 3D surface detail as part of a surface mesh representation.

Extending non-parametric sampling to 3D allows the iterative comparison of local vertex geometry, which in turn facilitates growing consistent 3D texture outwards from an initial seed patch on the target surface, following a marching-front approach.

1 result


[breckon06transfer] Direct Geometric Texture Synthesis and Transfer on 3D Meshes (T.P. Breckon, R.B. Fisher), In Proc. 3rd European Conference on Visual Media Production, IET, pp. 186, 2006. Keywords: geometric texture synthesis, texture transfer, 3D texture, surface relief, 3D mesh, surface texture, bump mapping by example, displacement mapping, displacement synthesis. [bibtex] [pdf] [doi] [demo] [poster]