
Chung-Ang University Researchers Reveal MoBluRF: A Framework for Creating Sharp 4D Reconstructions from Blurry Videos

Researchers develop new framework that can create sharp neural radiance fields from blurry monocular videos

SEOUL, South Korea, Sept. 19, 2025 /PRNewswire/ — Neural Radiance Fields (NeRF) is a fascinating technique that creates three-dimensional (3D) scene representations from two-dimensional (2D) images captured from different angles. It trains a deep neural network to predict the color and density at any point in 3D space by casting imaginary light rays from the camera through each pixel of every input image, sampling points along those rays, and feeding each point's 3D coordinates and viewing direction to the network. NeRF then reconstructs the scene in 3D and renders it from entirely new perspectives, a process called novel view synthesis (NVS).
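The ray-casting step described above can be sketched in a few lines of NumPy. This is a minimal, illustrative version of standard NeRF volume rendering, not code from the paper; `nerf_mlp` stands in for the trained network and its signature is an assumption.

```python
import numpy as np

def render_ray(origin, direction, nerf_mlp, near=2.0, far=6.0, n_samples=64):
    """Composite the color of one pixel by volume rendering along its camera ray.

    `nerf_mlp` is a placeholder for the trained network: given sample points
    and a viewing direction, it returns per-point RGB colors and densities.
    """
    t = np.linspace(near, far, n_samples)            # depths sampled along the ray
    pts = origin + t[:, None] * direction            # 3D coordinates of the samples
    rgb, sigma = nerf_mlp(pts, direction)            # predicted color and density
    delta = np.diff(t, append=far)                   # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)             # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # light surviving so far
    weights = alpha * trans                          # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)      # expected pixel color
```

Repeating this for every pixel of a virtual camera produces a rendered image from a viewpoint never seen during capture, which is what novel view synthesis means in practice.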

NeRF can also work with videos, with each frame of the video treated as a static image. However, existing methods are highly sensitive to video quality. Monocular videos from phones or drones inevitably suffer from motion blur due to fast object motion or camera shake, making sharp, dynamic NVS difficult. This is because most existing deblurring-based NVS methods are designed for static multi-view images and therefore overlook global camera motion and local object motion. Furthermore, blurry videos often lead to inaccurate camera pose estimations and loss of geometric precision.

To address these issues, a research team jointly led by Assistant Professor Jihyong Oh from the Graduate School of Advanced Imaging Science (GSIAM) at Chung-Ang University (CAU), Korea, and Professor Munchurl Kim from Korea Advanced Institute of Science and Technology (KAIST), Korea, along with Mr. Minh-Quan Viet Bui, and Mr. Jongmin Park, developed MoBluRF, a two-stage motion deblurring method for NeRFs. “Our framework can reconstruct sharp 4D scenes, enabling NVS from blurry monocular videos using motion decomposition, while avoiding mask supervision, significantly advancing the NeRF field,” explains Dr. Oh. Their study was made available online on May 28, 2025, and was published in Volume 47, Issue 09 of IEEE Transactions on Pattern Analysis and Machine Intelligence on September 01, 2025.

MoBluRF consists of two main stages: Base Ray Initialization (BRI) and Motion Decomposition-based Deblurring (MDD). Existing deblurring-based NVS methods predict the hidden sharp light rays in blurry images, called latent sharp rays, by transforming a base ray. However, directly using the input rays of blurry images as base rays can lead to inaccurate predictions. BRI addresses this issue by roughly reconstructing dynamic 3D scenes from blurry videos and refining the initialization of the base rays from imprecise camera rays.

Next, the MDD stage uses these base rays to accurately predict latent sharp rays through Incremental Latent Sharp-rays Prediction (ILSP). ILSP incrementally decomposes motion blur into global camera motion and local object motion, improving deblurring accuracy. MoBluRF also introduces two novel loss functions: one separates static and dynamic regions without requiring motion masks, and the other improves the geometric accuracy of dynamic objects, areas where previous methods struggled.
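The blur formation model that such latent-sharp-ray methods rely on can be sketched as follows: a blurry pixel is approximated as the average of colors rendered along several predicted sharp rays. This is a generic, hedged illustration of that idea, not the authors' implementation; `predict_latent_rays` and `render_ray` are hypothetical helpers standing in for the learned ray predictor and the NeRF renderer.

```python
import numpy as np

def blurry_color(base_origin, base_dir, predict_latent_rays, render_ray):
    """Model a motion-blurred pixel as the mean of sharp renderings.

    `predict_latent_rays` stands in for a learned module that maps one
    base ray to several latent sharp rays (e.g., accounting for camera
    and object motion during the exposure); `render_ray` stands in for
    the NeRF renderer. Both names are illustrative, not from the paper.
    """
    latent = predict_latent_rays(base_origin, base_dir)  # list of (origin, dir) pairs
    colors = [render_ray(o, d) for o, d in latent]       # sharp color per latent ray
    return np.mean(colors, axis=0)                       # averaging reproduces the blur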

Consequently, MoBluRF quantitatively and qualitatively outperforms state-of-the-art methods by significant margins on various datasets, while remaining robust to varying degrees of blur.

“By enabling deblurring and 3D reconstruction from casual handheld captures, our framework enables smartphones and other consumer devices to produce sharper and more immersive content,” remarks Dr. Oh. “It could also help create crisp 3D models from shaky museum footage, improve scene understanding and safety for robots and drones, and reduce the need for specialized setups in virtual and augmented reality.”

MoBluRF enables high-quality 3D reconstructions from ordinary blurry videos, marking a new direction for NeRFs.

Reference

Title of original paper:

MoBluRF: Motion Deblurring Neural Radiance Fields for Blurry Monocular Video

Journal:

IEEE Transactions on Pattern Analysis and Machine Intelligence

DOI:

10.1109/TPAMI.2025.3574644

About Chung-Ang University

Website: https://neweng.cau.ac.kr/index.do

Media Contact:

Sungki Shin
02-820-6614
401430@email4pr.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/chung-ang-university-researchers-reveal-moblurf-a-framework-for-creating-sharp-4d-reconstructions-from-blurry-videos-302560565.html

SOURCE Chung-Ang University
