CVPR 2026

GP-4DGS: Probabilistic 4D Gaussian Splatting
from Monocular Video
with Variational Gaussian Processes

Mijeong Kim1 ·  Jungtaek Kim2 ·  Bohyung Han1
1ECE & IPAI, Seoul National University, Korea ·  2University of Wisconsin–Madison, USA
Keywords: Probabilistic 4DGS, Variational Gaussian Processes, Uncertainty Quantification, Temporal Extrapolation

Figure 1. Unlike existing deterministic approaches, GP-4DGS enables robust uncertainty quantification, future motion prediction, and prior estimation for unobserved regions.

01

Abstract

We present GP-4DGS, a novel framework that integrates Gaussian Processes (GPs) into 4D Gaussian Splatting (4DGS) for principled probabilistic modeling of dynamic scenes. While existing 4DGS methods focus on deterministic reconstruction, they are inherently limited in capturing motion ambiguity and lack mechanisms to assess prediction reliability.

By leveraging the kernel-based probabilistic nature of GPs, our approach introduces three key capabilities: (i) uncertainty quantification for motion predictions, (ii) motion estimation for unobserved or sparsely sampled regions, and (iii) temporal extrapolation beyond observed training frames. To scale GPs to the large number of Gaussian primitives in 4DGS, we design spatio-temporal kernels that capture the correlation structure of deformation fields and adopt variational Gaussian Processes with inducing points for tractable inference. Our experiments show that GP-4DGS enhances reconstruction quality while providing reliable uncertainty estimates that effectively identify regions of high motion ambiguity.

02

Key Contributions

🔬
First Probabilistic 4DGS Framework
We introduce GP-4DGS, the first method to integrate Gaussian Processes into 4D Gaussian Splatting, bringing principled probabilistic motion modeling to dynamic scene reconstruction.
Three New Capabilities for 4DGS
Our framework unlocks three capabilities entirely absent in existing deterministic 4DGS approaches.
🎯
Uncertainty Quantification
Identifies regions of high motion ambiguity via GP variance maps.
🚀
Future Motion Prediction
Forecasts motion beyond training frames using the periodic temporal kernel.
🌍
Unobserved Region Prior
Propagates motion from well-observed to sparse or occluded primitives.
📈
Improved Reconstruction Quality
GP-4DGS consistently outperforms state-of-the-art baselines on the DyCheck benchmark, with gains especially pronounced in sparsely observed and challenging scenes.
03

Method

1
Composite Spatio-temporal Kernel (Core Design)
We sum a spatial Matérn kernel — capturing geometric smoothness among nearby primitives — with a per-axis periodic temporal kernel for cyclic motion patterns. Matérn is chosen over RBF to handle discontinuities between spatially separate objects, enabling more faithful modeling of real-world dynamics.
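The composite kernel above can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's implementation: the hyperparameters are placeholder values, and a single shared temporal kernel stands in for the per-axis periodic kernel.

```python
import numpy as np

def matern32(X1, X2, lengthscale=1.0):
    """Matern-3/2 spatial kernel: smooth, yet more tolerant of sharp
    transitions between spatially separate objects than the RBF."""
    d = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    s = np.sqrt(3.0) * d / lengthscale
    return (1.0 + s) * np.exp(-s)

def periodic(t1, t2, period=1.0, lengthscale=0.5):
    """Periodic temporal kernel for cyclic motion patterns."""
    d = np.abs(t1[:, None] - t2[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

def spatio_temporal(X1, t1, X2, t2):
    """Composite kernel: sum of the spatial and temporal components."""
    return matern32(X1, X2) + periodic(t1, t2)

# Sanity check on a toy batch of primitive centers and timestamps.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
t = rng.uniform(0.0, 1.0, size=8)
K = spatio_temporal(X, t, X, t)
print(K.shape, np.allclose(K, K.T))  # (8, 8) True
```

Summing (rather than multiplying) the two components lets a primitive correlate with others that are nearby in space *or* aligned in temporal phase, and the sum of two valid kernels is itself a valid positive semi-definite kernel.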
2
Variational Inference with Inducing Points (Scalability)
Exact GP inference is O(N³), intractable for tens of thousands of primitives. We use M inducing points (M ≪ N) initialized via Chronos-based trajectory clustering, reducing complexity to O(NM² + M³) during training and O(M) per query at inference time.
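The complexity argument can be made concrete with a minimal sparse-GP sketch on 1-D toy data. Everything here is an assumption for illustration: random subsampling stands in for the Chronos-based trajectory clustering, and an RBF kernel replaces the composite kernel for brevity.

```python
import numpy as np

def rbf(A, B, lengthscale=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

# N training points, M << N inducing points.
rng = np.random.default_rng(1)
N, M = 2000, 30
X = rng.uniform(-3.0, 3.0, size=(N, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=N)
Z = X[rng.choice(N, M, replace=False)]     # inducing inputs (toy initialization)

noise = 0.05 ** 2
Kmm = rbf(Z, Z) + 1e-6 * np.eye(M)          # O(M^2) storage
Kmn = rbf(Z, X)                             # O(NM)
# Posterior over inducing outputs: the expensive step is O(NM^2 + M^3),
# never the O(N^3) cost of exact inference.
A = Kmm + Kmn @ Kmn.T / noise
m_u = Kmm @ np.linalg.solve(A, Kmn @ y) / noise

# A query touches only the M inducing points: O(M) per test input
# once solve(Kmm, m_u) is cached.
x_star = np.array([[0.5]])
f_star = rbf(x_star, Z) @ np.linalg.solve(Kmm, m_u)
```

With only 30 inducing points, `f_star` already lands close to `sin(0.5)`; the inducing points act as a compressed summary of the full training set.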
3
GP-GS Alternating Optimization (Training)
Stage 1 trains the GP on high-confidence primitives selected by their cumulative rendering contribution. Stage 2 uses the cached GP posterior mean as a guidance signal to regularize 4DGS, with an annealed loss threshold that tightens as both representations converge.
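The alternating schedule can be mimicked on toy 1-D data. The confidence cutoff, blending weight, and annealing factor below are hypothetical choices for the sketch; the actual method operates on 4DGS deformation fields, not scalar signals.

```python
import numpy as np

def periodic_k(a, b, period=1.0, lengthscale=0.5):
    d = np.abs(a[:, None] - b[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

rng = np.random.default_rng(2)
n = 500
t = rng.uniform(0.0, 1.0, size=n)
conf = rng.uniform(0.0, 1.0, size=n)       # stand-in for cumulative rendering contribution
true_motion = np.sin(2.0 * np.pi * t)
pred = true_motion + rng.normal(0.0, 0.3, size=n)  # noisy per-primitive motion estimates
pred_init = pred.copy()

threshold = 1.0
for step in range(5):
    # Stage 1: fit the GP only on high-confidence primitives.
    idx = conf > 0.7
    Ktt = periodic_k(t[idx], t[idx]) + 0.3 ** 2 * np.eye(idx.sum())
    alpha = np.linalg.solve(Ktt, pred[idx])
    gp_mean = periodic_k(t, t[idx]) @ alpha  # cached posterior mean for ALL primitives

    # Stage 2: pull estimates toward the GP mean where the residual
    # exceeds the threshold; anneal (tighten) the threshold each round.
    resid = np.abs(pred - gp_mean)
    pred = np.where(resid > threshold, 0.5 * pred + 0.5 * gp_mean, pred)
    threshold *= 0.5
```

The tightening threshold lets the GP correct only egregious outliers early in training, then progressively regularizes the full set of primitives toward the smooth posterior as both representations converge.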
04

Experimental Results

Reconstruction Quality — DyCheck Benchmark

Metrics: mPSNR ↑, mSSIM ↑, mLPIPS ↓. GP-4DGS consistently achieves superior results, with the largest gains on the Challenging subset (reduced viewpoint overlap).

Split | Method | mPSNR ↑ | mSSIM ↑ | mLPIPS ↓
----- | ------ | ------- | ------- | --------
All Scenes (7) | Gaussian Marbles | 15.84 | 0.54 | 0.57
All Scenes (7) | SoM | 17.09 | 0.65 | 0.39
All Scenes (7) | GP-4DGS (ours) | 17.38 | 0.65 | 0.37
SoM 5 Scenes | SC-GS | 14.13 | 0.48 | 0.49
SoM 5 Scenes | D-3DGS | 11.92 | 0.49 | 0.66
SoM 5 Scenes | 4DGS | 13.42 | 0.49 | 0.56
SoM 5 Scenes | HyperNeRF | 15.99 | 0.59 | 0.51
SoM 5 Scenes | SoM | 16.73 | 0.64 | 0.43
SoM 5 Scenes | GP-4DGS (ours) | 16.92 | 0.66 | 0.41
Challenging (reduced viewpoint overlap) | Gaussian Marbles | 14.05 | 0.40 | 0.61
Challenging (reduced viewpoint overlap) | SoM | 14.56 | 0.46 | 0.53
Challenging (reduced viewpoint overlap) | GP-4DGS (ours) | 15.02 | 0.46 | 0.51

Table 1. GP-4DGS surpasses all baselines across every split. The performance gap widens on the Challenging subset, demonstrating robustness under sparse observations.


Future Motion Extrapolation

PSNR ↑ evaluated on the last 5 and 15 frames held out from training. GP-4DGS dramatically outperforms naïve linear extrapolation, especially for periodic motion.

Method | Periodic, 5 frames | Periodic, 15 frames | Non-periodic, 5 frames | Non-periodic, 15 frames
------ | ------------------ | ------------------- | ---------------------- | -----------------------
Linear extrapolation | 11.55 | 8.11 | 15.02 | 11.92
GP-4DGS (ours) | 17.62 | 16.65 | 15.27 | 13.22

Table 2. The periodic temporal kernel captures cyclic structure effectively, yielding a large PSNR gain (17.62 vs. 11.55 at 5 frames) over linear extrapolation.
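A toy 1-D experiment reproduces the effect behind Table 2: a GP with a periodic kernel extrapolates a cyclic signal far better than linear extrapolation. The period is assumed known here, which flatters the GP; none of this is the paper's code.

```python
import numpy as np

def periodic_k(a, b, period=1.0, lengthscale=1.0):
    d = np.abs(a[:, None] - b[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

t_train = np.linspace(0.0, 2.0, 100)       # two observed motion cycles
y_train = np.sin(2.0 * np.pi * t_train)
t_test = np.linspace(2.0, 3.0, 50)         # held-out "future frames"
y_true = np.sin(2.0 * np.pi * t_test)

# GP posterior mean with the temporal period matched to the signal.
K = periodic_k(t_train, t_train) + 1e-4 * np.eye(100)
gp_pred = periodic_k(t_test, t_train) @ np.linalg.solve(K, y_train)

# Naive linear extrapolation from the last two training samples.
slope = (y_train[-1] - y_train[-2]) / (t_train[-1] - t_train[-2])
lin_pred = y_train[-1] + slope * (t_test - t_train[-1])

print("GP MAE:", np.abs(gp_pred - y_true).mean(),
      "Linear MAE:", np.abs(lin_pred - y_true).mean())
```

The linear model keeps following the last observed slope and diverges, while the periodic kernel carries the learned cycle forward indefinitely, mirroring the 17.62 vs. 11.55 gap in Table 2.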

05

BibTeX

@inproceedings{kim2026gp4dgs,
  author    = {Kim, Mijeong and Kim, Jungtaek and Han, Bohyung},
  title     = {GP-4DGS: Probabilistic 4D Gaussian Splatting from Monocular Video via Variational Gaussian Processes},
  booktitle = {CVPR},
  year      = {2026}
}