To reduce storage and computational costs, 3D Gaussian splatting (3DGS) seeks to minimize the number of Gaussians while preserving high rendering quality, introducing an inherent trade-off between Gaussian quantity and rendering quality. Existing methods strive for better quantity–quality performance, but they do not let users intuitively adjust this trade-off to suit deployment under diverse hardware and communication constraints. Here, we present ControlGS, a 3DGS optimization method that achieves semantically meaningful and cross-scene consistent quantity–quality control. With a single training run under a fixed setup and a user-specified hyperparameter expressing the quantity–quality preference, ControlGS automatically finds desirable trade-off points across scenes, from compact objects to large outdoor environments. It outperforms baselines by achieving higher rendering quality with fewer Gaussians, and it supports stepless control over the trade-off.
Illustration of our proposed ControlGS:
Starting from a sparse point cloud reconstructed via SfM, we initialize an anisotropic Gaussian set and alternate between uniform octree-style subdivision and sparsity-driven pruning.
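For concreteness, here is a minimal sketch of the two alternating operations. The halved child scales, octant offsets, and opacity threshold are illustrative assumptions on our part, not the paper's exact rules:

import torch

def octree_subdivide(means, scales, opacities):
    # Uniform octree-style subdivision (illustrative placement rule):
    # each Gaussian spawns eight children offset toward the octants of
    # its extent, with scales halved so the children tile the parent.
    signs = torch.tensor([[sx, sy, sz]
                          for sx in (-1.0, 1.0)
                          for sy in (-1.0, 1.0)
                          for sz in (-1.0, 1.0)])                     # (8, 3)
    child_means = means[:, None, :] + 0.5 * scales[:, None, :] * signs  # (N, 8, 3)
    child_scales = scales[:, None, :].expand(-1, 8, -1) * 0.5
    child_opacities = opacities[:, None].expand(-1, 8)
    return (child_means.reshape(-1, 3),
            child_scales.reshape(-1, 3),
            child_opacities.reshape(-1))

def sparsity_prune(means, scales, opacities, threshold=0.005):
    # Sparsity-driven pruning: drop Gaussians whose opacity has
    # atrophied below a small threshold (value is illustrative).
    keep = opacities > threshold
    return means[keep], scales[keep], opacities[keep]

# Alternating schedule (illustrative): subdivide, optimize, prune, repeat.
means, scales, opac = torch.rand(100, 3), torch.full((100, 3), 0.1), torch.rand(100)
for _ in range(3):
    means, scales, opac = octree_subdivide(means, scales, opac)
    # ... gradient-based optimization of all Gaussian parameters here ...
    means, scales, opac = sparsity_prune(means, scales, opac)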
The controllable core of ControlGS is a single hyperparameter, λ_α, which scales the atrophy loss.
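One plausible form of the training objective, assuming the atrophy term is an L1 penalty on the per-Gaussian opacities α_i added to the standard 3DGS rendering loss (the notation here is ours, offered as a sketch rather than the paper's exact definition):

\mathcal{L} \;=\; \mathcal{L}_{\mathrm{render}} \;+\; \lambda_{\alpha} \sum_{i=1}^{N} \alpha_i

Under this reading, a larger λ_α pushes more opacities toward zero, so more Gaussians are removed by the sparsity-driven pruning step, yielding a smaller model at some cost in quality; a smaller λ_α retains more Gaussians for higher fidelity.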
By training only once under a fixed setup and adjusting λ_α, ControlGS enables consistent, stepless, and linear control of the trade-off between Gaussian quantity and rendering quality across diverse scenes, facilitating the efficient generation of multiple model variants tailored to diverse deployment needs.
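To make this control behavior concrete, the following toy experiment is purely illustrative: it replaces the rendering loss with a quadratic stand-in and uses made-up values, but it shows the intended mechanism, where increasing λ_α shrinks the number of Gaussians that survive an opacity threshold:

import torch

torch.manual_seed(0)
target = torch.rand(10_000)  # stand-in for how "useful" each Gaussian is

def toy_run(lambda_alpha: float) -> int:
    # A quadratic data-fit term pulls each opacity toward its target;
    # the atrophy term pulls it toward zero. Their balance point
    # shifts with lambda_alpha, mimicking the quantity control.
    logits = torch.zeros(10_000, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=0.05)
    for _ in range(500):
        alpha = torch.sigmoid(logits)
        loss = ((alpha - target) ** 2).mean() + lambda_alpha * alpha.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return int((torch.sigmoid(logits) > 0.05).sum())  # surviving count

for lam in (0.0, 0.2, 0.5, 1.0):
    print(f"lambda_alpha={lam:.1f}: {toy_run(lam)} Gaussians kept")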
ControlGS achieves smooth, stepless, and predictable control over the trade-off between rendering quality and Gaussian quantity across diverse scenes, spanning the range from high-fidelity reconstructions to highly compressed models, and significantly outperforms baseline methods in control consistency, range, and precision.
Compared to existing methods, ControlGS achieves higher rendering quality with fewer Gaussians on unseen test views, consistently preserving intricate structures and high-frequency textures across diverse scenes.
We are building an interactive demo system that will automatically detect your device’s performance and display 3DGS models from various scenes, each optimized for the best quantity–quality balance on your device.
You will also be able to manually select different trade-off points to explore how different scenes perform under varying quantity–quality settings.
This demo is currently under development and will be available soon.