Photon Splatting

A Physics-Guided Neural Surrogate for Real-Time Wireless Channel Prediction

1UIUC, 2University of Surrey

Photon Splatting accepts multiple user devices (~900 Rx) simultaneously as input: when the Tx (the UAV) and Rx (the bus) are relocated, it still delivers accurate predictions without retraining, at real-time speed (~29 FPS).

Abstract

We present Photon Splatting, a physics-guided neural surrogate model for real-time wireless channel prediction in complex environments. The proposed framework introduces surface-attached virtual sources, referred to as photons, which carry directional wave signatures informed by the scene geometry and transmitter configuration. At runtime, channel impulse responses (CIRs) are predicted by splatting these photons onto the angular domain of the receiver using a geodesic rasterizer. The model is trained to learn a physically grounded representation that maps transmitter-receiver configurations to full channel responses. Once trained, it generalizes to new transmitter positions, antenna beam patterns, and mobile receivers without requiring model retraining. We demonstrate the effectiveness of the framework through a series of experiments, from canonical 3D scenes to a complex indoor cafe with 1,000 receivers. Results show 30 millisecond-level inference latency and accurate CIR predictions across a wide range of configurations. The approach supports real-time adaptability and interpretability, making it a promising candidate for wireless digital twin platforms and future 6G network planning.

Pipeline

Overview of Photon Splatting. Surface-attached photons are constructed from scene geometry and learned via a neural model. At runtime, the system predicts wave signatures and aggregates angular contributions through spherical splatting to compute the CIR.
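The aggregation step can be illustrated with a toy sketch. Here each surface-attached photon contributes a complex gain at a delay set by its distance to the receiver, and contributions are accumulated into CIR taps. This is a simplified stand-in: the actual model predicts learned directional wave signatures per photon and splats them with a geodesic spherical rasterizer; all names and parameters below are illustrative assumptions.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def predict_cir(photon_pos, photon_gain, rx_pos, n_taps=128, tap_width=1e-9):
    """Toy CIR aggregation: each photon deposits its complex gain into
    the delay tap given by its propagation distance to the receiver."""
    delays = np.linalg.norm(photon_pos - rx_pos, axis=1) / C
    taps = (delays / tap_width).astype(int)
    cir = np.zeros(n_taps, dtype=complex)
    valid = taps < n_taps
    np.add.at(cir, taps[valid], photon_gain[valid])  # accumulate coinciding taps
    return cir
```

In the full pipeline, the per-photon gains are not fixed scalars but are produced by the neural model from the Tx configuration, which is what lets the same photon set generalize to new transmitter positions and beam patterns.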

Pipeline

Features

More than a power map

Photon Splatting predicts the full Channel State Information (CSI), not just a power map. In the figures below, we place a Tx in 3D space and let a pedestrian walk around a building, serving as multiple Rx across frames. The x-axis is the time-of-flight of each received signal per Rx, the y-axis is the frame index along the pedestrian's walk, and the z-axis is the amplitude of the complex gain of each received signal:

GT
A pedestrian walks around the building; many frames are captured along the way.
GT
GT from ray-tracing
Ours
Ours from Photon-Splatting
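The (frame, time-of-flight) amplitude surface plotted above can be mocked up as follows. Only the direct path is modeled here, whereas the actual model predicts the full multipath CIR per frame; the geometry and parameters are illustrative assumptions.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def csi_surface(tx, rx_path, n_taps=64, tap_width=2e-9):
    """Direct-path-only amplitude surface: rows index the pedestrian's
    frames, columns index time-of-flight bins."""
    surface = np.zeros((len(rx_path), n_taps))
    for f, rx in enumerate(rx_path):
        dist = np.linalg.norm(rx - tx)
        tap = int(dist / C / tap_width)
        if tap < n_taps:
            surface[f, tap] = 1.0 / max(dist, 1e-6)  # free-space-like decay
    return surface

# a pedestrian circling the building at radius 20 m, Tx on a 10 m mast
angles = np.linspace(0.0, 2.0 * np.pi, 60)
rx_path = np.stack([20 * np.cos(angles), 20 * np.sin(angles),
                    np.full_like(angles, 1.5)], axis=1)
surface = csi_surface(np.array([0.0, 0.0, 10.0]), rx_path)
```

Each row of `surface` is one frame of the walk; the ridge traced by the nonzero taps across rows is what the GT and predicted plots above visualize, with multipath adding further ridges at longer delays.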

Visualize the photons

You might be curious how Photon Splatting learns wireless propagation in detail, so we visualize the learned photons here. The color of each photon denotes the weight of its signal path, defined as the inverse of the path's time-of-flight. All the predicted interactions are consistent with the ground truth from ray-tracing.

GT
GT from ray-tracing.
GT
The learned photons; all interactions correspond to the GT from ray-tracing.

Trajectory planning

Photon Splatting can guide a robot to re-plan its trajectory in real time. At every frame, it follows the arrival direction with the lowest time-of-flight and eventually reaches the target (Tx) location.

GT
Angles-of-Arrival across 3 frames, GT from ray-tracing
Ours
Angles-of-Arrival across 3 frames, ours from Photon Splatting
GT
The full trajectory of the robot
Ours
The trajectory viewed from the target (Tx) side
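The re-planning policy described above reduces to a greedy step per frame: among the predicted arrival directions, move along the one whose path has the lowest time-of-flight (in line-of-sight conditions, the most direct route to the Tx). A minimal sketch, with hypothetical names:

```python
import numpy as np

def greedy_step(rx_pos, aoa_dirs, delays, step=0.5):
    """One trajectory re-planning step: advance the robot along the
    Angle-of-Arrival whose path has the minimum time-of-flight.

    aoa_dirs: (N, 3) unit vectors pointing from the Rx toward arrivals.
    delays:   (N,) per-path times-of-flight predicted for this frame.
    """
    best = int(np.argmin(delays))
    return rx_pos + step * aoa_dirs[best]
```

Because the model predicts the AoAs and delays at ~29 FPS, this step can run in the loop each frame, letting the robot adapt as the scene or Tx moves.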

BibTeX

If you find this project helpful to your research, please consider citing:
@misc{cao2025photonsplatting,
    title={Photon Splatting: A Physics-Guided Neural Surrogate for Real-Time Wireless Channel Prediction}, 
    author={Ge Cao and Gabriele Gradoni and Zhen Peng},
    year={2025},
    eprint={2507.04595},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
    url={https://arxiv.org/abs/2507.04595}, 
}