Description
Dear authors,
Thank you for the impressive work on ReconX — it’s a fascinating and promising approach for sparse-view 3D scene reconstruction.
I have a quick question regarding the training setup:
In the paper, you mention that the video diffusion model was trained on 3D scene datasets (RealEstate-10K, ACID, and DL3DV-10K). It appears these datasets were combined into a single unified training run, rather than the model being trained on each dataset separately.
Would it be possible for you to provide the combined version or the preprocessing script used to merge and sample from these datasets for training?
Having access to the same setup would be incredibly helpful for reproducing the results and for further research on cross-dataset generalization.
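In case it helps clarify what I mean by "merge and sample", here is a minimal sketch of the kind of mixing logic I have in mind: draw each training clip from one of the source datasets according to fixed mixing weights. The dataset sizes and weights below are purely hypothetical placeholders, not the actual statistics or ratios used in ReconX.

```python
import random

def make_mixed_sampler(datasets, weights, seed=0):
    """Yield (dataset_name, clip_index) pairs, drawing each sample from
    one of the source datasets with the given mixing weights.

    `datasets` maps a dataset name to its number of clips. All names,
    sizes, and weights used here are illustrative assumptions.
    """
    rng = random.Random(seed)
    names = list(datasets)
    while True:
        # Pick a source dataset according to the mixing weights,
        # then pick a clip uniformly at random within it.
        name = rng.choices(names, weights=weights, k=1)[0]
        yield name, rng.randrange(datasets[name])

# Hypothetical clip counts -- not the real dataset statistics.
sizes = {"RealEstate-10K": 67000, "ACID": 11000, "DL3DV-10K": 10000}
sampler = make_mixed_sampler(sizes, weights=[0.5, 0.25, 0.25])
batch = [next(sampler) for _ in range(4)]
```

Whether you used fixed weights like this, size-proportional sampling, or simple concatenation is exactly the detail I am hoping the released preprocessing script would pin down.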
Thank you in advance, and looking forward to your open-source release!