(You may need to increase your screen brightness to see the overall results clearly.)
The result after training with a single view.

The result after approximately six hours of training with four views. As you can see, the scene is not perfectly reconstructed; with my resources it is hard to train for more than 12 hours, so a full reconstruction would need more training time.
Epoch 406:

git clone this repo into /workspace.
Copy (Ctrl+C) the whole block of commands below and paste it into your terminal. Your local path must be /workspace/3d-vision; the Mip-NeRF 360 data will be saved in /workspace/mipnerf360.
cd /workspace/3d-vision
python3 -m venv venv
source venv/bin/activate
# requirements.txt
pip install numpy pycolmap deepspeed torch torchvision
pip install --upgrade --ignore-installed open3d
### colmap ###
git clone https://github.com/colmap/colmap /workspace/mipnerf360/colmap
# Install dependencies (MKL removed, -y added)
apt-get update && apt-get install -y \
git cmake ninja-build build-essential \
libboost-program-options-dev libboost-graph-dev libboost-system-dev \
libeigen3-dev libfreeimage-dev libmetis-dev \
libgoogle-glog-dev libgtest-dev libgmock-dev \
libsqlite3-dev libglew-dev qtbase5-dev libqt5opengl5-dev \
libcgal-dev libceres-dev libcurl4-openssl-dev
# Build
cd /workspace/mipnerf360/colmap
mkdir -p build && cd build
cmake .. -GNinja -DCMAKE_BUILD_TYPE=Release -DCMAKE_CUDA_ARCHITECTURES=80
ninja -j$(nproc)
ninja install
## Additional packages ##
apt-get update
apt-get install -y locales
locale-gen en_US.UTF-8
update-locale LANG=en_US.UTF-8
apt-get install -y apt-utils
### Download the NeRF data ###
# RunPod images do not include the unzip package
apt-get update && apt-get install -y unzip
root="/workspace/mipnerf360"; mkdir -p "$root" && cd "$root" && \
wget -c https://storage.googleapis.com/gresearch/refraw360/360_v2.zip -O 360_v2.zip && \
wget -c https://storage.googleapis.com/gresearch/refraw360/360_extra_scenes.zip -O 360_extra_scenes.zip && \
unzip -n 360_v2.zip && unzip -n 360_extra_scenes.zip && \
ls -1
###
chmod +x /workspace/3d-vision/colmap_run.sh
/workspace/3d-vision/colmap_run.sh
###
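colmap_run.sh itself is not shown here; it drives the COLMAP build above to recover camera poses and a sparse point cloud for each scene, which 3D GS needs for initialization. As a rough, hypothetical equivalent using the pycolmap package from the pip install above (the scene path is just an example), the sparse reconstruction step looks like this:

```python
# Sparse SfM reconstruction with pycolmap (illustration only; colmap_run.sh
# presumably calls the colmap CLI built above). Paths are placeholders.
from pathlib import Path
import pycolmap

scene = Path("/workspace/mipnerf360/garden")   # example scene directory
image_dir = scene / "images"
output_path = scene / "sparse"
database_path = scene / "database.db"
output_path.mkdir(parents=True, exist_ok=True)

pycolmap.extract_features(database_path, image_dir)   # SIFT features per image
pycolmap.match_exhaustive(database_path)              # pairwise feature matching
maps = pycolmap.incremental_mapping(database_path, image_dir, output_path)
maps[0].write(output_path)                            # camera poses + sparse points
```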
# single gpu
chmod +x /workspace/3d-vision/gs_run.sh
bash /workspace/3d-vision/gs_run.sh

The previous 3D GS was poor at reconstructing 3D scenes from only a few views, so I use flow matching (FM) training to warm up the 3D GS model. With the FM warm-up, the Gaussians pick up features of the data more easily. Training was done on a single A6000 GPU (48 GB).
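As a sketch of the generic flow matching objective behind such a warm-up (the network, feature dimension, and hyperparameters below are placeholders, not the actual code in gs_run.sh): a noise sample x0 is paired with a data sample x1, a point x_t is drawn on the straight path between them, and the model is trained to regress the constant velocity x1 - x0.

```python
# Minimal flow matching objective (rectified-flow style), sketch only.
# VelocityNet, the feature dimension, and the optimizer settings are
# placeholders, not the networks used in this repo.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Tiny MLP that predicts the velocity field v(x_t, t)."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, dim),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, t], dim=-1))

def flow_matching_loss(model: nn.Module, x1: torch.Tensor) -> torch.Tensor:
    """x1: a batch of target features (e.g. Gaussian attributes); x0 is noise."""
    x0 = torch.randn_like(x1)                       # source (noise) distribution
    t = torch.rand(x1.shape[0], 1, device=x1.device)
    x_t = (1.0 - t) * x0 + t * x1                   # linear interpolation path
    v_target = x1 - x0                              # constant velocity along the path
    v_pred = model(x_t, t)
    return torch.mean((v_pred - v_target) ** 2)

# Warm-up loop sketch: the learned features would then initialize the Gaussians.
model = VelocityNet(dim=64)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(1000):
    x1 = torch.randn(32, 64)                        # stand-in for real training features
    loss = flow_matching_loss(model, x1)
    opt.zero_grad()
    loss.backward()
    opt.step()
```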
Training is configured with DeepSpeed and tuned CUDA settings.
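For reference, a minimal single-GPU DeepSpeed setup might look like the sketch below; the model, batch size, ZeRO stage, precision, and learning rate are placeholder values, not this repo's actual configuration.

```python
# Minimal DeepSpeed initialization sketch; all config values are placeholders.
# Launch with the deepspeed CLI (e.g. `deepspeed train.py`) so the distributed
# environment variables are set, even on a single GPU.
import deepspeed
import torch.nn as nn

model = nn.Linear(64, 64)  # stand-in for the actual training model

ds_config = {
    "train_batch_size": 8,
    "gradient_accumulation_steps": 1,
    "fp16": {"enabled": True},                      # mixed precision to save VRAM
    "zero_optimization": {"stage": 2},              # shard optimizer state
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize returns (engine, optimizer, dataloader, lr_scheduler)
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```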
This project was inspired by "FlowR: Flowing from Sparse to Dense 3D Reconstructions", "3D Gaussian Splatting for Real-Time Radiance Field Rendering", and "Flow Matching Guide and Code".
Apache License 2.0
Mip-NeRF 360