Commit

Merge pull request #79 from pierotofy/compress
Add .splat compression output support
pierotofy authored Apr 19, 2024
2 parents 9275c62 + 12e8169 commit ef0461f
Showing 5 changed files with 66 additions and 8 deletions.
13 changes: 11 additions & 2 deletions README.md
@@ -5,7 +5,7 @@ A free and open source implementation of 3D [gaussian splatting](https://www.you
<img src="https://github.com/pierotofy/OpenSplat/assets/1951843/c9327c7c-31ad-402d-a5a5-04f7602ca5f5" width="49%" />
<img src="https://github.com/pierotofy/OpenSplat/assets/1951843/eba4ae75-2c88-4c9e-a66b-608b574d085f" width="49%" />

-OpenSplat takes camera poses + sparse points in [COLMAP](https://colmap.github.io/), [OpenSfM](https://github.com/mapillary/OpenSfM), [ODM](https://github.com/OpenDroneMap/ODM) or [nerfstudio](https://docs.nerf.studio/quickstart/custom_dataset.html) project format and computes a [scene file](https://drive.google.com/file/d/12lmvVWpFlFPL6nxl2e2d-4u4a31RCSKT/view?usp=sharing) (.ply) that can be later imported for [viewing](https://antimatter15.com/splat/?url=https://splat.uav4geo.com/banana.splat), editing and rendering in other [software](https://github.com/MrNeRF/awesome-3D-gaussian-splatting?tab=readme-ov-file#open-source-implementations).
+OpenSplat takes camera poses + sparse points in [COLMAP](https://colmap.github.io/), [OpenSfM](https://github.com/mapillary/OpenSfM), [ODM](https://github.com/OpenDroneMap/ODM) or [nerfstudio](https://docs.nerf.studio/quickstart/custom_dataset.html) project format and computes a [scene file](https://drive.google.com/file/d/12lmvVWpFlFPL6nxl2e2d-4u4a31RCSKT/view?usp=sharing) (.ply or .splat) that can be later imported for [viewing](https://antimatter15.com/splat/?url=https://splat.uav4geo.com/banana.splat), editing and rendering in other [software](https://github.com/MrNeRF/awesome-3D-gaussian-splatting?tab=readme-ov-file#open-source-implementations).

Graphics card recommended, but not required! OpenSplat runs the fastest on NVIDIA, AMD and Apple (Metal) GPUs, but can also run entirely on the CPU (~100x slower).

@@ -223,6 +223,16 @@ There are several parameters you can tune. To view the full list:
./opensplat --help
```
### Compression
To generate compressed splats (.splat files), pass an output path ending in `.splat` via the `-o` option:
```bash
./opensplat /path/to/banana -o banana.splat
```
### AMD GPU Notes
To train a model on an AMD GPU using a Docker container, you can use the following commands as a reference:
1. Launch the docker container with the following command:
```bash
@@ -243,7 +253,6 @@ We recently released OpenSplat, so there's lots of work to do.
* Improve speed / reduce memory usage
* Distributed computation using multiple machines
* Real-time training viewer output
-* Compressed scene outputs
* Automatic filtering
* Your ideas?

51 changes: 49 additions & 2 deletions model.cpp
@@ -1,3 +1,4 @@
#include <filesystem>
#include "model.hpp"
#include "constants.hpp"
#include "tile_bounds.hpp"
@@ -12,6 +13,8 @@
#include <c10/cuda/CUDACachingAllocator.h>
#endif

namespace fs = std::filesystem;

torch::Tensor randomQuatTensor(long long n){
torch::Tensor u = torch::rand(n);
torch::Tensor v = torch::rand(n);
@@ -458,7 +461,16 @@ void Model::afterTrain(int step){
}
}

-void Model::savePlySplat(const std::string &filename){
+void Model::save(const std::string &filename){
if (fs::path(filename).extension().string() == ".splat"){
saveSplat(filename);
}else{
savePly(filename);
}
std::cout << "Wrote " << filename << std::endl;
}

void Model::savePly(const std::string &filename){
std::ofstream o(filename, std::ios::binary);
int numPoints = means.size(0);

@@ -515,7 +527,42 @@ void Model::savePlySplat(const std::string &filename){
}

o.close();
-std::cout << "Wrote " << filename << std::endl;
}

void Model::saveSplat(const std::string &filename){
std::ofstream o(filename, std::ios::binary);
int numPoints = means.size(0);

torch::Tensor meansCpu = keepCrs ? (means.cpu() / scale) + translation : means.cpu();
torch::Tensor scalesCpu = keepCrs ? (torch::exp(scales.cpu()) / scale) : torch::exp(scales.cpu());
torch::Tensor rgbsCpu = (sh2rgb(featuresDc.cpu()) * 255.0f).toType(torch::kUInt8);
torch::Tensor opac = (1.0f + torch::exp(-opacities.cpu()));
torch::Tensor opacitiesCpu = torch::clamp(((1.0f / opac) * 255.0f), 0.0f, 255.0f).toType(torch::kUInt8);
torch::Tensor quatsCpu = torch::clamp(quats.cpu() * 128.0f + 128.0f, 0.0f, 255.0f).toType(torch::kUInt8);

std::vector< size_t > splatIndices( numPoints );
std::iota( splatIndices.begin(), splatIndices.end(), 0 );
torch::Tensor order = (scalesCpu.index({"...", 0}) +
scalesCpu.index({"...", 1}) +
scalesCpu.index({"...", 2})) /
opac.index({"...", 0});
float *orderPtr = reinterpret_cast<float *>(order.data_ptr());

std::sort(splatIndices.begin(), splatIndices.end(),
[&orderPtr](size_t const &a, size_t const &b) {
return orderPtr[a] > orderPtr[b];
});

for (int i = 0; i < numPoints; i++){
size_t idx = splatIndices[i];

o.write(reinterpret_cast<const char *>(meansCpu[idx].data_ptr()), sizeof(float) * 3);
o.write(reinterpret_cast<const char *>(scalesCpu[idx].data_ptr()), sizeof(float) * 3);
o.write(reinterpret_cast<const char *>(rgbsCpu[idx].data_ptr()), sizeof(uint8_t) * 3);
o.write(reinterpret_cast<const char *>(opacitiesCpu[idx].data_ptr()), sizeof(uint8_t) * 1);
o.write(reinterpret_cast<const char *>(quatsCpu[idx].data_ptr()), sizeof(uint8_t) * 4);
}
o.close();
}

void Model::saveDebugPly(const std::string &filename){
4 changes: 3 additions & 1 deletion model.hpp
@@ -81,7 +81,9 @@ struct Model{
void schedulersStep(int step);
int getDownscaleFactor(int step);
void afterTrain(int step);
void savePlySplat(const std::string &filename);
void save(const std::string &filename);
void savePly(const std::string &filename);
void saveSplat(const std::string &filename);
void saveDebugPly(const std::string &filename);
torch::Tensor mainLoss(torch::Tensor &rgb, torch::Tensor &gt, float ssimWeight);

4 changes: 2 additions & 2 deletions opensplat.cpp
@@ -135,7 +135,7 @@ int main(int argc, char *argv[]){

if (saveEvery > 0 && step % saveEvery == 0){
fs::path p(outputScene);
-model.savePlySplat((p.replace_filename(fs::path(p.stem().string() + "_" + std::to_string(step) + p.extension().string())).string()));
+model.save((p.replace_filename(fs::path(p.stem().string() + "_" + std::to_string(step) + p.extension().string())).string()));
}

if (!valRender.empty() && step % 10 == 0){
@@ -146,7 +146,7 @@
}
}

-model.savePlySplat(outputScene);
+model.save(outputScene);
// model.saveDebugPly("debug.ply");

// Validate
2 changes: 1 addition & 1 deletion spherical_harmonics.cpp
@@ -24,7 +24,7 @@ torch::Tensor rgb2sh(const torch::Tensor &rgb){

torch::Tensor sh2rgb(const torch::Tensor &sh){
// Converts from 0th spherical harmonic coefficients to RGB values [0,1]
-return (sh * C0) + 0.5;
+return torch::clamp((sh * C0) + 0.5, 0.0f, 1.0f);
}

#if defined(USE_HIP) || defined(USE_CUDA) || defined(USE_MPS)
