Corto is a library for compressing and decompressing meshes and point clouds (C++ and JavaScript).

Source code can be found on GitHub.

Corto supports point clouds and meshes with per vertex attributes: normals, colors, texture coordinates and custom attributes.

The main focus of this work is decompression speed (see performances), both for the C++ library and the JavaScript one, while still providing good compression rates.

  • corto is a program to compress .ply and .obj models
  • libcorto is a C++ compression/decompression library
  • corto.js is a JavaScript library for .crt decompression
  • CORTOLoader is a three.js loader

This work is based on the compression algorithm developed for the Nexus project for creation and visualization of multiresolution models. This approach has many advantages for large models in terms of streaming and adaptive rendering.
See Fast decompression for web-based view-dependent 3D rendering.



Bunny 62KB 34KT
Courtesy of Stanford


Laurana 95KB 50KT
Courtesy of Stanford


Proserpina 0.98MB 0.25MT
Courtesy of egiptologo91


Buddha 0.47MB 0.20MT
Courtesy of VCG


Tarta 5.6MB 5.3MT
Courtesy of VCG


Palmyra 3.5MB 0.5MV
Courtesy of robogoat


A small executable to convert .ply and .obj models into the compressed .crt format.


corto [options] FILE

FILE is the path to a .ply or a .obj 3D model.

	-o <output>: filename of the .crt compressed file.
						if not specified the extension of the input file will be replaced.
	-e <key=value>: add an exif property, or more than one.
	-p : treat the input as a point cloud.
	-v <bits>: vertex bits quantization. If not specified a heuristic is used.
	-n <bits>: normal bits quantization. Default 10.
	-c <bits>: color bits quantization. Default 6.
	-u <bits>: texture coordinate bits. Default 10.
	-q <step>: quantization step unit (float) instead of bits for vertex coordinates
	-N <prediction>: normal prediction can be:
						delta: use difference from previous normal (fastest)
						estimated: use difference from computed normals (cheaper)
						border: store difference only for boundary vertices (cheapest)
	-A : compute and add normals to the model.
	-P <file.ply>: decompress and save as .ply for debugging purposes

Material groups for .obj (newmtl) and .ply with texnumbers are preserved in the .crt model; additional properties can be stored in the file using the -e option.

-v 12 results in quantization of the coordinates to 1/4096th of the largest dimension of the bounding box. If not specified, the quantization step is set to 1/20th of the average edge length.

-n 10 stores normals using the octahedron parametrization onto a square of 1024x1024 positions, resulting in a better sample distribution than quantization in 3D.


CORTOLoader.js is similar to THREE.OBJLoader in functionality, and can easily replace it in three.js.

var loader = new THREE.CORTOLoader({ path: "models/" });
//materials are created automatically; pass the option loadMaterial: false otherwise

loader.load("bunny.crt", function(mesh) {
	//needed to render when a texture loads
	mesh.addEventListener("change", render);

	scene.add(mesh);
	render();
});



CORTOLoader is pretty self-explanatory in how to create a THREE.Mesh:

var decoder = new CortoDecoder(blob); //where blob is an ArrayBuffer
var model = decoder.decode();

var geometry = new THREE.BufferGeometry();
if(model.nface)
	geometry.setIndex(new THREE.BufferAttribute(model.index, 1));

if(model.groups.length > 0) {
	var start = 0;
	for(var i = 0; i < model.groups.length; i++) {
		var g = model.groups[i];
		geometry.addGroup(start*3, g.end*3, i);
		start = g.end;
	}
}

geometry.addAttribute('position', new THREE.BufferAttribute(model.position, 3));
if(model.color)
	geometry.addAttribute('color', new THREE.BufferAttribute(model.color, 3, true));
if(model.normal)
	geometry.addAttribute('normal', new THREE.BufferAttribute(model.normal, 3, true));
if(model.uv)
	geometry.addAttribute('uv', new THREE.BufferAttribute(model.uv, 2));

var mesh = new THREE.Mesh(geometry);


The CortoDecoder class decodes a .crt (passed as an ArrayBuffer) and returns an object with attributes (positions, index, colors, etc.).

<script src="js/corto.js"> </script>

var request = new XMLHttpRequest();'GET', 'bunny.crt');
request.responseType = 'arraybuffer';
request.onload = function() {
	var decoder = new CortoDecoder(this.response);
	var model = decoder.decode();
	console.log(model.nvert, model.nface, model.groups);
	console.log('Index: ', model.index);
	console.log('Positions: ', model.position);
	console.log('Colors: ', model.color);
	console.log('Normals: ', model.normal);
	console.log('Tex coords: ', model.uv);
	//custom attributes can be encoded, see cortolib below for details.
};
request.send();


The interface is not entirely stable, but no major changes are expected. See src/main.cpp for an extensive example.

#include <corto/encoder.h>
#include <corto/decoder.h>

std::vector<float> coords;
std::vector<uint32_t> index;
std::vector<float> uv;
std::vector<float> radius;

//fill data arrays...

crt::Encoder encoder(nvert, nface);

//add attributes to be encoded
encoder.addPositions(,, vertex_quantization_step);
encoder.addUvs(, pow(2, -uv_bits));

//add custom attributes
encoder.addAttribute("radius", (char *), crt::VertexAttribute::FLOAT, 1, 1.0f);


const char *compressed_data =;
const uint32_t compressed_size = encoder.stream.size();

crt::Decoder decoder(compressed_size, compressed_data);

//allocate memory if needed
coords.resize(decoder.nvert*3);
index.resize(decoder.nface*3);

//tell the decoder where to decompress data
decoder.setPositions(;
decoder.setIndex(;

if(decoder.hasAttr("uv")) {
	uv.resize(decoder.nvert*2);
	decoder.setUvs(;
}

//actually decode

Encoder(uint32_t _nvert, uint32_t _nface = 0, Stream::Entropy entropy = Stream::TUNSTALL);

The Encoder class assumes the input is an indexed mesh; specify the number of vertices and faces (0 for point clouds). For debugging or entropy compression tests, Stream also supports lzip and lz4; they can be enabled by defining ENTROPY_TESTS in the makefile.

Vertex coordinates, attributes and indices are passed to the encoder (assuming they respect the nvert and nface parameters: 3*nvert for positions and normals, 4*nvert for colors, 2*nvert for uvs, 3*nface for indices).


bool addPositions(const float *buffer, float q = 0.0f, Point3f o = Point3f(0.0f)); //for point clouds
bool addPositions(const float *buffer, const uint32_t *index, float q = 0.0f, Point3f o = Point3f(0.0f));
bool addPositions(const float *buffer, const uint16_t *index, float q = 0.0f, Point3f o = Point3f(0.0f));

q specifies the quantization step, while o is the origin of the quantization: v[i*3+k] = (buffer[i*3+k] - o[k])/q.

If q is not specified the quantization step is estimated as 1/20th of the average edge length (or a function of the volume and number of the points for point clouds).

bool addPositionsBits(const float *buffer, int bits);
bool addPositionsBits(const float *buffer, uint32_t *index, int bits);
bool addPositionsBits(const float *buffer, uint16_t *index, int bits);

It might be more convenient to specify the quantization step in terms of the bits needed per coordinate.


bool addNormals(const float *buffer, int bits, NormalAttr::Prediction no = NormalAttr::ESTIMATED);
bool addNormals(const int16_t *buffer, int bits, NormalAttr::Prediction no = NormalAttr::ESTIMATED);

bool addColors(const unsigned char *buffer, int rbits = 6, int gbits = 7, int bbits = 6, int abits = 5);

bool addUvs(const float *buffer, float q = 0);

Normal quantization is expressed in bits, referring to the octahedron parametrization space.

Prediction for normal attribute can be

  • DELTA: difference from the previous normal; fast but provides less compression
  • ESTIMATED: difference from the normal estimated from the vertex coordinates (does not apply to point clouds); slower than DELTA, but provides better compression, especially when coordinates are saved with high precision
  • BORDER: preserves only boundary normals; especially useful if the model is split into pieces (for example in the Nexus multiresolution viewer)

Color quantization is specified per channel: red, green, blue and alpha. Internally the color is stored in a poor man's YCbCr color space (where G stands in for Y!); quantization refers to this internal space, and the transform might be changed to something better.

Generic attributes

bool addAttribute(const char *name, const char *buffer, VertexAttribute::Format format, 
	int components, float q, uint32_t strategy = 0);
bool addAttribute(const char *name, char *buffer, VertexAttribute *attr);


void addGroup(int end_triangle);
void addGroup(int end_triangle, std::map<std::string, std::string> &props);


void encode();

After the encode call the number of vertices and faces might differ: degenerate triangles are removed, and duplicated points in point clouds are removed as well.

uint32_t nvert = encoder.nvert;
uint32_t nface = encoder.nface;

The .crt bytes are stored in the stream member

size_t count =;
size_t written = fwrite(, 1, count, file);

For details on compression bitrates see main.cpp.


Decoder(int len, const uchar *input);

Upon initialization, the decoder class provides info about what is present in the .crt; the default attributes are position, normal, color and uv.

uint32_t nvert = decoder.nvert;
uint32_t nface = decoder.nface;

bool hasAttr(const char *name);

You are in charge of passing the decoder allocated memory buffers where attributes and the index will be decoded:

	bool setPositions(float *buffer);
	bool setNormals(float *buffer);
	bool setNormals(int16_t *buffer);
	bool setColors(uchar *buffer);
	bool setUvs(float *buffer);

	void setIndex(uint32_t *buffer);
	void setIndex(uint16_t *buffer);

Custom attributes can be requested, either using the default attribute type or a custom class.

	bool setAttribute(const char *name, char *buffer, VertexAttribute::Format format);
	bool setAttribute(const char *name, char *buffer, VertexAttribute *attr);
	void decode();