Download Page for "Hair Photobooth: Geometric and Photometric Acquisition of Real Hairstyles"

Sylvain Paris, Adobe
Will Chang, UCSD
Wojciech Jarosz, UCSD
Oleg Kozhushnyan, MIT CSAIL
Wojciech Matusik, Adobe
Matthias Zwicker, UCSD
Frédo Durand, MIT CSAIL

ACM Transactions on Graphics, 2008 (proceedings of the ACM SIGGRAPH conference)

Official Paper Webpage

Here we provide the data used in the SIGGRAPH 2008 paper "Hair Photobooth: Geometric and Photometric Acquisition of Real Hairstyles." Please cite our paper and acknowledge the source when you use this data. If you have any questions, please contact the authors of the paper. Thank you for using our hair dataset!

Geometric Data

We provide the reconstructed hair data and a head model that fits the hair for each of the four hairstyles presented in the paper. The head model is stored as a triangle mesh in standard Wavefront OBJ format.
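For reference, the head model can be loaded with a minimal OBJ reader along these lines. This sketch is not part of the released code; it handles only "v" and triangular "f" records and ignores normals and texture coordinates:

```cpp
#include <cstdlib>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { int a, b, c; };   // 0-based vertex indices

// Minimal Wavefront OBJ reader: keeps "v" (vertex) and triangular
// "f" (face) records only; all other record types are skipped.
bool load_obj(std::istream &in, std::vector<Vec3> &verts, std::vector<Tri> &tris)
{
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "v") {
            Vec3 v;
            ls >> v.x >> v.y >> v.z;
            verts.push_back(v);
        } else if (tag == "f") {
            // OBJ indices are 1-based; atoi() also handles "1/1/1"-style tokens.
            int idx[3];
            for (int k = 0; k < 3; k++) {
                std::string tok;
                ls >> tok;
                idx[k] = std::atoi(tok.c_str()) - 1;
            }
            Tri t = { idx[0], idx[1], idx[2] };
            tris.push_back(t);
        }
    }
    return !verts.empty();
}
```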

Strand Data Format

The hair strand data is a binary stream of integers and floating-point values, each stored on 4 bytes. A hairstyle is made of strands, a strand is made of vertices, and a vertex is made of (x,y,z) coordinates. The layout is:
    [INT: number of strands]

      [INT: number of vertices in strand 1]

        [FLOAT: x1] [FLOAT: y1] [FLOAT: z1]
        [FLOAT: x2] [FLOAT: y2] [FLOAT: z2]
        ...

      [INT: number of vertices in strand 2]

        [FLOAT: x1] [FLOAT: y1] [FLOAT: z1]
        [FLOAT: x2] [FLOAT: y2] [FLOAT: z2]
        ...

      ...

The following C++ code reads this file format. We assume that there is a type implementing a 3D vector (in our case Geometry::Vec3f).

#include <deque>
#include <fstream>
#include <iostream>

using namespace std;

std::deque<Geometry::Vec3f> vertices;        /**< Hair vertices */
std::deque<std::deque<int> > lines;          /**< Hair strands */

void load_hairstyle( const char *hairFile )
{
    ifstream in;
    in.open(hairFile, ios_base::binary);

    unsigned int nStrands;
    int currentVertex = 0;
    in.read((char *)&nStrands, sizeof(unsigned int));
    cout << "Reading " << nStrands << " strands." << endl;

    for (unsigned int strand = 0; strand < nStrands; strand++) {
        unsigned int numVertices;
        in.read((char *)&numVertices, sizeof(unsigned int));

        // For each strand, first read all of the vertices
        for (unsigned int i = 0; i < numVertices; i++) {
            float x, y, z;
            in.read((char *)&x, sizeof(float));
            in.read((char *)&y, sizeof(float));
            in.read((char *)&z, sizeof(float));

            Geometry::Vec3f vertex(x, y, z);
            vertices.push_back(vertex);
        }

        // Then store a line containing all of those vertices
        deque<int> line;
        for (unsigned int vertex = 0; vertex < numVertices; vertex++) {
            line.push_back(currentVertex++);
        }
        lines.push_back(line);
    }

    in.close();
}

Photometric Data

Each dataset contains two capture sequences, with the following images:

  1. 64 Black images: calibration images taken with no lighting.
     Filename format, where ## represents the camera number (0--15):
     Capture Sequence 0 (32 images): black_0_0_##.png, black_0_1_##.png
     Capture Sequence 1 (32 images): black_1_0_##.png, black_1_1_##.png
  2. 4800 Hair images: images from each camera taken with each light turned on in succession.
     Filename format, where ### represents the light number (0--149) and ## represents the camera number (0--15):
     Capture Sequence 0 (2400 images): refl0_###_##.png
     Capture Sequence 1 (2400 images): refl1_###_##.png
The files are split into 5 parts each. In order to combine them into a single tar archive, use the command:
$ cat refl-name_aa refl-name_ab refl-name_ac refl-name_ad refl-name_ae > refl-name.tar
where "name" is the name of the dataset.
Each image was taken at 1300x1030 resolution.
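Assuming the light and camera numbers in the filenames are zero-padded to three and two digits respectively (as the ### and ## patterns suggest), the name of any hair image can be built like this:

```cpp
#include <cstdio>
#include <string>

// Build a hair-image filename from the scheme above.
// sequence: 0 or 1, light: 0..149, camera: 0..15.
// Zero-padding widths are inferred from the ### / ## patterns.
std::string hair_image_name(int sequence, int light, int camera)
{
    char buf[32];
    std::snprintf(buf, sizeof(buf), "refl%d_%03d_%02d.png",
                  sequence, light, camera);
    return std::string(buf);
}
```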

Straight Hair
Tangled Hair
Puffy Hair
Wavy Hair

Calibration Information

  1. Dome calibration
    We provide the 3D locations of the cameras and lights, and the projection matrices of the cameras, for all datasets.
  2. Color calibration
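
To illustrate how the dome calibration can be used: a 3x4 projection matrix P maps a 3D point to pixel coordinates via [u v w]^T = P [x y z 1]^T, pixel = (u/w, v/w). The row-major layout and pixel conventions in this sketch are assumptions that should be checked against the provided files:

```cpp
// Project a 3D point with a 3x4 projection matrix P, stored row-major
// as 12 doubles. Assumed convention: [u v w]^T = P [x y z 1]^T,
// pixel = (u/w, v/w); verify against the calibration data.
void project_point(const double P[12], double x, double y, double z,
                   double &px, double &py)
{
    double u = P[0]*x + P[1]*y + P[2] *z + P[3];
    double v = P[4]*x + P[5]*y + P[6] *z + P[7];
    double w = P[8]*x + P[9]*y + P[10]*z + P[11];
    px = u / w;
    py = v / w;
}
```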


We thank Janet McAndless for her help during the acquisition sessions, all the subjects who participated in the project, Peter Sand for the 2D tracking software, Tim Weyrich for helping with the calibration, John Barnwell for his discussions on hardware issues, MERL for providing the acquisition dome, the MIT pre-reviewers, and the anonymous SIGGRAPH reviewers. This work was supported by an NSF CAREER award 0447561. Frédo Durand acknowledges a Microsoft Research New Faculty Fellowship, a Sloan Fellowship, and a generous gift from Adobe.

Last update: October 1, 2008