
Tutorial 3: Segmentation and Flattening

As we saw in the "Scanning" tutorial, it’s quite hard to extract useful information out of a “word soup”, even when the ink is quite clear. For this tutorial we’ll continue with the campfire scroll and show how to use virtual unwrapping to produce a flattened image which shows the content clearly.

Two key steps to virtually unwrapping a scroll or manuscript are segmenting a surface from inside the 3D volume and flattening that surface to 2D. The video below shows the idea quite well; or check out the full version (it was made by Dr. Seales’s son and daughter!). The red line during the reconstruction phase represents the surface that we want to virtually unwrap.

The basic principle of “virtual unwrapping” (source)

To perform segmentation and flattening, we'll use Volume Cartographer, a virtual unwrapping toolkit built by EduceLab’s Seth Parker. Volume Cartographer is designed to create meshes along surfaces of a manuscript (e.g. pages or scroll wraps) and then sample the voxels around these meshes to create a 2D image of the manuscript's contents. Volume Cartographer includes many tools and utilities. In this tutorial we’ll be looking at the main VC GUI as well as the vc_render tool.

Tip: We’re using the original version of Volume Cartographer in this tutorial. For the latest version, see the Community Projects page or ask in Discord.

First, let's install it:

OS-specific instructions (Windows)
  1. Install the VcXsrv Windows X Server or a similar X server (if you use the Chocolatey package manager: choco install vcxsrv).
  2. Run “XLaunch” from the Start Menu, or from “C:\Program Files\VcXsrv\xlaunch.exe”.
  3. Use the default settings, except check “Disable access control”, so that the Docker container is allowed to connect.
  4. Check that the X Server is running in the tray.
  5. Install Docker Desktop, then restart your computer.
  6. Put the extracted campfire directory in C:\ (or update the path in the Docker command below).
  7. Then run:
docker run -it -v C:\campfire:/campfire --env="DISPLAY=host.docker.internal:0" ghcr.io/educelab/volume-cartographer

Creating a .volpkg

Volume Cartographer works on .volpkg directories, a custom format used only by Volume Cartographer. Let’s create one using the vc_packager tool, by feeding it the tomographically reconstructed volume (represented as a .tif stack) of the campfire scroll:

vc_packager -v campfire.volpkg -m 1000 -n campfire -u 26.3 -s ~/campfire/rec/ # Or /campfire/rec when using Docker

The material-thickness flag -m is an estimate of the papyrus thickness in microns, which helps Volume Cartographer's tools automatically choose good default parameters. Give the volume a descriptive name with the flag -n and set the voxel size to 26.3 µm with the flag -u. Although these flags are required, don't worry too much about the values: the defaults they produce can always be overridden later.

The output will look like this:

Reading the slice directory...
Slice images found: 477
Analyzing slices [■■■■■■■■■■■■■■■■■■■■■■] 100% [00m:00s<00m:00s] 477/477
Saving to volpkg [■■■■■■■■■■■■■■■■■■■■■■] 100% [00m:02s<00m:00s] 477/477
You now have a campfire.volpkg with 4 items in it:
  • /config.json: Contains some metadata.
  • /paths: Currently empty; this is where raw mesh points will be stored once we create our first segment, which we’ll do shortly.
  • /renders: Also empty: this is where any rendering data will be stored.
  • /volumes: Contains a single directory with the processed .tif image stack.
    • /volumes/<id>/meta.json: Contains metadata about the processed image stack.

Note that all these directories must be present for Volume Cartographer to work, even if they are empty. This is useful to know if you check these directories into git, since git by default does not keep empty directories. In that case, it's a good idea to put a .gitkeep file in each empty directory (a common convention).
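A small one-off script (a sketch; adjust the path to your own .volpkg) can create those placeholder files:

from pathlib import Path

# Touch a .gitkeep file in every empty directory under the .volpkg.
for d in Path("campfire.volpkg").rglob("*"):
    if d.is_dir() and not any(d.iterdir()):
        (d / ".gitkeep").touch()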

Creating a new segment

We will use the main VC GUI app to perform segmentation of the campfire scroll: finding a surface of papyrus and exporting it as a 3D mesh.

Segmentation: finding a surface of papyrus.

Run VC on the command line to open the GUI app (or, if you installed via Homebrew on macOS, open /Applications/VC.app).

Now open the campfire.volpkg directory using “File > Open volpkg”. You should see something like this:

Play around a bit with this interface! You can zoom using the buttons, and go to the previous/next “slice” (traversing the z-axis).

Now let’s create our first segment (raw mesh):
  • Zoom in on the top left, until you can’t zoom any further.
  • Click “New” in the menu on the right. This creates a new segment with a unique ID (the current timestamp). You'll need this ID later!
  • Click “Pen Tool” at the top.
  • Mark points along the top-left strand of papyrus, as shown in the video below.
  • Click “Pen Tool” again to finish.

Note that your colors might be different, depending on your version of Volume Cartographer. You can customize the colors using the color picker at the bottom. We will use bright red dots for visibility.

You have now created the beginning of a mesh: a line on the slice z=0. The line exists only on that slice, which you can check by clicking “Next Slice” and watching it disappear. Click “Previous Slice” to go back to 0.

We can now automatically create more such lines for all z values from 0 to, say, 100:
  • Be sure you’re back at layer 0.
  • Click “Segmentation Tool”.
  • Set “Maxima Window Width” to “10”.
  • Set “Ending Slice” to “100”.
  • Click “Start”.

This runs an automatic segmentation algorithm which attempts to follow the tracked layer across slices, moving “up” the z-axis. Conceptually, it works something like this:
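Here is a toy Python sketch of the core idea (Volume Cartographer’s real algorithm is considerably more sophisticated; everything below is made up for illustration). Each point marked on one slice is carried to the next slice and snapped to the brightest nearby voxel, with a search window loosely analogous to the “Maxima Window Width” setting above:

import numpy as np

def track_layer(volume, points, z_start, z_end, window=10):
    """Carry each point to the next slice, snapping it to the brightest
    voxel within `window` pixels of its previous x position."""
    curve = {z_start: list(points)}
    for z in range(z_start + 1, z_end + 1):
        moved = []
        for (x, y) in curve[z - 1]:
            lo = max(0, x - window)
            strip = volume[z, y, lo : x + window + 1]  # intensities near the old x
            moved.append((lo + int(np.argmax(strip)), y))
        curve[z] = moved
    return curve  # slice -> list of (x, y) points tracing the surface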

Now click “Next Slice” a bunch of times (or hold Shift while clicking to advance by 10 slices) to see how well the algorithm aligned your segment with the papyrus, as shown below.

If you notice a discrepancy where the points of your segment are no longer aligned with the layer, you can edit the points in “Segmentation Tool” and relaunch the segmentation algorithm from that point going forward.

Let’s practice editing the line to match more closely with the papyrus:
  • Click “Segmentation Tool”.
  • Carefully click and drag on the points you want to move, as shown in the video below.
  • By default, a neighborhood of points is moved when you click and drag. To adjust the size of the neighborhood, adjust the “Impact Range” slider in the bottom right.
  • Set “Ending Slice” to “100”.
  • Click “Start”.

You created your first segment! Be sure to save it using “File > Save volpkg”. The point cloud for this segment is stored in campfire.volpkg/paths/<your-segment-id>/pointset.vcps. If you want to view the point cloud in MeshLab, you can convert it to an OBJ file using vc_convert_pointset:

vc_convert_pointset -i campfire.volpkg/paths/<your-segment-id>/pointset.vcps -o points.obj

Rendering the segment

In order to see the content on the surface of our segment, we need to flatten and texture the segment using the vc_render tool. While the output from vc_render is considered a "final result" that can be placed anywhere, our convention is to place all results in a working directory within the .volpkg so that all data is kept together:

mkdir campfire.volpkg/working
cd campfire.volpkg/working

Now run vc_render with the following arguments, substituting <your-segment-id>:

# From within campfire.volpkg/working
vc_render -v ../ -s <your-segment-id> -o out.obj --output-ppm out.ppm

If you don't remember your segment's ID, you can use vc_volpkg_explorer to list all segment IDs (or simply run ls ../paths). Alternatively, type ../paths/2023, hit Tab to autocomplete, and then remove the ../paths/ prefix (e.g. -s $(basename ../paths/20230315130225)).

Note that vc_render can sometimes throw an error or a segmentation fault. When this happens, the first thing to try is simply rerunning the program, because there is a non-deterministic bug in vc_render.

Behind the scenes, vc_render will flatten the 3D surface, attempt to detect the object's ink inside the volume data, and output the result as a virtually unwrapped image. You should get 4 files in the working directory: .mtl, .obj, .tif, and our explicitly specified out.ppm.

  • out.obj is a “3D mesh object” file, which we can render in MeshLab, just like in “Tutorial 2: Scanning”.
  • out.mtl is a “material file”, which tells MeshLab and other 3D viewers how to display the OBJ mesh.
  • out.tif is a "texture image" for the mesh. In this case, it is the flattened, virtually unwrapped result for our segment!
  • out.ppm is a "per-pixel map" which lets us map from 2D to 3D more easily. We'll use it more in a bit.
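To build intuition, here is a hypothetical sketch of what a per-pixel map conceptually contains. The real .ppm is a binary format with its own header; the plain array layout below is an assumption made purely for illustration:

import numpy as np

# Hypothetical layout: for every pixel (u, v) of the flattened image, six
# numbers: the 3D voxel position it was sampled from, plus the surface
# normal of the segment at that point.
height, width = 100, 200              # size of the flattened image
ppm = np.zeros((height, width, 6))    # x, y, z, nx, ny, nz per pixel

x, y, z, nx, ny, nz = ppm[50, 120]    # where in the scroll does pixel (120, 50) live?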

Let’s look at the unwrapped .tif first, which should look something like this:

If you look back to the last page of the campfire scroll (before carbonization), can you see which area of the scroll this segment came from?

Now open the .obj file in MeshLab to view the 3D structure of the mesh:

The mesh uses the .tif file as a texture, so you can actually see the 3D placement of the metallic ink.

If you additionally load the full mesh we created at the end of “Tutorial 2”, you can more clearly see the location of this segment with respect to the rest of the scroll. For this to work, you might need to generate a full mesh again using Fiji (just like in "Tutorial 2: Scanning"), but this time using the image stack in the campfire.volpkg/volumes directory, since vc_packager might have done some transformations to the slices.

Ink visibility

Depending on what surface you chose to segment, you might notice that some of the scroll's ink shows up quite clearly in the texture images but other inks do not. What gives?

The answer to this question is that not all inks have the same radiodensity. Some inks, like iron gall, show up quite clearly in CT scans because they absorb more x-rays than the papyrus on which they sit. This creates high contrast between the bright iron gall ink voxels and the less bright papyrus voxels. Carbon-based inks, on the other hand, have a very similar radiodensity to papyrus and thus have low contrast when compared against the papyrus voxels. More often than not, the contrast is so low for carbon ink that it is impossible to differentiate the ink from the papyrus when looking at the volume data with the naked eye.

As we will discuss in "Tutorial 5", this does not mean that the ink is invisible or undetectable. In fact, we know these inks often can be detected with machine learning, and that's what we're all here to do! Before we do that, though, let's look a little closer at how vc_render works and simplify our dataset down to only what's needed.

Surface volumes

When looking for ink in the volume, vc_render looks at more than just the voxels that directly intersect our segment mesh. It also looks a little bit “above” and “below” the mesh, at the neighborhood of voxels that surround our segment. Conceptually, this neighborhood looks something like this (though the video below exaggerates it):

Building a neighborhood of voxels around our segment. Everything inside the neighborhood is used for texturing.

To generate a single .tif file, vc_render searches through this neighborhood, looking for ink, and places the results of that search in the flattened output image. When developing new ink detection methods (as we will be doing in this contest) we also need to have access to this neighborhood so that we can perform our own search for ink. However, we don't want to have to load the full scroll volume since we only need to use a small portion of it. Ideally, we would have a new volume, a “surface volume,” which contains only those voxels that are relevant to the ink detection task. Fortunately, Volume Cartographer gives us a way to do just that!

The out.ppm file that we generated with vc_render contains a mapping between our flattened output image and the original 3D surface. With this file, we can transform the 3D neighborhood into a simplified surface volume. Conceptually, that process looks something like this:

Flattening of the subvolume.
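Here is a toy sketch of that process, reusing the hypothetical ppm array from above. This illustrates the idea only, not what vc_layers_from_ppm literally does; bounds checks and interpolation are omitted for clarity:

import numpy as np

def surface_volume(volume, ppm, radius=32):
    """Sample 2 * radius + 1 layers of voxels around the segment surface,
    stepping along each pixel's normal."""
    height, width, _ = ppm.shape
    layers = np.zeros((2 * radius + 1, height, width), dtype=volume.dtype)
    for v in range(height):
        for u in range(width):
            x, y, z, nx, ny, nz = ppm[v, u]
            for i, offset in enumerate(range(-radius, radius + 1)):
                # step `offset` voxels along the surface normal
                px, py, pz = (int(round(c + offset * n))
                              for c, n in ((x, nx), (y, ny), (z, nz)))
                layers[i, v, u] = volume[pz, py, px]
    return layers  # one 2D image per offset: the surface volume stack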

To actually generate the surface volume, we need to run one more program:

# From within campfire.volpkg/working
mkdir layers
vc_layers_from_ppm -v ../ -p out.ppm --output-dir layers/

Now in the layers directory you’ll find another image stack! You can open this in Fiji again, which should then look something like this:

We can look at this with the Volume Viewer plugin:

Flattening and render options

Our segment mesh is 3D, but our output .tif image is 2D, as are each of our images in the “surface volume” image stack. How does this mapping actually work?

As mentioned in the first tutorial, this process is called “flattening”. Similar to creating a 2D map of the Earth, there are many methods for converting the 3D mesh of our segment into a flat surface, and each method preserves certain aspects of the 3D shape at the expense of others. Volume Cartographer currently provides three flattening methods:

  • Least-squares conformal maps (LSCM): An angle-preserving method which tries to minimize the error between the mesh's 2D and 3D angles in a least-squares sense. Works well for surfaces with low curvature, but can often result in 2D triangles that are many times larger than they should be.
  • Angle-based Flattening (ABF): An angle-preserving method which additionally uses the error in a triangle's neighbors to reduce area distortion. Much less sensitive to high curvature than LSCM, and is the current default in vc_render.
  • Orthographic projection: A method that is similar to taking a photograph of the 3D mesh, but without the effects of perspective: objects that are further from the camera are not drawn smaller than objects that are closer. Not a good option for scrolls or surfaces that wrap onto themselves, but can work well for surfaces that are already semi-flat (e.g. fragments, pages of books); see the toy sketch after this list.
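To make the orthographic case concrete, here is a toy sketch with stand-in vertices (a real mesh would of course come from your segment):

import numpy as np

vertices = np.random.rand(100, 3)   # stand-in for the segment's mesh vertices
uv = vertices[:, :2]                # project onto the xy-plane: keep (x, y), drop z
# No perspective: a point twice as far from the "camera" is not drawn smaller.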

You can select between the different flattening methods using the --uv-algorithm flag in vc_render:

Flattening & UV Options:
  --uv-algorithm arg (=0)   Select the flattening algorithm:
                              0 = ABF
                              1 = LSCM
                              2 = Orthographic Projection

There are lots of other options, too, which you can see by running vc_render --help. For example, you can select different texturing methods, instead of just picking the maximum pixel value. You can also run your own texturing methods on the “surface volume” image stack directly. In a sense, the separate “Ink Detection” machine learning step can be seen as a particularly fancy texturing method!
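For instance, here is a minimal sketch of a do-it-yourself maximum-intensity texture over the layers stack from the previous section (assuming Python with numpy and tifffile, and that the stack was written as .tif slices):

import glob
import numpy as np
import tifffile

# Load the surface volume produced by vc_layers_from_ppm...
stack = np.stack([tifffile.imread(p)
                  for p in sorted(glob.glob("layers/*.tif"))])

# ...and keep, for every pixel, the brightest voxel along its normal.
texture = stack.max(axis=0)
tifffile.imwrite("texture_max.tif", texture)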

Try experimenting with the different options and see what happens.

The tools used in this tutorial have been updated throughout the Vesuvius Challenge by EduceLab and our community, and the changes are sure to continue.

A different approach to segmentation is Thaumato Anakalyptor by Julian Schilliger. A high-level explanation of how it segments is given in the next tutorial: “Segmentation for Fishermen”.

Check out The Segmenter’s Guide to Volume Cartographer (for contractors) and check in on Discord to catch up on the state of the art.