
Community Projects

Since launching the Vesuvius Challenge, our community has generated an incredible set of tools to help with various parts of the pipeline.

Efforts can generally be grouped into two categories: segmentation and ink detection.

Segmentation

(Image: Some segments from Scroll 1)

Segmentation is the mapping of sheets of papyrus (“segments”) in a 3D X-ray volume. See Tutorial 3. The community has built various tools to do this.
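To make the idea concrete, here is a minimal sketch (not any tool's actual file format) of what a "segment" amounts to: a 2D grid of 3D voxel coordinates tracing one sheet of papyrus through the scanned volume. The shapes and the sinusoidal surface are illustrative stand-ins.

```python
import numpy as np

# Toy representation of a segment: a (u, v) grid of 3D voxel
# coordinates tracing one sheet of papyrus through the volume.
# (Real tools store something conceptually similar; the exact
# on-disk format is not reproduced here.)
u, v = np.meshgrid(np.arange(100), np.arange(80), indexing="ij")
segment = np.stack([
    u * 1.0,                    # x follows one surface direction
    v * 1.0,                    # y follows the other
    50 + 5 * np.sin(u / 10.0),  # z undulates like a rolled sheet
], axis=-1)                     # shape (100, 80, 3)

# Sampling the scanned volume at these coordinates (plus a few voxels
# on either side of the surface) yields the flattened "surface volume"
# images used downstream for ink detection.
print(segment.shape)  # → (100, 80, 3)
```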

We also have a hired segmentation team using these tools to generate segments. These are available for anyone to download. See Data -> Segments for more information.

What are people working on?

Next milestone: segmenting all the papyrus sheets of all the scrolls. For this we will need progress in areas such as:
  • Segmenting the latest scans
  • Better tools for high-accuracy segmentation or correcting
  • Autosegmentation (minimal human input)
  • Merging of segments
  • Visualizing segments both in 3D and in flattened form

Volume Cartographer

Original Github repo / Faster version by Julian (@RICHI) / Even more features by Philip (@spacegaier)

The main tool that the segmentation team currently uses is Volume Cartographer.

This tool was originally created by EduceLab (in particular Seth Parker) and is now being improved by both EduceLab and the Vesuvius Challenge community, gaining more accurate and faster segmentation algorithms as well as various UI improvements.



Khartes

Github repo / Brett Olsen’s fork

Khartes, created by Chuck (@khartes_chuck on Discord), allows for very precise segmentation, using real-time texturing UIs that show the segment from different angles. It is used on occasion by the segmentation team.

VC Whiteboard and Segment Viewer

VC Whiteboard Github repo / Segment Viewer Github repo

These are used by the segmentation team primarily to see which segments they have worked on already. We host them respectively here and here. The datasets that these live versions pull from are updated about once per month.

Segment Browser

Vesuvius Segment Browser / Github repo


A web-based tool to browse the layers and open-source ink detection results of all released segments.


Code download

Visualizing ink on segments in 3D.


Github repo

A reimplementation of Volume Cartographer in Python by Moshe Levy (@moshelevy on Discord). Not quite feature-complete yet (in particular it’s missing Julian’s improvements), but long term this might be easier to maintain than the C++ codebase of Volume Cartographer.


QuickSegment

Github repo

Created by EduceLab for annotating a large air gap in Scroll 1, and then projecting from that gap to either side to create two large segments, colloquially referred to as the “Monster Segment”. It hasn’t been used for further segmentation, since that was the only large air gap we could find.

QuickSegment has a built-in tutorial, which you can find under “Help” (in the menu bar) => “Tutorial”. Do note that QuickSegment only works on thumbnail volumes (e.g. /volumes_small/ renamed to /volumes/); the resulting mesh must then be scaled up using vc_transform_mesh.
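A rough sketch of that workflow is below. The directory names are stand-ins for a local scroll download, and the exact vc_transform_mesh arguments are deliberately omitted (consult its --help).

```shell
# Expose the thumbnail volumes under the name QuickSegment expects:
scroll=$(mktemp -d)                     # stand-in for a local scroll directory
mkdir "$scroll/volumes_small"           # thumbnail volumes would live here
ln -s "$scroll/volumes_small" "$scroll/volumes"

# ...run QuickSegment against "$scroll" and save a mesh...

# Then scale the resulting mesh back up to full resolution:
# vc_transform_mesh ...   (exact arguments not shown; see its --help)
ls -l "$scroll"
```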

Data transformations

A couple of folks have written data transformations that allow tools to more efficiently load data from the server.

  • Masked slices, by James Darby (@thatGuy on Discord). Versions of the scroll slices with irrelevant data masked out, leading to files about 2x smaller once compression is applied. The tradeoff is that loading the files is a bit slower because of the compression. Used for /volumes_masked/ on the data server.
  • Vesuvius-Build, by Santiago Pelufo (@spelufo on Discord). Used for /volumes_small/ and /volume_grids/ on the data server.
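A toy demonstration of why masking helps, using synthetic data rather than real slices: noisy CT data compresses poorly, but zeroing out the irrelevant region lets a general-purpose compressor (zlib here, as a stand-in for whatever the server applies) shrink the file substantially.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

# Simulate one 16-bit CT slice: noisy values everywhere, which is
# nearly incompressible as-is.
slice_ = rng.integers(20000, 40000, size=(512, 512), dtype=np.uint16)

# Hypothetical papyrus mask: the real masks trace the scroll material;
# here a central square is a stand-in.
mask = np.zeros((512, 512), dtype=bool)
mask[128:384, 128:384] = True

masked = np.where(mask, slice_, np.uint16(0))  # zero out irrelevant voxels

raw_size = len(zlib.compress(slice_.tobytes()))
masked_size = len(zlib.compress(masked.tobytes()))
print(f"full slice: {raw_size} B, masked: {masked_size} B")
```

With three quarters of the slice zeroed out, the masked version compresses to a small fraction of the original, at the cost of a decompression step when loading.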

Viewing tools

Various standard tools exist for viewing 3D data. Some have been augmented to support the file formats that we’re dealing with.

  • Meshlab. Most useful for viewing segments. We introduce how to use this with our data in Tutorial 2 and subsequent tutorials.
  • ImageJ/Fiji. Useful for viewing surface volumes. We introduce how to use this with our data in Tutorial 2 and subsequent tutorials.
  • Blender. Generic 3D program. Adapted for segment viewing by Santiago Pelufo (@spelufo on Discord). Tutorial can be found here and here.
  • ilastik. Generic segmentation toolkit. Adapted by Santiago Pelufo (@spelufo on Discord) for use with our volumes.
  • 3D Slicer. Check out this tutorial by James Darby (@thatGuy on Discord).

Ink Detection

There are two major avenues people have been pursuing for detecting ink in the scrolls, and for building training datasets.

  1. Fragment-based. Training ML models on detached scroll fragments with photographic ground truth, then running them on scroll segments. This was the method used to prove the concept of recovering Herculaneum ink from CT scans, and several prizes were built around it, like the Ink Detection prize on Kaggle. It led to Youssef’s First Letters Prize results.
  2. Scroll-based. Searching the intact scrolls for the “crackle pattern” discovered by Casey Handmer. This led to Luke’s First Letters Prize results.

It may turn out that both fragment data and scroll data can be combined for effective training sets. For now, training directly on scroll data removes the domain shift between fragments and scrolls, and yields stronger results.
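The domain-shift problem can be illustrated with a deliberately simple toy model (not any competitor's actual approach): a threshold "classifier" fit on fragment-like brightness distributions degrades when the scroll data is shifted, even though the ink/no-ink separation itself is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fragment-like training data: ink patches brighter than blank ones.
frag_ink = rng.normal(0.7, 0.1, 500)
frag_blank = rng.normal(0.3, 0.1, 500)
# Scroll-like data: same separation, but globally darker (domain shift).
scroll_ink = rng.normal(0.5, 0.1, 500)
scroll_blank = rng.normal(0.1, 0.1, 500)

# "Train" on fragments: threshold midway between the class means.
threshold = (frag_ink.mean() + frag_blank.mean()) / 2

def accuracy(ink, blank, t):
    """Balanced accuracy of the threshold classifier."""
    return ((ink > t).mean() + (blank <= t).mean()) / 2

print(f"fragments: {accuracy(frag_ink, frag_blank, threshold):.2f}")
print(f"scrolls:   {accuracy(scroll_ink, scroll_blank, threshold):.2f}")
```

The fragment-fit threshold sits right on top of the scroll ink distribution, so accuracy drops sharply; training directly on scroll data sidesteps exactly this mismatch.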

What are people working on?

Next milestone: improving the precision seen in the Grand Prize results, so that we can recover more high-quality passages.

Fragment-based ink detection

(Image: Youssef’s model from the First Letters Prize)

People have pursued a number of different approaches here.

We also ran a fragment-based Ink Detection prize on Kaggle. (Table: the top 10 results)

(Image: Ryan Chesler’s 4µm vs 8µm experiment)

Scroll-based ink detection

Casey Handmer discovered a “crackle pattern” in Scroll 1 that often visually indicates the presence of ink.
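For intuition only, here is a toy texture score, emphatically not Casey Handmer's actual method: inked regions show a distinctive high-frequency texture, so a simple statistic like mean local standard deviation separates a "crackled" synthetic patch from a smooth one.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def crackle_score(patch: np.ndarray, k: int = 5) -> float:
    """Toy texture score: mean standard deviation over k x k windows.

    A stand-in illustration of detecting high-frequency 'crackle'
    texture; real crackle labeling is done visually on the data.
    """
    windows = sliding_window_view(patch.astype(np.float64), (k, k))
    return float(windows.std(axis=(-1, -2)).mean())

rng = np.random.default_rng(42)
smooth = rng.normal(100, 1.0, (64, 64))    # blank papyrus: low local variance
crackled = rng.normal(100, 8.0, (64, 64))  # crackle: high local variance
print(crackle_score(smooth) < crackle_score(crackled))  # → True
```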

Following this discovery, several projects have built on labeling ink directly in the scroll data.

(Image: Luke’s First Letters Prize submission)