Tutorial 4: Ink Detection
This tutorial gives a high-level overview of our current best methods for ink detection, followed by a more hands-on tutorial in the form of a Kaggle notebook.
Ink detection is the task of taking data from a 3D X-ray scan around a papyrus surface, and identifying the locations of the inked parts of the papyrus.
This is where one of the difficulties of the Herculaneum Papyri comes in: the ink seems to be radiolucent, making it hard to detect on 3D X-ray scans.
- Campfire & En-Gedi scrolls: Ink shows up as brighter voxels in 3D X-ray scans, so ink detection can be done by simply taking the brightest voxel in some region of interest (see the sketch after this list).
- Herculaneum scrolls & fragments: Ink does not seem directly visible in 3D X-ray scans, but there does seem to be data there, since machine learning models can detect it.
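For the first case, a reasonable baseline is a maximum-intensity projection followed by a threshold. Here is a minimal sketch in NumPy; the array layout, function name, and threshold value are illustrative assumptions rather than an actual pipeline:

```python
import numpy as np

# volume: 3D X-ray scan as a (depth, y, x) NumPy array, with the papyrus
# surface roughly parallel to the (y, x) plane (illustrative assumption).
def brightest_voxel_ink_map(volume: np.ndarray, threshold: float) -> np.ndarray:
    """Collapse the depth axis by taking the brightest voxel per (y, x)
    position, then threshold to get a binary ink map."""
    max_projection = volume.max(axis=0)   # brightest voxel along the depth axis
    return max_projection > threshold     # True where ink is likely

# Example with random data, just to show the shapes involved:
vol = np.random.rand(20, 128, 128)
ink = brightest_voxel_ink_map(vol, threshold=0.95)
print(ink.shape, ink.dtype)               # (128, 128) bool
```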
In the video below Dr. Seales talks about how ink detection works in the Herculaneum scrolls, and how it came about:
There is hope: not only can machine learning models detect the ink, on occasion we can see the ink directly in the 3D X-ray volumes. Here are some examples, with slices from the 3D surface volumes on the left, and infrared photos showing ink on the right (from the recent data paper):


You have to look closely, but the shapes are visible!
So it seems reasonable that the machine learning models can see patterns like this in more places. We train our models on detached fragments, since we have ground truth data in the form of actual visible ink. The idea is to then apply those models to the inside layers of the intact scrolls.
At a high level, training on a fragment works like this:

From a fragment (a) we obtain a 3D volume (b), from which we segment a mesh (c), around which we sample a surface volume (d). We also take an infrared photo (e) of the fragment, which we align (f) with the surface volume, and then manually turn into a binary label image (g).
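In practice, the surface volume (d) is a stack of grayscale slice images, and the label image (g) is a single binary mask aligned to it pixel-for-pixel. The file layout below is a hypothetical example for illustration only (the real layout is described in the data paper), but it shows how the two relate:

```python
import glob
import numpy as np
from PIL import Image

# Hypothetical paths; adjust to wherever the fragment data actually lives.
slice_paths = sorted(glob.glob("fragment1/surface_volume/*.tif"))

# Stack the slices into a (depth, height, width) surface volume.
surface_volume = np.stack([np.array(Image.open(p)) for p in slice_paths])

# Binary ink labels, manually drawn from the aligned infrared photo.
ink_labels = np.array(Image.open("fragment1/inklabels.png")) > 0

# Every (y, x) pixel in the label image corresponds to the same (y, x)
# column of voxels through the surface volume.
assert surface_volume.shape[1:] == ink_labels.shape
```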
We train a model by picking a pixel in the binary label image and sampling a subvolume around the same coordinates from the surface volume. We then compute a loss against the known label and backpropagate it to update the model weights:
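Concretely, each training example pairs a small subvolume with the label of the pixel at its centre. The sketch below shows one such training step in PyTorch; the tiny 3D CNN, the subvolume size, and the variable names are illustrative assumptions, not the ink-id implementation:

```python
import numpy as np
import torch
import torch.nn as nn

HALF = 16  # subvolume half-width in y and x (illustrative choice)

# A deliberately tiny 3D CNN that maps a subvolume to a single ink logit.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(8, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(surface_volume: np.ndarray, ink_labels: np.ndarray, y: int, x: int) -> float:
    """Sample a subvolume centred on (y, x), compare the model's prediction
    with the known label at that pixel, and backpropagate."""
    sub = surface_volume[:, y - HALF:y + HALF, x - HALF:x + HALF]   # (depth, 32, 32)
    sub = torch.from_numpy(sub).float()[None, None]                 # (1, 1, depth, 32, 32)
    label = torch.tensor([[float(ink_labels[y, x])]])               # (1, 1)

    optimizer.zero_grad()
    loss = loss_fn(model(sub), label)
    loss.backward()    # backpropagate the known label
    optimizer.step()   # update the model weights
    return loss.item()
```

In a real training loop you would sample many such (y, x) positions per batch, balance ink and non-ink pixels, and hold out part of the fragment for validation.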
We can then use the model to predict what a label image would look like for input data it was not trained on.
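Prediction repeats the same subvolume sampling over every pixel of a new fragment and writes the model's output into an empty image. A sketch, reusing `model` and `HALF` from the training example above (a real pipeline would batch the subvolumes and skip off-papyrus pixels):

```python
import numpy as np
import torch

@torch.no_grad()
def predict_label_image(surface_volume: np.ndarray, stride: int = 4) -> np.ndarray:
    """Slide over the fragment and fill a predicted ink-probability image."""
    depth, height, width = surface_volume.shape
    prediction = np.zeros((height, width), dtype=np.float32)
    for y in range(HALF, height - HALF, stride):
        for x in range(HALF, width - HALF, stride):
            sub = surface_volume[:, y - HALF:y + HALF, x - HALF:x + HALF]
            sub = torch.from_numpy(sub).float()[None, None]
            prediction[y, x] = torch.sigmoid(model(sub)).item()
    return prediction
```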
Of course, in reality the label image on the right doesn’t come out perfectly. The current state of the art is Stephen Parsons’ ink-id program, which produces outputs like this (showing different training epochs in k-fold training/prediction):
When running ink-id on all the public fragments, the results look like this (prediction left, infrared right):
As you can see, some letters are clearly visible, others not at all, and many are somewhere in between. The Ink Detection Progress Prize is all about creating the best possible machine learning model for detecting ink.
All fragments also have “hidden layers”: pieces of papyrus that are fused to the backs of the fragments. Running the machine learning model on those reveals some previously unseen letters:



So how can a machine learning model detect ink? In the electron microscope images below (from the paper *From invisibility to readability: Recovering the ink of Herculaneum*), you can clearly see the difference between the inked and non-inked regions. We suspect that machine learning models are able to learn some of these features from the 3D X-ray scans.

There is still a lot of room for improvement, for example in:
- Model performance: getting more letters to be legible!
- Applying these models to the full scrolls.
- Reverse engineering the models to better understand the kind of patterns they are using to detect ink.
- Creating more ground truth data (e.g. “campfire scrolls”).
Now let’s create a model! This part of the tutorial is over on Kaggle as a notebook.