Geert J. Verhoeven

PhD Archaeology



University of Vienna

Franz-Klein-Gasse 1
Room A5.04 (5th floor)
1190 Vienna
Austria



From deposit to point cloud – A study of low-cost computer vision approaches for the straightforward documentation of archaeological excavations


Conference paper


Michael Doneus, Geert J. Verhoeven, Martin Fera, Christian Briese, Matthias Kucera, Wolfgang Neubauer
Proceedings of the XXIIIrd International CIPA Symposium, CIPA, 2011


Cite

APA
Doneus, M., Verhoeven, G. J., Fera, M., Briese, C., Kucera, M., & Neubauer, W. (2011). From deposit to point cloud – A study of low-cost computer vision approaches for the straightforward documentation of archaeological excavations. In Proceedings of the XXIIIrd International CIPA Symposium. CIPA.


Chicago/Turabian
Doneus, Michael, Geert J. Verhoeven, Martin Fera, Christian Briese, Matthias Kucera, and Wolfgang Neubauer. “From Deposit to Point Cloud – A Study of Low-Cost Computer Vision Approaches for the Straightforward Documentation of Archaeological Excavations.” In Proceedings of the XXIIIrd International CIPA Symposium. CIPA, 2011.


MLA
Doneus, Michael, et al. “From Deposit to Point Cloud – A Study of Low-Cost Computer Vision Approaches for the Straightforward Documentation of Archaeological Excavations.” Proceedings of the XXIIIrd International CIPA Symposium, CIPA, 2011.


BibTeX

@inproceedings{doneus2011a,
  title = {From deposit to point cloud – A study of low-cost computer vision approaches for the straightforward documentation of archaeological excavations},
  year = {2011},
  publisher = {CIPA},
  author = {Doneus, Michael and Verhoeven, Geert J. and Fera, Martin and Briese, Christian and Kucera, Matthias and Neubauer, Wolfgang},
  booktitle = {Proceedings of the XXIIIrd International CIPA Symposium}
}

Abstract
Stratigraphic archaeological excavations demand high-resolution documentation techniques for 3D recording. Today, this is typically accomplished using total stations or terrestrial laser scanners. This paper demonstrates the potential of another technique that is low-cost and easy to execute. It takes advantage of software using Structure from Motion (SfM) algorithms, which are known for their ability to reconstruct camera pose and three-dimensional scene geometry (rendered as a sparse point cloud) from a series of overlapping photographs captured by a camera moving around the scene. When complemented by stereo matching algorithms, detailed 3D surface models can be built from such relatively oriented photo collections in a fully automated way. The absolute orientation of the model can be derived from the manual measurement of control points. The approach is extremely flexible and can deal with a wide variety of imagery, because it also works with photographs taken by a randomly moving camera (i.e. uncontrolled conditions) and does not require calibrated optics.
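
As a rough illustration of the relative orientation step described above, the sketch below reconstructs a sparse point cloud from two overlapping photographs. It assumes OpenCV and NumPy are available; the file names, approximate camera intrinsics and matching thresholds are illustrative guesses, not the software or settings actually used in the paper.

import cv2
import numpy as np

# Two overlapping photographs of the deposit (hypothetical file names).
img1 = cv2.imread("photo_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe local features in both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep unambiguous matches (Lowe's ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Approximate intrinsics: focal length roughly the image width, principal
# point at the image centre -- a guess, since calibrated optics are not required.
h, w = img1.shape
K = np.array([[w, 0, w / 2],
              [0, w, h / 2],
              [0, 0, 1]], dtype=np.float64)

# Relative camera pose from the essential matrix (RANSAC rejects outliers).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate a sparse point cloud in an arbitrary model coordinate frame.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
sparse_cloud = (pts4d[:3] / pts4d[3]).T   # N x 3 sparse points

The absolute orientation mentioned in the abstract would then be an additional step, for instance estimating a similarity (Helmert) transformation between these model coordinates and the manually measured control points.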
For a few years now, these algorithms have been embedded in several free and low-cost software packages. This paper will outline how such a program can be applied to map archaeological excavations in a very fast and uncomplicated way, using imagery shot with a standard compact digital camera (even if the images were not taken for this purpose). Archived data from previous excavations of VIAS-University of Vienna was chosen, and the derived digital surface models and orthophotos were examined for their usefulness in archaeological applications. The absolute georeferencing of the resulting surface models was performed through the manual identification of fourteen control points. In order to express the positional accuracy of the generated 3D surface models, the NSSDA guidelines were applied. Simultaneously acquired terrestrial laser scanning data – which had been processed in our standard workflow – was used to independently check the results. The vertical accuracy of the surface models generated by SfM was found to be within 0.04 m at the 95 % confidence level, while several visual assessments indicated a very high horizontal positional accuracy as well.
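
The NSSDA vertical accuracy statistic referred to above reduces to 1.9600 times the root-mean-square error of the elevation residuals at independent check points. Below is a minimal sketch of that computation, assuming NumPy; the check-point values are hypothetical and only illustrate the arithmetic, not the paper's results.

import numpy as np

def nssda_vertical_accuracy(z_model, z_check):
    # z_model: elevations sampled from the SfM surface model at check points
    # z_check: independent reference elevations (e.g. from the laser scan)
    residuals = np.asarray(z_model) - np.asarray(z_check)
    rmse_z = np.sqrt(np.mean(residuals ** 2))
    # NSSDA: vertical accuracy at the 95 % confidence level = 1.9600 * RMSE_z
    # (assumes normally distributed errors without systematic bias).
    return 1.9600 * rmse_z

# Hypothetical check-point elevations, purely to show the calculation:
z_sfm   = [101.02, 100.48, 99.87, 100.15]
z_laser = [101.00, 100.50, 99.85, 100.17]
print(f"NSSDA vertical accuracy: {nssda_vertical_accuracy(z_sfm, z_laser):.3f} m")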
