
Research Article
New Zero Watermarking Scheme Based on Hyper-catadioptric System Model and Hyperbolic Geometry
@INPROCEEDINGS{10.1007/978-3-031-81573-7_18,
  author={Boureima Koussoube and Moustapha Bikienga and Telesphore Tiendrebeogo and Kodjo Atiampo Armand and Boureima Zerbo},
  title={New Zero Watermarking Scheme Based on Hyper-catadioptric System Model and Hyperbolic Geometry},
  proceedings={Towards new e-Infrastructure and e-Services for Developing Countries. 15th International Conference, AFRICOMM 2023, Bobo-Dioulasso, Burkina Faso, November 23--25, 2023, Proceedings, Part II},
  proceedings_a={AFRICOMM PART 2},
  year={2025},
  month={2},
  keywords={Zero watermarking scheme; Hyper-catadioptric system model; Hyperbolic tree; Cryptographic signature},
  doi={10.1007/978-3-031-81573-7_18}
}
Boureima Koussoube
Moustapha Bikienga
Telesphore Tiendrebeogo
Kodjo Atiampo Armand
Boureima Zerbo
Year: 2025
New Zero Watermarking Scheme Based on Hyper-catadioptric System Model and Hyperbolic Geometry
AFRICOMM PART 2
Springer
DOI: 10.1007/978-3-031-81573-7_18
Abstract
In this paper, we propose a new zero-watermarking scheme for securing DICOM images in a distributed database. The technique introduces no distortion to the images and serves as a means of authenticating them. The database has a hyperbolic structure based on the Poincaré disk model, in which a hyperbolic tree is built; the coordinates of the tree nodes serve as the virtual coordinates of virtual servers. We model the database structure as the image plane of a hyper-catadioptric system. The image is placed in a Euclidean space, and points are then selected and projected according to the projection model of the hyper-catadioptric system. The set of projected image points constitutes our cryptographic signature. Each image point is associated with its nearest node, and each server node stores image points together with a model parameter, the plane equation, and one of its public keys. Using its private key, the receiver can recover the image points and the various parameters, and compute the inverse transform of each image point for comparison. Formal analysis and simulations show that our approach is robust.
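The two geometric ingredients the abstract relies on can be sketched in a few lines. This is a minimal illustration, assuming the standard unified central catadioptric projection (the Geyer-Daniilidis model, where a hyperbolic mirror corresponds to a parameter xi in (0, 1)) and the usual Poincaré-disk distance; the function names, the value of xi, and the test points are illustrative assumptions, not the authors' actual parameters or implementation.

```python
import math

def project(point, xi):
    """Project a 3D point onto the normalized image plane of a
    central catadioptric system; xi in (0, 1) models a hyperbolic
    mirror (illustrative value, not the paper's parameter)."""
    x, y, z = point
    rho = math.sqrt(x * x + y * y + z * z)
    denom = z + xi * rho
    return (x / denom, y / denom)

def unproject(m, xi):
    """Lift an image point back onto the unit sphere: the inverse
    transform the receiver applies, up to the scale lost in
    projection."""
    mx, my = m
    r2 = mx * mx + my * my
    # Scale factor eta placing (eta*mx, eta*my, eta - xi) on the sphere.
    eta = (xi + math.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (1.0 + r2)
    return (eta * mx, eta * my, eta - xi)

def poincare_distance(u, v):
    """Hyperbolic distance between two points of the open unit disk,
    the natural metric for associating an image point with its
    nearest hyperbolic-tree node."""
    du = 1.0 - (u[0] * u[0] + u[1] * u[1])
    dv = 1.0 - (v[0] * v[0] + v[1] * v[1])
    diff2 = (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2
    return math.acosh(1.0 + 2.0 * diff2 / (du * dv))
```

Projecting a lifted point reproduces the original image point, which is the consistency check a receiver could perform when comparing recovered signature points against recomputed ones.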