Forensics in Telecommunications, Information, and Multimedia. Third International ICST Conference, e-Forensics 2010, Shanghai, China, November 11-12, 2010, Revised Selected Papers

Research Article

A Novel Forensics Analysis Method for Evidence Extraction from Unallocated Space

  • @INPROCEEDINGS{10.1007/978-3-642-23602-0_5,
        author={Zhenxing Lei and Theodora Dule and Xiaodong Lin},
        title={A Novel Forensics Analysis Method for Evidence Extraction from Unallocated Space},
        proceedings={Forensics in Telecommunications, Information, and Multimedia. Third International ICST Conference, e-Forensics 2010, Shanghai, China, November 11-12, 2010, Revised Selected Papers},
        proceedings_a={E-FORENSICS},
        publisher={Springer},
        year={2012},
        month={10},
        keywords={Computer Forensics, Fingerprint Hash Table, Bloom Filter, Fragmentation, Fragmentation Point},
        doi={10.1007/978-3-642-23602-0_5}
    }
Zhenxing Lei1,*, Theodora Dule1,*, Xiaodong Lin1,*
  • 1: University of Ontario Institute of Technology
*Contact email: Zhenxing.Lei@uoit.ca, Theodora.Dule@uoit.ca, Xiaodong.Lin@uoit.ca

Abstract

Computer forensics has become a vital tool for providing evidence in investigations of computer misuse, attacks against computer systems, and more traditional crimes such as money laundering and fraud in which digital devices are involved. Investigators frequently perform preliminary analysis on these suspect devices at the crime scene to determine whether target files, such as child pornography, are present. Hence, it is crucial to design a portable tool that can perform efficient preliminary analysis. In this paper, we adopt the space-efficient fingerprint hash table data structure for storing massive forensic data from law enforcement databases on a flash drive, and utilize hash trees for fast searches. We then apply group testing to identify the fragmentation points of fragmented files and the starting cluster of the next fragment, based on statistics about the gaps between fragments.
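The abstract does not spell out the group-testing procedure, but its advantage can be illustrated with a minimal sketch: rather than testing every cluster individually against the fingerprint database, an investigator can test contiguous runs of clusters in one operation and binary-search for the point where matching stops. The `range_matches` oracle below is a hypothetical stand-in for such a grouped fingerprint lookup, not the paper's actual implementation.

```python
def find_fragmentation_point(clusters, range_matches):
    """Locate the first cluster that no longer belongs to the target file.

    `clusters` is the run of clusters read sequentially from unallocated
    space; `range_matches(lo, hi)` is a hypothetical oracle returning True
    iff clusters[lo:hi] all match the file's fingerprints (one group test).
    Returns the index of the fragmentation point, or len(clusters) if the
    entire run matches (i.e., no fragmentation within this run).
    """
    lo, hi = 0, len(clusters)
    if range_matches(lo, hi):      # a single test may clear the whole run
        return hi
    # Invariant: the prefix of length `lo` matches, the prefix of length
    # `hi` does not. Narrow down with O(log n) group tests.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if range_matches(0, mid):
            lo = mid
        else:
            hi = mid
    # `hi` is the shortest failing prefix, so cluster hi-1 is the first
    # one that does not belong to the file: the fragmentation point.
    return hi - 1
```

With a run of n clusters this needs about log2(n) group tests instead of n per-cluster lookups, which is the efficiency argument behind testing clusters in groups.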