EAI Endorsed Transactions on Scalable Information Systems 20(25): e6

Research Article

Optimised Transformation Algorithm For Hadoop Data Loading in Web ETL Framework

  • @ARTICLE{10.4108/eai.13-7-2018.160600,
        author={Gaurav Gupta and Neelesh Kumar and Indu Chhabra},
        title={Optimised Transformation Algorithm For Hadoop Data Loading in Web ETL Framework},
        journal={EAI Endorsed Transactions on Scalable Information Systems},
        year={2019},
        doi={10.4108/eai.13-7-2018.160600},
        keywords={Redundant Data, Data Transformation, Data Loading, Levenshtein Distance Matching, Hadoop}
    }
    Year: 2019
    DOI: 10.4108/eai.13-7-2018.160600
Gaurav Gupta1,*, Neelesh Kumar2, Indu Chhabra3
  • 1: Research Planning & Project Management, CSIR-Indian Institute of Petroleum, Dehradun, India
  • 2: BioMedical Instrumentation, CSIR-Central Scientific Instruments Organisation, Chandigarh, India
  • 3: Department of Computer Science & Applications, Panjab University, Chandigarh, India
*Contact email:


Unlike a conventional ETL framework, Web ETL requires considerable improvement in all three layers, i.e. extraction, transformation and loading, due to the inherent nature of web input data. Websites are a vast and unique source of information, and finding and analysing the required, relevant data within this information is critical, since the data may be foul, containing redundant or misspelled records. Identifying the records that represent the same real-world entity in different ways is a major problem for any database. Hence, the data-transformation functionality of the Web ETL transformation layer becomes mandatory for determining the pertinent information to be examined. Since data on the web is very voluminous, loading only clean data into the data warehouse is necessary for fast processing and accurate results. The present research focuses on data transformation in a Web ETL framework and proposes a modified technique that employs token-wise sentence sorting, together with Levenshtein distance for string matching, to remove redundant records from a patent database. The cleaned data is then transformed and loaded from this staging area into a Hadoop environment. Integrating the proposed transformation technique with Hadoop removes the constraints on processing, storage and retrieval of large data structures that affect conventional data warehouse systems.
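The combination of token-wise sorting and Levenshtein matching described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the edit-distance threshold, and the in-memory deduplication loop are illustrative assumptions; the paper applies the idea to a patent database within a staging area before loading into Hadoop.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def token_sort_key(record: str) -> str:
    """Token-wise sorting: records that reorder the same words
    collapse to one canonical key."""
    return " ".join(sorted(record.lower().split()))

def dedupe(records, max_distance=2):
    """Keep a record only if its sorted-token form is farther than
    max_distance (Levenshtein) from every record already kept."""
    kept = []
    for rec in records:
        key = token_sort_key(rec)
        if not any(levenshtein(key, token_sort_key(k)) <= max_distance
                   for k in kept):
            kept.append(rec)
    return kept
```

Sorting tokens first means that reordered duplicates match at distance 0, while the small Levenshtein threshold additionally catches misspelled variants such as `"coating methud"` versus `"coating method"`.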