
Research Article
The Crawl and Analysis of Recruitment Data Based on the Distributed Crawler
@INPROCEEDINGS{10.1007/978-3-030-62483-5_18,
  author={Jiancai Wang and Jianting Shi},
  title={The Crawl and Analysis of Recruitment Data Based on the Distributed Crawler},
  booktitle={Green Energy and Networking. 7th EAI International Conference, GreeNets 2020, Harbin, China, June 27-28, 2020, Proceedings},
  series={GREENETS},
  publisher={Springer},
  year={2020},
  month={11},
  keywords={Distributed crawler; Scrapy framework; Data processing},
  doi={10.1007/978-3-030-62483-5_18}
}
- Jiancai Wang
- Jianting Shi
Year: 2020
The Crawl and Analysis of Recruitment Data Based on the Distributed Crawler
GREENETS
Springer
DOI: 10.1007/978-3-030-62483-5_18
Abstract
With the rapid development of the Internet, obtaining useful data efficiently and quickly has become an important problem. In this paper, a distributed crawler system is designed and implemented to capture recruitment data from online recruitment websites. The design combines the architecture and operating workflow of the Scrapy crawler framework with Python, the composition and functions of Scrapy-Redis, and the concept of data visualization. Echarts is applied to the crawled results to describe the characteristics of the web pages on which employers publish recruitment information. On the basis of the Scrapy framework, middleware, proxy IPs, and dynamic User-Agents are used to prevent the crawler from being blocked by websites. Data cleaning and encoding conversion are performed during data processing.
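As a minimal illustration of the anti-blocking measures the abstract mentions (middleware with proxy IPs and dynamic User-Agents), a Scrapy-style downloader middleware can assign a random User-Agent header and proxy to each outgoing request. The sketch below is self-contained so it runs without Scrapy installed: `FakeRequest` is a stand-in for `scrapy.Request`, and the UA strings and proxy addresses are placeholders, not values from the paper.

```python
import random

# Placeholder pools; in practice these would come from a UA list and a proxy service.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15",
]
PROXIES = ["http://127.0.0.1:8001", "http://127.0.0.1:8002"]


class RandomUAProxyMiddleware:
    """Downloader-middleware-style hook: pick a fresh UA and proxy per request."""

    def process_request(self, request, spider=None):
        request.headers["User-Agent"] = random.choice(USER_AGENTS)
        request.meta["proxy"] = random.choice(PROXIES)
        return None  # returning None tells Scrapy to continue handling the request


class FakeRequest:
    """Minimal stand-in for scrapy.Request so the sketch runs standalone."""

    def __init__(self, url):
        self.url = url
        self.headers = {}
        self.meta = {}


mw = RandomUAProxyMiddleware()
req = FakeRequest("https://example-jobs-site.test/list")
mw.process_request(req)
print(req.headers["User-Agent"] in USER_AGENTS)  # True
print(req.meta["proxy"] in PROXIES)              # True
```

In a real project the class would be registered under `DOWNLOADER_MIDDLEWARES` in `settings.py`, and Scrapy would call `process_request` for every request the spider issues.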