crawdad

Cross-platform persistent and distributed web crawler :crab:

Crawl responsibly.

For a tutorial on how to use crawdad, see my blog post.

Features

  • Written in Go
  • Cross-platform releases
  • Persistent (an interrupted crawl can be resumed from where it left off)
  • Distributed (multiple crawdads can be run on different machines)
  • Scraping using pluck
  • Uses connection pools for lower latency
  • Uses threads for maximum parallelism

Install

First get Docker CE. This will make installing Redis a snap.

Then, if you have Go installed, just do

$ go get github.com/schollz/crawdad

Otherwise, use the releases and download crawdad.
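
Either way, you can verify the binary is on your PATH by printing its version (using the -version flag listed under Advanced usage):

$ crawdad -version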

Run

First run Redis:

$ docker run -d -v `pwd`:/data -p 6379:6379 redis 

which will store the database in the current directory.
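
If you want to check that Redis came up before starting a crawl, the official redis image includes redis-cli, so pinging the running container should return PONG (the container ID below is a placeholder):

$ docker exec <container-id> redis-cli ping
PONG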

Crawling

By "crawling* the crawdad will follow every link that corresponds to the base URL. This is useful for generating sitemaps.

Startup crawdad with the base URL:

$ crawdad -set -url https://rpiai.com

This command will set the base URL to crawl as https://rpiai.com. You can run crawdad on a different machine without setting these parameters again. E.g., on computer 2 you can run:

$ crawdad -server X.X.X.X

where X.X.X.X is the IP address of computer 1 (the machine running Redis). This crawdad will now run with whatever parameters were set by the first one. If you need to re-set parameters, just use -set to specify them again.

Each machine running crawdad will help to crawl the respective website and add collected links to a universal queue in the server. The current state of the crawler is saved. If the crawler is interrupted, you can simply run the command again and it will restart from the last state.
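
To make the distributed setup concrete, a minimal two-machine sketch might look like this (192.168.1.10 is a placeholder for computer 1's address, the machine running Redis):

Computer 1:

$ docker run -d -v `pwd`:/data -p 6379:6379 redis
$ crawdad -set -url https://rpiai.com

Computer 2:

$ crawdad -server 192.168.1.10

Both crawdads now share the same queue in Redis, so links found by either machine go into the same to-do list.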

When done you can dump all the links:

$ crawdad -dump dump.txt

which will connect to Redis and dump all the links (to-do, doing, done, and trashed) to the file.
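
Assuming the dump is plain text with one link per line, a quick way to see how much was collected is:

$ wc -l dump.txt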

Pinching

By "pinching" the crawdad will follow the specified links and extract data from each URL that can be dumped later.

You will need to make a pluck TOML configuration file. For instance, I would like to scrape from my site, rpiai.com, the meta description and the title. My configuration, pluck.toml, looks like:

[[pluck]]
name = "description"
activators = ["meta","name","description",'content="']
deactivator = '"'
limit = 1

[[pluck]]
name = "title"
activators = ["<title>"]
deactivator = "</title>"
limit = 1
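
As I understand pluck's activator/deactivator model, capturing for a unit starts once every activator has been seen in order and stops at the deactivator. So for a page containing something like the following (a made-up snippet, not taken from the site):

<meta name="description" content="A blog about Raspberry Pi and AI.">

the description plucker begins capturing right after content=" and stops at the next ", while the title plucker captures whatever sits between <title> and </title>.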

Now I can crawl the site the same way as before, but load in this pluck configuration with --pluck so it captures the content:

$ crawdad -set -url "https://rpiai.com" -pluck pluck.toml

To retrieve the data, you can use the -done flag to collect a JSON map of all the plucked data:

$ crawdad -done data.json

This JSON file will contain each URL as a key and a JSON string of the plucked data that contains keys for the description and the title.

$ cat data.json | grep why
"https://rpiai.com/why-i-made-a-book-recommendation-service/index.html": "{\"description\":\"Why I made a book recommendation service from scratch: basically I found that all other book suggestions lacked so I made something that actually worked.\",\"title\":\"What book is similar to Weaveworld by Clive Barker?\"}"

Advanced usage

There are lots of other options:

   --server value, -s value       address for Redis server (default: "localhost")
   --port value, -p value         port for Redis server (default: "6379")
   --url value, -u value          set base URL to crawl
   --exclude value, -e value      set comma-delimited phrases that must NOT be in URL
   --include value, -i value      set comma-delimited phrases that must be in URL
   --seed file                    file with URLs to add to queue
   --pluck value                  set config file for a plucker (see github.com/schollz/pluck)
   --stats X                      Print stats every X seconds (default: 1)
   --connections value, -c value  number of connections to use (default: 25)
   --workers value, -w value      number of workers to use (default: 8)
   --verbose                      turn on logging
   --proxy                        use tor proxy
   --set                          set options across crawdads
   --dump file                    dump all the keys to file
   --done file                    dump the map of the done things file
   --useragent useragent          set the specified useragent
   --redo                         move items from 'doing' to 'todo'
   --query                        allow query parameters in URL
   --hash                         allow hashes in URL
   --no-follow                    do not follow links (useful with -seed)
   --errors value                 maximum number of errors before exiting (default: 10)
   --help, -h                     show help
   --version, -v                  print the version
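
For example, to pluck only a fixed list of pages without crawling outward from them, you could combine -seed with -no-follow (urls.txt here is a hypothetical file with one URL per line; this is a sketch, not a tested invocation):

$ crawdad -set -url https://rpiai.com -seed urls.txt -no-follow -pluck pluck.toml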

Dev

To run tests

$ docker run -d -v `pwd`:/data -p 6377:6379 redis
$ cd src && go test -v -cover

License

MIT
