

=====
scaws
=====

This project contains some components and extensions for using Scrapy on Amazon
AWS.

Requirements
============

* Scrapy 0.13 or above
* boto 1.8 or above

Install
=======

Download the source and run: ``python setup.py install``

Available components
====================

SimpleDB stats collector
------------------------

Module: ``scaws.statscol``

.. class:: SimpledbStatsCollector

A stats collector which persists stats to `Amazon SimpleDB`_, using one
SimpleDB item per scraping run (i.e. it keeps a history of all scraping runs).
The data is persisted to the SimpleDB domain specified by the
`STATS_SDB_DOMAIN`_ setting. The domain will be created if it
doesn't exist.

In addition to the existing stats keys, the following keys are added at
persistence time:

    * ``spider``: the spider name (so you can use it later for querying stats
      for that spider)
    * ``timestamp``: the timestamp when the stats were persisted

Both ``spider`` and ``timestamp`` are used to generate the SimpleDB
item name, so that stats from previous scraping runs are not overwritten.
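The exact naming scheme is internal to scaws, but as a rough illustration (the function name and format below are hypothetical, not the library's actual code), a per-run item name could be built like this:

```python
from datetime import datetime, timezone

def sdb_item_name(spider_name):
    # Hypothetical sketch: join the spider name with an ISO 8601
    # timestamp so each scraping run maps to a distinct SimpleDB item.
    ts = datetime.now(timezone.utc).isoformat()
    return "%s-%s" % (spider_name, ts)
```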

As `required by SimpleDB`_, datetimes are stored in ISO 8601 format and
numbers are zero-padded to 16 digits. Negative numbers are not currently
supported.
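A sketch of what this encoding implies (the helper name and exact logic are assumptions for illustration, not scaws internals):

```python
from datetime import datetime

def to_sdb_value(value):
    # Sketch of SimpleDB-friendly encoding per the rules above:
    # datetimes as ISO 8601 strings; non-negative integers zero-padded
    # to 16 digits so lexicographic order matches numeric order.
    if isinstance(value, datetime):
        return value.isoformat()
    if isinstance(value, int):
        if value < 0:
            raise ValueError("negative numbers are not supported")
        return "%016d" % value
    return str(value)
```

Zero-padding matters because SimpleDB compares attribute values as strings; padded numbers sort correctly in range queries.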

This Stats Collector requires the `boto`_ library.

.. _Amazon SimpleDB: http://aws.amazon.com/simpledb/
.. _required by SimpleDB: http://docs.amazonwebservices.com/AmazonSimpleDB/2009-04-15/DeveloperGuide/ZeroPadding.html
.. _boto: http://code.google.com/p/boto/

This Stats Collector can be configured through the following settings:

* STATS_SDB_DOMAIN_
* STATS_SDB_ASYNC_

.. _STATS_SDB_DOMAIN:

STATS_SDB_DOMAIN
~~~~~~~~~~~~~~~~

Default: ``'scrapy_stats'``

A string containing the SimpleDB domain to use for collecting the stats.

.. _STATS_SDB_ASYNC:

STATS_SDB_ASYNC
~~~~~~~~~~~~~~~

Default: ``False``

If ``True``, communication with SimpleDB will be performed asynchronously. If
``False``, blocking IO will be used instead. Blocking IO is the default
because asynchronous communication can result in stats not being persisted if
the Scrapy engine is shut down before the write completes (for example, when
you run a single spider in a process and then exit).
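Putting it together, a project could enable the collector from its ``settings.py``. ``STATS_CLASS`` is Scrapy's standard setting for swapping in a stats collector; the domain name here is only an example:

```python
# settings.py (sketch)
STATS_CLASS = 'scaws.statscol.SimpledbStatsCollector'
STATS_SDB_DOMAIN = 'scrapy_stats'   # SimpleDB domain; created if missing
STATS_SDB_ASYNC = False             # blocking IO, the safe default
```

boto also needs AWS credentials, e.g. via Scrapy's ``AWS_ACCESS_KEY_ID`` / ``AWS_SECRET_ACCESS_KEY`` settings or the environment.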
