How-to create a Python scraper

From openZIM
Revision as of 21:39, 5 July 2024 by Josephlewis42 (Added additional resources and developing the scraper How-To section)

This page is a high-level outline of the considerations involved in creating a new scraper that produces ZIM files usable in Kiwix or other compatible readers.

Developing the scraper

  1. Decide what resource you want to create a scraper for.
    1. If the resource is a website, check to see if https://zimit.kiwix.org/ works.
    2. Make sure none of the existing scrapers work for your use case.
  2. Decide how you want to implement the scraper, put together a proposal covering the points below, and submit a request in the zim-requests repository so the community can give you feedback and create a repository for you if needed (Example). Some questions you might want to answer in the request are:
    • Information about the resource you want to scrape.
    • Why create a new scraper versus using one that already exists.
    • A rough sketch of your proposal.
  3. Implement the scraper using the Python bootstrap repository as a basis.

Best practices

A Python scraper should ideally:

- Adhere to openZIM's Contribution Guidelines and implement openZIM's Python bootstrap, conventions and policies

- be hosted on GitHub under the openzim organization (we can create a repository there for you on request)

- use python-scraperlib (zimscraperlib on PyPI) to create the ZIM (it also provides many useful utilities)

- re-encode images and videos so that the final ZIM size is (at least by default) moderate

- cache these re-encoded assets in an S3 bucket (we can provide you with a dev bucket on request) so the scraper avoids wasting time and computing resources re-encoding them at every ZIM update

- be configurable with CLI flags, especially for ZIM metadata (title, description, tags, ...) and filename

- validate all this metadata as early as possible, to avoid spending time fetching online resources and transforming them only to discover at the end that the metadata is invalid and no ZIM can be produced

- rely on the filesystem as little as possible, i.e. prefer adding items to the ZIM on the fly rather than arranging all files on the filesystem and adding them to the ZIM only in a final stage

- consume as few resources as possible (CPU time, disk I/O, disk space, RAM, ...)

- implement proper logging with various log levels (error, warning, info, debug)

- implement a task progress JSON file so that integration in Zimfarm will be smoother
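To illustrate the CLI-flags and early-validation points above, here is a minimal sketch using the standard-library argparse module. The flag names and the validation rules (e.g. the 80-character Description limit from the ZIM metadata conventions) are illustrative assumptions, not the official openZIM conventions:

```python
# Hypothetical CLI for a scraper: ZIM metadata flags validated up front,
# before any network fetching begins. Flag names and limits are
# illustrative, not the official openZIM conventions.
import argparse

# The ZIM metadata conventions cap Description at 80 characters;
# this constant mirrors that limit.
MAX_DESCRIPTION_LEN = 80


def parse_args(raw_args=None):
    parser = argparse.ArgumentParser(prog="myscraper")
    parser.add_argument("--title", required=True, help="ZIM Title metadata")
    parser.add_argument(
        "--description", required=True, help="ZIM Description metadata"
    )
    parser.add_argument("--tags", default="", help="semicolon-separated ZIM Tags")
    parser.add_argument("--zim-file", required=True, help="output ZIM filename")
    args = parser.parse_args(raw_args)
    validate_metadata(args)
    return args


def validate_metadata(args):
    # Fail fast: discovering a bad Description after hours of scraping
    # wastes all the work done so far.
    if not args.title.strip():
        raise ValueError("Title must not be empty")
    if len(args.description) > MAX_DESCRIPTION_LEN:
        raise ValueError(f"Description exceeds {MAX_DESCRIPTION_LEN} characters")
    if not args.zim_file.endswith(".zim"):
        raise ValueError("ZIM filename should end with .zim")
```

Because validation runs inside `parse_args`, an invalid invocation fails immediately at startup rather than at ZIM-writing time.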
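The logging recommendation above can be sketched with the standard-library logging module; the logger name, format, and the debug toggle are illustrative choices:

```python
# A minimal logging setup covering the four levels mentioned above
# (error, warning, info, debug). Real scrapers typically expose a
# --debug CLI flag that switches the level.
import logging


def get_logger(name="myscraper", debug=False):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG if debug else logging.INFO)
    if not logger.handlers:  # avoid stacking duplicate handlers on repeated calls
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s")
        )
        logger.addHandler(handler)
    return logger
```

With this in place, `logger.debug(...)` calls are silent in normal runs but visible when the scraper is started in debug mode.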
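The task-progress file could look like the sketch below. The `{"done": N, "total": M}` schema is an assumption for illustration; check the Zimfarm documentation for the actual contract expected by the platform:

```python
# Sketch of a task-progress JSON file that external tooling (e.g. Zimfarm)
# could poll. The exact schema here ("done"/"total" keys) is an assumption.
import json
from pathlib import Path


def write_progress(path, done, total):
    """Write {"done": N, "total": M} so a watcher can poll the file."""
    payload = {"done": done, "total": total}
    tmp = Path(str(path) + ".tmp")
    tmp.write_text(json.dumps(payload))
    # rename is atomic on POSIX, so readers never observe a half-written file
    tmp.replace(path)
```

Writing to a temporary file and renaming it avoids a watcher reading a truncated JSON document mid-write.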

How to develop a nice UI to run inside the ZIM

Original scrapers use Jinja2 to render HTML files dynamically and add them to the ZIM. We are currently migrating to another approach where the UI running inside the ZIM is a Vue.JS project. We are not yet certain which approach is best. Vue.JS makes it possible to quickly build very dynamic interfaces in a clean way, whereas the Jinja2 approach usually relied on ad-hoc JavaScript based on jQuery and similar libraries. However, Vue.JS probably supports a more limited set of browsers and has a steeper learning curve for contributing to scrapers. The Freecodecamp scraper already uses this Vue.JS approach. The Youtube scraper is currently migrating to it. The Kolibri scraper has begun to migrate, but that work is still stuck in a v2 branch.

Additional resources