Scrapers for Norwegian post journal sources
===========================================
The ScraperWiki classic API code is available from
https://bitbucket.org/ScraperWiki/scraperwiki-classic/src/c7f076950476?at=default
https://github.com/rossjones/ScraperWikiX/blob/master/services/scriptmgr/scripts/exec.py
A standalone library is available from https://github.com/scraperwiki/scraperwiki-python
== Running / testing scrapers ==
To get the scrapers running, one needs to set up the data directory and
a patched copy of the scraperwiki-python project. The script
env-setup is provided to do this. Run it from the top of the checked-out
scraper directory to set up your own copy:
./env-setup
To run a scraper, use the run-scraper command with the scraper name as
the argument, for example:
./run-scraper postliste-oep
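
For reference, a minimal sketch of such a scraper follows. The URL and
page structure are made-up placeholders, and the XPath is hypothetical;
scraperwiki.scrape() comes from the scraperwiki-python library linked
above.

  import scraperwiki
  import lxml.html

  # Hypothetical journal page; a real scraper uses the agency URL.
  url = 'http://www.example.org/postjournal'
  html = scraperwiki.scrape(url)
  root = lxml.html.fromstring(html)
  # The XPath below is a placeholder for whatever structure the
  # agency page actually uses.
  for row in root.xpath('//table[@class="journal"]/tr'):
      cells = [cell.text_content().strip() for cell in row]
      # ... map cells to the common field names listed below and save ...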
== Common field names ==
A list of field names used in most scrapers follows. All dates use ISO
8601 format: "YYYY-MM-DD", "YYYY-MM-DD HH:MM" or "YYYY-MM-DD HH:MM+TZ".
* agency, name of public administration
* recorddate, date the entry was recorded in the journal
* docdate, date on document
* docdesc, title/description of document entry
* doctype, type of document entry (see the NOARK values below)
* caseyear, year part of the case number
* caseseqnr, sequence number part of the case number
* casedocseq, sequence number of the document within the case
* caseid, case identifier (typically caseyear/caseseqnr)
* casedesc, title/description of the case
* recipient, recipient of the document
* sender, sender of the document
* exemption, legal basis for exemption from public disclosure, if any
* journalyear, year part of the journal entry number
* journalseqnr, sequence number of the entry within the journal year
* journalid, journal entry identifier
* scraper, name of script used to collect data
* scrapedurl, URL used to collect data
* scrapestamputc, when the information was fetched from the URL
The doctype values are based on the NOARK document types:
* U - Outgoing document
* I - Incoming document
* X - Internal document, no follow-up required
* N - Internal document, follow-up required
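
Put together, a journal entry using these field names could be stored
as sketched below. The values are made-up examples and the choice of
unique keys is a guess; scraperwiki.sqlite.save() is the save call
from the scraperwiki-python library.

  import datetime
  import scraperwiki

  # Example record with made-up values, using the common field names.
  data = {
      'agency': 'Example kommune',
      'recorddate': '2012-05-03',
      'docdate': '2012-05-02',
      'docdesc': 'Request for access to documents',
      'doctype': 'I',  # incoming document (NOARK)
      'caseyear': 2012,
      'caseseqnr': 123,
      'casedocseq': 1,
      'caseid': '2012/123',
      'casedesc': 'Access to public records',
      'sender': 'Jane Doe',
      'exemption': '',
      'scraper': 'postliste-example',
      'scrapedurl': 'http://www.example.org/postjournal',
      'scrapestamputc': datetime.datetime.utcnow().strftime('%Y-%m-%d %H:%M'),
  }
  # Each scraper knows which combination of fields identifies an entry
  # in its source; the keys used here are only an illustration.
  scraperwiki.sqlite.save(unique_keys=['caseid', 'casedocseq'], data=data)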