-rw-r--r--  README.rst                                | 85
-rw-r--r--  build/test/nms-collector-test.Dockerfile  |  6
-rw-r--r--  build/test/nms-db-test.Dockerfile         |  4
-rw-r--r--  build/test/nms-front-test.Dockerfile      |  9
-rw-r--r--  build/test/nms-varnish-test.Dockerfile    |  6
-rw-r--r--  build/test/playbook-test.yml              | 38
6 files changed, 102 insertions, 46 deletions
diff --git a/README.rst b/README.rst
index 7d2f817..a050b63 100644
--- a/README.rst
+++ b/README.rst
@@ -3,30 +3,49 @@ The Gathering Network Management/Monitoring System (tgnms)
This is the system used to monitor the network during The Gathering (a
computer party with between 5000 and 10000 active clients - see
-http://gathering.org).
+http://gathering.org). It is now provided as a stand-alone application,
+with the goal of being usable by any number of computer parties and
+events of a similar nature.
Unlike other NMS's, it is not designed to run perpetually, but for a
limited time and needs to be effective with minimal infrastructure in place
as it is used during initial installation of the network.
You should be able to install this on your own for other similar events of
-various scales. The requirements you should expect are:
-
-- During The Gathering 2016, we collected about 30GB of data in postgresql.
-- Using a good reverse proxy is crucial to the frontend. We saw between 200
- and 500 requests per second during normal operation of The Gathering
- 2016. Varnish operated with a cache hit rate of 99.99%
-- The number of database updates to expect will vary depending on dhcp
- requests (so depending on dhcp lease times and clients) and amount of
- equipment. Each switch and linknet with an IP is pinged slightly more
- than once per second. SNMP is polled once a minute by default. You do the
- math.
-- We saw about 300 million rows of data at the end of the event in 2016.
- Database indexes are crucial, but handled by default schema. There should
- be no significant performance impact as the data set grows.
-- SNMP polling can be slightly CPU intensive, but we still had no issues
- polling about 184 switches.
-- Ping and dhcptailer performance is insignificant.
+various scales. The system requirements are minimal, but some advice:
+
+- You can run it on a single VM or split it based on roles. Either works.
+- The database is used extensively, but careful attention has been paid to
+ scaling it sensibly.
+- Do not (unless you like high CPU loads) ignore the caching layer
+ (Varnish). We use it extensively and are able to invalidate cache
+ properly if needed so it is not a hindrance.
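Cache invalidation, as mentioned in the last point, is commonly done by sending an HTTP PURGE request to Varnish. A minimal sketch of such a client request (the host name is a placeholder, and the VCL must explicitly define a purge handler; this is illustrative, not taken from the tgnms configuration):

```python
import urllib.request

# Hypothetical cache invalidation against the Varnish layer: an HTTP
# PURGE for one API path. "varnish-host" is a placeholder, and Varnish
# only honours PURGE if the VCL explicitly allows it.
req = urllib.request.Request(
    "http://varnish-host/api/public/switches",
    method="PURGE",
)

# urllib.request.urlopen(req) would send it; here we only show the verb.
print(req.get_method())  # PURGE
```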
+
+Some facts from The Gathering 2016:
+
+- Non-profit.
+- 5000+ participants, 400 volunteers/crew, plus numerous visitors.
+- Typically 6000-8000 active clients (DHCP ACKs to unique MACs).
+- Total of 10500+ unique network devices seen (unique MAC addresses).
+- Lasts from Wednesday until Sunday. We switch to DST in the middle of it.
+- Active network devices at 2016-03-22T12:00:00: 206
+- Active network devices at 2016-03-23T08:00:00: 346
+- Active network devices at 2016-03-23T12:00:00: 2283
+- Active network devices at 2016-03-23T20:00:00: 6467
+- Ping 180+ switches and routers more than once per second. Store all
+ replies (and lack thereof).
+- Collect SNMP data once a minute from all network infrastructure (180+
+  devices plus a dozen or so monitored linknets).
+- Collected about 30GB of data in postgresql.
+- About 300 million rows.
+- The NMS saw between 200 and 500 requests per second during normal
+ operation.
+- 99.99% cache hit rate.
+- Maybe 300 rows inserted per second. Most of these are bulk COPY inserts
+  of ping replies, which perform well.
+- Biggest CPU hog was the SNMP polling, but not an issue.
+- Numerous features developed during the event with no database changes,
+ mainly in the frontend, but also tweaking the API.
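The data-volume figures above can be roughly sanity-checked with back-of-the-envelope arithmetic (the 1.2 pings per second is an assumption based on "more than once per second"; the actual total also includes SNMP, DHCP and other data):

```python
# Back-of-the-envelope check of the ping row volume quoted above.
# Assumed inputs (approximate, from the facts list): 180+ devices
# pinged slightly more than once per second over a five-day event.
devices = 184          # switches/routers being pinged
pings_per_sec = 1.2    # "more than once per second" (assumed rate)
event_seconds = 5 * 24 * 3600

ping_rows = devices * pings_per_sec * event_seconds
print(f"~{ping_rows / 1e6:.0f} million ping rows")  # ~95 million ping rows
```

That accounts for a large share of the ~300 million rows on its own, which is why the ping replies being cheap COPY inserts matters.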
Name
----
@@ -60,20 +79,22 @@ localhost by default.
To use it, first set up ssh to localhost (or change host in inventory) and
install docker, then run::
- $ cd build/
- $ ansible-playbook -i test/inventory test/playbook-test.yml
+ $ ansible-playbook -i build/test/inventory build/test/playbook-test.yml
-This will build the relevant docker images and start them. It assumes a
-check out on the target machine (e.g.: localhost) on ``~/src/tgnms``. It
-does not use sudo or make any attempt to configure the local host beyond
-docker building.
+This will build the relevant docker images, start them, and run a few
+simple tests to see that the front works. It assumes a checkout on the
+target machine (e.g.: localhost) on ``~/src/tgnms``. It does not use sudo
+or make any attempt to configure the local host beyond docker building.
-PS: This is currently NOT complete, but will eventually run actual test
-cases and possibly provide a development environment. It is very likely to
-"move" to the top level, mainly to avoid having to check out the git repo,
-which creates cache issues with docker.
+It will set up 4 containers right now:
-It currently DOES work to actual set up a working NMS, save collectors.
+- Database
+- Frontend (apache)
+- Varnish
+- Collector with ping
+
+The IP of the Varnish instance is reported and can be used. The credentials
+used are 'demo/demo'.
Architecture
------------
@@ -111,8 +132,8 @@ Current state
As of this writing, tgnms is being split out of the original 'tgmanage'
repository. This means sweeping changes and breakage. The actual code has
-been used in "production" during The Gathering 2016, but is probably broken
-right now for simple organizational reasons.
+been used in "production" during The Gathering 2016, but is not very
+installable right now for practical reasons.
Check back in a week or eight.
@@ -151,5 +172,3 @@ and in detailed form in the private API.
The NMS itself does not implement any actual security mechanisms for the
API. That is left up to the web server. An example Apache configuration
file is provided.
-
-
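The test setup described in the README protects the read API with HTTP basic auth (demo/demo). As a sketch of what a client needs to send (the Varnish host is a placeholder; the endpoint path is one of those exercised by the test playbook):

```python
import base64
import urllib.request

# Build a request against the read API with HTTP basic auth.
# "varnish-host" stands in for the Varnish IP reported by the playbook.
url = "http://varnish-host/api/read/switches-management"
credentials = base64.b64encode(b"demo:demo").decode("ascii")

req = urllib.request.Request(url)
req.add_header("Authorization", f"Basic {credentials}")

# urllib.request.urlopen(req) would perform the actual request; here we
# just show the header Apache's htpasswd check expects.
print(req.get_header("Authorization"))  # Basic ZGVtbzpkZW1v
```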
diff --git a/build/test/nms-collector-test.Dockerfile b/build/test/nms-collector-test.Dockerfile
index 04f7a59..be6cb23 100644
--- a/build/test/nms-collector-test.Dockerfile
+++ b/build/test/nms-collector-test.Dockerfile
@@ -1,5 +1,5 @@
FROM debian:jessie
-RUN apt-get update && apt-get install -y git-core
+RUN apt-get update
RUN apt-get -y install \
libdata-dumper-simple-perl \
libdbd-pg-perl \
@@ -13,5 +13,7 @@ RUN apt-get -y install \
libjson-perl \
perl-base \
perl-modules
-RUN git clone https://github.com/tech-server/tgnms /opt/nms
+RUN mkdir -p /opt/nms
+ADD collectors /opt/nms/collectors
+ADD include /opt/nms/include
CMD /opt/nms/collectors/ping.pl
diff --git a/build/test/nms-db-test.Dockerfile b/build/test/nms-db-test.Dockerfile
index a182040..2e0e0ce 100644
--- a/build/test/nms-db-test.Dockerfile
+++ b/build/test/nms-db-test.Dockerfile
@@ -1,9 +1,9 @@
FROM debian:jessie
RUN apt-get update && apt-get install -y postgresql-9.4
-ADD test/pg_hba.tail /pg_hba.tail
+ADD build/test/pg_hba.tail /pg_hba.tail
RUN cat /pg_hba.tail >> /etc/postgresql/9.4/main/pg_hba.conf
RUN service postgresql start && su postgres -c "psql --command=\"CREATE ROLE nms PASSWORD 'risbrod' NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN;\"" && su postgres -c "createdb -O nms nms" && service postgresql stop
-ADD schema.sql /schema.sql
+ADD build/schema.sql /schema.sql
RUN service postgresql start && su postgres -c "cat /schema.sql | psql nms" && service postgresql stop
RUN echo "listen_addresses = '*'" >> /etc/postgresql/9.4/main/postgresql.conf
CMD pg_ctlcluster --foreground 9.4 main start
diff --git a/build/test/nms-front-test.Dockerfile b/build/test/nms-front-test.Dockerfile
index 363ae97..256083a 100644
--- a/build/test/nms-front-test.Dockerfile
+++ b/build/test/nms-front-test.Dockerfile
@@ -1,5 +1,5 @@
FROM debian:jessie
-RUN apt-get update && apt-get install -y git-core
+RUN apt-get update
RUN apt-get -y install \
libcapture-tiny-perl \
libcommon-sense-perl \
@@ -29,7 +29,10 @@ RUN apt-get -y install \
libfreezethaw-perl \
apache2
-RUN git clone https://github.com/tech-server/tgnms /opt/nms
+RUN mkdir -p /opt/nms
+ADD web /opt/nms/web
+ADD include /opt/nms/include
+ADD extras /opt/nms/extras
RUN a2dissite 000-default
RUN a2enmod cgi
@@ -37,7 +40,7 @@ RUN cp /opt/nms/extras/misc/apache2.conf /etc/apache2/sites-enabled/nms.conf
RUN mkdir -p /opt/nms/etc
RUN echo 'demo:$apr1$IKrQYF6x$0zmRciLR7Clc2tEEosyHV.' > /opt/nms/etc/htpasswd-read
RUN echo 'demo:$apr1$IKrQYF6x$0zmRciLR7Clc2tEEosyHV.' > /opt/nms/etc/htpasswd-write
-ADD test/dummy-apache2.start /
+ADD build/test/dummy-apache2.start /
RUN chmod 0755 /dummy-apache2.start
CMD /dummy-apache2.start
EXPOSE 80
diff --git a/build/test/nms-varnish-test.Dockerfile b/build/test/nms-varnish-test.Dockerfile
index 45fea79..7e75b86 100644
--- a/build/test/nms-varnish-test.Dockerfile
+++ b/build/test/nms-varnish-test.Dockerfile
@@ -1,10 +1,8 @@
FROM debian:jessie
-RUN apt-get update && apt-get install -y git-core
+RUN apt-get update
RUN apt-get -y install varnish
-RUN git clone https://github.com/tech-server/tgnms /opt/nms
-
RUN rm /etc/varnish/default.vcl
-RUN cp /opt/nms/extras/misc/varnish.vcl /etc/varnish/default.vcl
+ADD extras/misc/varnish.vcl /etc/varnish/default.vcl
CMD varnishd -a :80 -f /etc/varnish/default.vcl -F
EXPOSE 80
diff --git a/build/test/playbook-test.yml b/build/test/playbook-test.yml
index b0e2c10..82df8ce 100644
--- a/build/test/playbook-test.yml
+++ b/build/test/playbook-test.yml
@@ -11,6 +11,17 @@
links: [ "nms-front-test:nms-front" ]
- name: "nms-collector-test"
links: [ "nms-db-test:db" ]
+ - simple_urls:
+ - "/api/public/switches"
+ - "/api/public/switch-state"
+ - "/api/public/ping"
+ - "/api/public/location"
+ - "/api/public/dhcp"
+ - "/api/public/dhcp-summary"
+ - read_urls:
+ - "/api/read/comments"
+ - "/api/read/snmp"
+ - "/api/read/switches-management"
tasks:
- name: make all
@@ -18,8 +29,8 @@
state: build
name: "{{ item.name }}"
docker_api_version: 1.18
- dockerfile: test/{{ item.name }}.Dockerfile
- path: "src/tgnms/build/"
+ dockerfile: build/test/{{ item.name }}.Dockerfile
+ path: "src/tgnms/"
with_items: "{{ images }}"
- name: stop all
@@ -28,6 +39,7 @@
state: stopped
image: "{{ item.name }}"
docker_api_version: 1.18
+ stop_timeout: 2
with_items: "{{ images }}"
- name: start all
@@ -40,4 +52,26 @@
links: "{{ item.links }}"
with_items: "{{ images }}"
+ - name: workaround to get nms-varnish-front-ip
+ shell: "docker inspect nms-varnish-test | grep IPAddress | sed 's/[^0-9.]//g'"
+ register: ip
+ - name: Display IP
+ debug:
+ msg: "Front is available at http://{{ ip.stdout }}/"
+
+ - name: test index
+ uri: url="http://{{ ip.stdout }}/"
+
+ - name: test public api without data
+ uri:
+ url: "http://{{ ip.stdout }}{{ item }}"
+ with_items: "{{ simple_urls }}"
+
+ - name: test read api without data
+ uri:
+ url: http://{{ ip.stdout }}{{ item }}
+ user: demo
+ password: demo
+ with_items: "{{ read_urls }}"
+
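The grep/sed workaround in the playbook above matches any line containing IPAddress, which is fragile. Since `docker inspect` emits JSON, parsing it directly is safer; a sketch against a trimmed, made-up sample of the output structure:

```python
import json

# Trimmed, made-up sample of what `docker inspect nms-varnish-test`
# prints: a JSON array with one object per inspected container.
sample = """
[
  {
    "Name": "/nms-varnish-test",
    "NetworkSettings": {
      "IPAddress": "172.17.0.5"
    }
  }
]
"""

containers = json.loads(sample)
ip = containers[0]["NetworkSettings"]["IPAddress"]
print(f"Front is available at http://{ip}/")
```

On docker clients that support it, `docker inspect --format '{{ .NetworkSettings.IPAddress }}' nms-varnish-test` avoids the parsing entirely, though the Go template braces would need escaping inside an Ansible task.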