Distributed polling distributes polling and discovery functions across multiple servers and virtual machines, allowing a polling platform to scale beyond a single system. See Partitioned Polling for a similar system that allows polling to be split around topology and within separated networks.
Each distributed poller requires access to the MySQL database and RRD files on the main Observium system. Access to RRDs can be achieved either by using remote filesystem mounting or by using remote RRDcached. MySQL access is achieved by allowing the distributed poller access to the primary Observium database.
A distributed poller domain is set up by configuring a common poller count on all poller nodes via the poller and discovery option -i, and a unique instance number on each poller node via the -n option. Instance numbers start from zero.
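As a rough illustration of how the -i/-n pair divides work, the sketch below assumes devices are assigned by device ID modulo the poller count; the exact partitioning scheme is internal to Observium and may differ, so treat this only as a mental model:

```shell
# Hypothetical sketch: with -i 3, a node started with -n 2 would handle
# every device whose ID satisfies device_id % 3 == 2.
POLLER_COUNT=3
INSTANCE=2
for device_id in 1 2 3 4 5 6 7 8; do
  if [ $((device_id % POLLER_COUNT)) -eq "$INSTANCE" ]; then
    echo "device $device_id -> node $INSTANCE"
  fi
done
```

Because every node uses the same count and a distinct instance number, each device is picked up by exactly one node.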
This assumes that RRDCached has already been set up on the master system following the RRDCached guide.
Verify this is the case by making sure RRDCached is listening on an external interface
netstat -anp | grep rrdcached
You should see at least a tcp socket listening on 0.0.0.0:42217, and possibly also a tcp6 socket on :::42217
tcp        0      0 0.0.0.0:42217           0.0.0.0:*               LISTEN      926/rrdcached
tcp6       0      0 :::42217                :::*                    LISTEN      926/rrdcached
unix  2      [ ACC ]     STREAM     LISTENING     27976    926/rrdcached       /var/run/rrdcached.sock
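If you want to script this check, a minimal sketch might grep for a TCP listener on the rrdcached port, since a unix socket alone is not reachable from remote pollers. The sample output below is hardcoded for illustration; on a real system you would capture it from netstat:

```shell
# Fail unless rrdcached has a TCP listener on port 42217.
# Replace the sample with: netstat_out=$(netstat -ant 2>/dev/null)
netstat_out='tcp   0  0 0.0.0.0:42217  0.0.0.0:*  LISTEN  926/rrdcached
unix  2 [ ACC ] STREAM LISTENING 27976 926/rrdcached /var/run/rrdcached.sock'
if printf '%s\n' "$netstat_out" | grep -q '^tcp.*:42217 .*LISTEN'; then
  echo "rrdcached reachable over TCP"
else
  echo "rrdcached only listening locally" >&2
fi
```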
On the master system, configure the MySQL server to listen for connections externally by changing the
bind-address entry in the MySQL config
On Ubuntu 18.04 this is in /etc/mysql/mysql.conf.d/mysqld.cnf
You can either comment the entry out to listen on all interfaces
#bind-address = 127.0.0.1
Or change the setting to a specific IP address to listen on only that interface
bind-address = 192.168.1.253
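If you prefer to script the change, a sed one-liner along these lines could work. It is demonstrated here on a temporary copy; the real file on Ubuntu 18.04 is /etc/mysql/mysql.conf.d/mysqld.cnf, and 192.168.1.253 is a placeholder address:

```shell
# Rewrite the bind-address line in a copy of the MySQL config.
cfg=$(mktemp)
printf 'bind-address = 127.0.0.1\n' > "$cfg"
sed -i 's/^bind-address.*/bind-address = 192.168.1.253/' "$cfg"
grep '^bind-address' "$cfg"
rm -f "$cfg"
```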
Restart the MySQL server
service mysql restart
Verify that the MySQL server is now listening on either all interfaces or the IP specified
netstat -anp | grep mysql
Output should show a listening socket on :::3306, 0.0.0.0:3306 or the specific IP address you configured
tcp6       0      0 :::3306                 :::*                    LISTEN      2220/mysqld
unix  2      [ ACC ]     STREAM     LISTENING     40297    2220/mysqld         /var/run/mysqld/mysqld.sock
The easiest way to set up a poller node is to follow the existing install guide or install script and skip the web section at the end. If you use the install script, you can remove the MySQL server after installation.
Create a MySQL user on the master to permit the slave access to the database
Run the following command on the master system to grant the slave access to the MySQL database
mysql -uroot -p"<MYSQL_ROOT_PASSWORD>" -e "GRANT ALL PRIVILEGES ON observium.* TO 'observium'@'<SLAVE_IP>' IDENTIFIED BY '<SLAVE_MYSQL_PASSWORD>'"
If your MySQL server is not accessible from external networks, you can allow all slaves to share the same MySQL username and password by granting access to the wildcard host '%' instead of a specific slave IP.
On each poller node, configure /opt/observium/config.php to point towards the master system's MySQL server and RRDCached port
$config['db_host'] = '<host>';
$config['db_port'] = '<port>';
$config['rrdcached'] = "<host>:<port>";
If this is configured correctly, you'll be able to test poll a device with
./poller.php -h <hostname> -m system -d
If MySQL is working correctly, it will start polling the device; if RRDCached is working correctly, it will successfully write data to the RRDs (look for rrdtool commands in the output).
Disable unused cron jobs
You should remove housekeeping CRON jobs from remote pollers. These processes act on the MySQL and RRD databases, and need only be run from the central system.
# Run housekeeping script daily for syslog, eventlog and alert log
13 5 * * * observium /opt/observium/housekeeping.php -ysel >> /dev/null 2>&1
# Run housekeeping script daily for rrds, ports, orphaned entries in the database and performance data
47 4 * * * observium /opt/observium/housekeeping.php -yrptb >> /dev/null 2>&1
These can be safely removed from any distributed (or partitioned) poller.
If you're setting up Partitioned pollers, you should return to that guide now. The next part is only for Distributed pollers.
We need to modify the cron jobs to tell Observium how many poller nodes there are and which poller node it's running on.
You can add -i <count of poller nodes> -n <number of this poller, starting from zero> to the poller and discovery cron jobs.
33 */6 * * * observium /opt/observium/observium-wrapper discovery >> /dev/null 2>&1
*/5 * * * * observium /opt/observium/observium-wrapper discovery --host new >> /dev/null 2>&1
*/5 * * * * observium /opt/observium/observium-wrapper poller >> /dev/null 2>&1
For example, to configure a node as the third node in a three-node poller domain, the modified entries would look like this
33 */6 * * * observium /opt/observium/observium-wrapper discovery -i 3 -n 2 >> /dev/null 2>&1
*/5 * * * * observium /opt/observium/observium-wrapper discovery --host new -i 3 -n 2 >> /dev/null 2>&1
*/5 * * * * observium /opt/observium/observium-wrapper poller -i 3 -n 2 >> /dev/null 2>&1
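If you manage several nodes, generating the per-node entries can avoid copy-paste mistakes. A minimal sketch (this helper loop is our own, not part of Observium):

```shell
# Print the poller cron entry for each node in a 3-node domain.
COUNT=3
for n in $(seq 0 $((COUNT - 1))); do
  echo "*/5 * * * * observium /opt/observium/observium-wrapper poller -i $COUNT -n $n >> /dev/null 2>&1"
done
```

Run the discovery entries through the same loop, then install each node's lines on that node only.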
If you're adding nodes to an existing setup, be sure to update the -i and -n options on all existing pollers and verify they are correct! You can verify that things are being polled correctly by monitoring the