The Jimternet

James Stewart - Making Stuff

Controlling a TOSR0x USB Relay Module Using Python

As part of a forthcoming project to build a computerised thermostat, I require a means of programmatically controlling a pair of relays to switch mains power.

The brains of my thermostat will be a Raspberry Pi. This credit-card-sized computer is overkill for such a project, but having access to a complete Linux environment will make it relatively simple to do interesting things such as produce graphs, send Twitter updates and expose temperatures via SNMP. It also lets me code my thermostat in any language I choose; in this case, I've chosen Python.

One of the Raspberry Pi's key features is its GPIO interface, allowing it to control all manner of electronics. However for v1 of my thermostat I want to focus on software rather than hardware, so I went looking for a relay controller with a USB interface. What I found was the TOSR0x:
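The TOSR0x presents itself to the host as a USB serial device and is driven by single-byte commands. The following sketch assumes the command scheme commonly documented for this module (0x64 + N switches relay N on, 0x6E + N switches it off, with N = 0 addressing all relays); verify these values against your module's datasheet before use.

```python
# TOSR0x control sketch. Command-byte scheme is an assumption based on
# the module's published documentation: chr(0x64 + n) switches relay n
# ON and chr(0x6E + n) switches it OFF, where n = 0 means "all relays".

ON_BASE = 0x64   # 'd' -- all relays on when the offset is 0
OFF_BASE = 0x6E  # 'n' -- all relays off when the offset is 0

def relay_command(relay, state):
    """Return the command byte to switch `relay` (0 = all, 1-8 =
    individual) to `state` (True = on, False = off)."""
    if not 0 <= relay <= 8:
        raise ValueError("relay must be 0 (all) or 1-8")
    base = ON_BASE if state else OFF_BASE
    return bytes([base + relay])

# Typical usage over a serial link (pyserial assumed; the device node
# and 9600 baud rate are assumptions and will vary by system):
#
#   import serial
#   with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
#       port.write(relay_command(1, True))   # relay 1 on
#       port.write(relay_command(1, False))  # relay 1 off
```

Keeping the command construction separate from the serial I/O makes the byte-level protocol easy to unit-test without the hardware attached.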



Understanding Your Zenperfsnmp Event Queue

Zenoss' zenperfsnmp daemon generates a lot of events. In most cases it is the leading source of events by a significant margin.

Depending on the monitoring templates in place and the number of devices being monitored by Zenoss, zenperfsnmp may be raising thousands of events during each cycle. Before being processed by the event engine these events are held in a queue, the length of which is determined by the config parameter maxqueuelen.

If the queue of events exceeds maxqueuelen, new events are dropped indiscriminately. This is obviously undesirable, even if it happens only occasionally. But when your zenperfsnmp event queue looks like this...

[Graph: Zenperfsnmpd Events]

...you're likely to be consistently dropping events.
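One mitigation is to raise maxqueuelen for the daemon so the queue can absorb each cycle's burst of events. As a hedged sketch (the file location and the value shown here are assumptions; check the defaults for your Zenoss version):

```
# $ZENHOME/etc/zenperfsnmp.conf  (path is an assumption)
maxqueuelen 50000
```

The daemon needs to be restarted for the new queue length to take effect. Raising the limit trades memory for headroom; it doesn't address why so many events are being raised in the first place.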



Installing Flume 0.9.4 Example Plugins

As part of a project for my day job, I've been getting to grips with Flume. Chances are that if you've found this post, you're already aware of what Flume does, but for the uninitiated:

Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. Its main goal is to deliver data from applications to Hadoop’s HDFS. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. The system is centrally managed and allows for intelligent dynamic management. It uses a simple extensible data model that allows for online analytic applications.

The work that I'm doing requires me to manipulate events as they traverse a data flow. To do this I will extend Flume using its plugin functionality and a custom Decorator:

Sink decorators can add properties to the sink and can modify the data streams that pass through them. For example, you can use them to increase reliability via write ahead logging, increase network throughput via batching/compression, sampling, benchmarking, and even lightweight analytics.
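In Flume 0.9.4, custom classes are made visible to the node via the flume.plugin.classes property, and a decorator is then wired into a flow using Flume's dataflow language. A sketch, in which the class and decorator names are hypothetical placeholders:

```
<!-- flume-site.xml: register the plugin (class name is hypothetical) -->
<property>
  <name>flume.plugin.classes</name>
  <value>com.example.flume.MyDecorator</value>
</property>
```

With the plugin registered, a flow along the lines of `node1 : tail("/var/log/app.log") | { myDecorator() => collectorSink("hdfs://namenode/flume/","app-") };` would pass each event through the decorator on its way to the sink (source, sink and paths here are illustrative, not from the original post).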


