DarSens: Distributed Activity Recognition from Acceleration Data

As the number of sensors in our environment grows, it becomes feasible to reuse existing sensors. Traditional multi-sensor fusion strategies employed in today's sensor systems are not suited to this, as they rely on a static configuration that cannot be reused. To build a sensor system that overcomes this limitation, it is essential that a sensor node can convey its own capabilities and inquire about the capabilities of other sensor nodes. Sensor nodes can then cooperate to achieve a common sensing goal by combining their capabilities, e.g. to recognize locomotion activities. A minimal sketch of such a capability exchange is given below.
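To make the capability exchange concrete, the following sketch shows a node advertising its capabilities and a peer matching them against a sensing goal. The message layout, field names, and matching rule are illustrative assumptions, not the actual DarSens wire protocol.

```c
/* Hypothetical sketch of a capability advertisement and goal matching;
 * the struct layout and matching rule are assumptions for illustration. */
#include <stdio.h>
#include <stdint.h>

typedef enum { SENSOR_ACCEL, SENSOR_GYRO, SENSOR_LIGHT } sensor_type_t;

typedef struct {
    uint8_t       node_id;       /* unique id of the advertising node */
    sensor_type_t type;          /* modality the node can provide     */
    uint16_t      rate_hz;       /* maximum sampling rate             */
    char          placement[16]; /* body location, e.g. "wrist"       */
} capability_t;

typedef struct {
    sensor_type_t type;          /* required modality                 */
    uint16_t      min_rate_hz;   /* minimum acceptable sampling rate  */
} sensing_goal_t;

/* A node joins the ensemble if its capability satisfies the goal. */
static int matches(const capability_t *c, const sensing_goal_t *g)
{
    return c->type == g->type && c->rate_hz >= g->min_rate_hz;
}

int main(void)
{
    capability_t wrist = { 1, SENSOR_ACCEL, 100, "wrist" };
    sensing_goal_t goal = { SENSOR_ACCEL, 50 };

    if (matches(&wrist, &goal))
        printf("node %u (%s) joins the sensing goal\n",
               wrist.node_id, wrist.placement);
    return 0;
}
```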

DarSens Concept

This work describes a sensing-goal-initiated multi-sensor activity recognition approach and its implementation on an embedded system platform. It further describes how mechanisms of self-organization, self-management, and self-adaptation are incorporated into the approach and its implementation, and how they benefit the sensor system: making the best use of available sensors, gathering and delivering data efficiently, exploiting the processing capabilities of the sensors, and protecting the sensor ensemble against spontaneous and occasional sensor faults or breakdowns without disrupting the activity recognition process. The approach is realized in a framework called DarSens, which performs activity recognition on a body sensor network with feature extraction and classification carried out entirely within the sensor network, without reverting to the processing capabilities of a client device using the sensor ensemble. Three experiments have been conducted to show the feasibility and performance of the approach in a typical activity recognition task.
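The text above only states that feature extraction and classification run within the sensor network; the sketch below illustrates what such in-network processing could look like on a single node. The concrete features (mean and variance over an acceleration window), the nearest-centroid classifier, and the window and class parameters are assumptions for illustration, not DarSens internals.

```c
/* Illustrative in-network feature extraction and classification on one
 * node; features, classifier, and centroid values are assumed. */
#include <stdio.h>
#include <math.h>

#define WINDOW  8   /* samples per feature window (assumed)        */
#define CLASSES 2   /* e.g. "standing" vs. "walking" (assumed)     */

/* Extract mean and variance from one window of acceleration samples. */
static void extract_features(const float *win, float *mean, float *var)
{
    float sum = 0.0f, sq = 0.0f;
    for (int i = 0; i < WINDOW; i++) {
        sum += win[i];
        sq  += win[i] * win[i];
    }
    *mean = sum / WINDOW;
    *var  = sq / WINDOW - (*mean) * (*mean);
}

/* Classify by the nearest centroid in (mean, variance) feature space. */
static int classify(float mean, float var, const float centroids[CLASSES][2])
{
    int best = 0;
    float best_d = INFINITY;
    for (int c = 0; c < CLASSES; c++) {
        float dm = mean - centroids[c][0];
        float dv = var  - centroids[c][1];
        float d  = dm * dm + dv * dv;
        if (d < best_d) { best_d = d; best = c; }
    }
    return best;
}

int main(void)
{
    /* Hypothetical trained centroids: class 0 = standing, 1 = walking. */
    const float centroids[CLASSES][2] = { { 1.0f, 0.01f }, { 1.2f, 0.2f } };
    const float window[WINDOW] = { 0.8f, 1.6f, 0.9f, 1.7f,
                                   0.7f, 1.5f, 1.0f, 1.8f };
    float mean, var;

    extract_features(window, &mean, &var);
    printf("class = %d (mean=%.2f, var=%.2f)\n",
           classify(mean, var, centroids), mean, var);
    return 0;
}
```

Keeping this computation on the nodes means only compact feature or class labels need to leave the body sensor network, rather than raw acceleration streams, which is what allows the client device to stay out of the processing loop.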