ANN-based respiration detection using UWB radar
Non-contact detection of human respiration has many possible uses, e.g. health monitoring in clinical institutions, homes, or prisons, alarm systems, fire evacuation, industrial or home automation, and triggering of medical imaging. Novelda's Ultra Wide Band (UWB) radar can detect the respiration of humans and animals by measuring the distance to the chest wall, which results in a characteristic Doppler spectrum. One challenge is oscillating objects, e.g. fans and ceiling lamps, which produce a similar Doppler spectrum but in many use cases should not be classified as human respiration. Artificial Neural Networks (ANNs) are a form of Artificial Intelligence (AI) and a modern approach to such classification problems, and they seem well suited to this case, where training data is abundant.

This thesis studies the use of ANNs to achieve the highest possible sensitivity and specificity for this classification problem. Throughout the project, a variety of artificial neural networks were tested using Matlab's "Neural Networks Toolbox". The training data was recorded by the author and Novelda employees using Novelda's XeThru UWB radar.

The results show that by preprocessing the radar signals into time-vs-frequency images of signal energy, and then using a convolutional neural network (CNN) as the classifier, human respiration can be distinguished from other oscillating objects with high sensitivity and specificity. On the test data used in this study, the sensitivity and specificity achieved were 99.1 % and 99.8 %, respectively, although these figures will likely vary with different test data. The filters of the CNNs' convolutional layers, as well as recordings reverse-engineered from a trained CNN, were also studied. These studies revealed that the CNN was in fact looking for the variations in frequency found in natural human respiration but not in oscillating objects.
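The thesis itself used Matlab's toolboxes; purely as an illustrative sketch, the preprocessing into a time-vs-frequency image of signal energy could look roughly like the short-time DFT below. The frame length, hop size, sample rate, and test signal are all assumed values for the example, not figures taken from the thesis.

```python
import math, cmath

def spectrogram_energy(signal, frame_len=64, hop=32):
    """Split the signal into overlapping frames and compute per-frame
    DFT bin energies, yielding a time-vs-frequency energy image."""
    image = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        # Hann window reduces spectral leakage between bins.
        windowed = [x * (0.5 - 0.5 * math.cos(2 * math.pi * n / (frame_len - 1)))
                    for n, x in enumerate(frame)]
        # Energy in each frequency bin up to the Nyquist bin.
        row = []
        for k in range(frame_len // 2 + 1):
            acc = sum(x * cmath.exp(-2j * math.pi * k * n / frame_len)
                      for n, x in enumerate(windowed))
            row.append(abs(acc) ** 2)
        image.append(row)
    return image  # image[t][k]: energy at time frame t, frequency bin k

# Toy input: a slow 0.25 Hz "breathing rate" sinusoid sampled at 16 Hz.
fs = 16.0
sig = [math.sin(2 * math.pi * 0.25 * n / fs) for n in range(512)]
img = spectrogram_energy(sig)
```

With these assumed parameters the frequency resolution is fs / frame_len = 0.25 Hz, so the toy breathing tone concentrates its energy in bin 1 of every time frame; a real system would feed such images to the classifier.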
The network also seemed to respond only to very local patterns, which may be a result of the relatively shallow architectures used here compared to the CNNs used for e.g. face recognition. Thoughts on future work are also discussed in this report, including why deeper CNNs could be suited to this problem, smarter use of some of the tested concepts, such as artificial expansion of the training data, and the use of ANNs to discover patterns for use in other types of classifiers.
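The finding that the CNN looks for local frequency variation can be illustrated with the basic convolution operation a conv layer performs. The kernel and toy energy image below are hypothetical, not taken from the thesis; the sketch only shows how a small filter produces a strong response exactly where energy shifts between adjacent frequency bins from one time frame to the next.

```python
def conv2d_valid(image, kernel):
    """Slide a small kernel over the image (no padding) and return the
    feature map of dot products, the core operation of a conv layer."""
    H, W = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# Hypothetical 2x2 "frequency shift" detector: positive response when
# energy moves one bin to the right between consecutive time frames.
kernel = [[ 1.0, -1.0],
          [-1.0,  1.0]]

# Toy energy image (rows = time frames, columns = frequency bins):
# a tone that jumps from bin 1 to bin 2 halfway through.
image = [[0, 1, 0, 0]] * 3 + [[0, 0, 1, 0]] * 3
fmap = conv2d_valid(image, kernel)
```

Running this, the feature map is zero wherever the frequency is constant and peaks at the time frame where the tone jumps, which is the kind of local time-frequency cue the abstract describes the trained filters responding to.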