Do neural networks help in signal processing?

Comments · 1977 Views

Neural networks have found applications in nearly every field. In this article, one such area, signal processing, is examined in detail.

Neural networks and signal processing are two separate streams, and I wondered whether they meet anywhere. To my surprise, I found many applications of neural networks in signal processing. A neural network learns a task by analysing training examples, and plenty of algorithms have been developed to make this training easier. A common application of neural networks is recognising objects of different kinds. Signal processing, in turn, deals with analysing signals to extract information from them; its common applications involve processing signals in any form, such as video or audio.

In video signal processing:

This application shows how a neural network algorithm was built to remove AM impulses from television signals, and this method was observed to outperform the conventional one. The input signals are analysed and processed by the neural network algorithm, and the nets are designed in such a way that no feature is missed after processing.
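As a minimal sketch of the idea, the toy example below trains a small numpy MLP to map a window of an impulse-corrupted signal back to the clean centre sample. The window size, layer sizes, learning rate, and the synthetic sine-wave signal are all illustrative assumptions, not the method from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "clean" signal and a copy corrupted by sparse impulses.
t = np.linspace(0, 4 * np.pi, 2000)
clean = np.sin(t)
noisy = clean.copy()
impulse_idx = rng.choice(len(t), size=100, replace=False)
noisy[impulse_idx] += rng.choice([-3.0, 3.0], size=100)

# Training pairs: a window of noisy samples -> the clean centre sample.
W = 7  # window length (assumption)
X = np.stack([noisy[i:i + W] for i in range(len(t) - W)])
y = clean[W // 2: len(t) - W + W // 2]

# One tanh hidden layer, linear output, full-batch gradient descent on MSE.
H = 16
W1 = rng.normal(0, 0.3, (W, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.3, (H, 1)); b2 = np.zeros(1)
lr = 0.01

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

_, pred0 = forward(X)
mse_before = np.mean((pred0 - y) ** 2)

for _ in range(300):
    h, pred = forward(X)
    err = (pred - y)[:, None]               # error per sample, shape (N, 1)
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)        # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred1 = forward(X)
mse_after = np.mean((pred1 - y) ** 2)
print(mse_before, mse_after)  # training should reduce the error
```

Because the impulses are rare, the network learns the smooth structure of the signal from the surrounding window and suppresses the outliers.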

In spatial signal processing:

Neural networks are able to estimate the direction of arrival (DOA) simultaneously at any given time. This estimation uses both multilayer perceptron (MLP) and radial basis function (RBF) networks for estimating the arrival direction of narrowband signals. It was also shown that the neural-network results are better than those of the MUSIC algorithm.
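For reference, here is a small sketch of the MUSIC baseline that the neural approaches are compared against, run on a simulated uniform linear array. The element count, spacing, snapshot count, and noise level are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

M, d = 8, 0.5          # sensors and spacing in wavelengths (assumptions)
true_deg = 25.0        # true direction of arrival
N = 200                # number of snapshots

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

# One narrowband source plus white Gaussian noise at each sensor.
s = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
X = np.outer(steering(true_deg), s) + noise

# Sample covariance; with 1 source the noise subspace spans M-1 eigenvectors.
R = X @ X.conj().T / N
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, :-1]   # eigenvectors of the M-1 smallest eigenvalues

# MUSIC pseudo-spectrum: peaks where the steering vector is orthogonal
# to the noise subspace.
grid = np.arange(-90, 90.1, 0.1)
spectrum = []
for ang in grid:
    a = steering(ang)
    spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
est_deg = grid[int(np.argmax(spectrum))]
print(est_deg)  # should land close to 25.0
```

A neural DOA estimator would instead be trained on features such as this sample covariance, learning the mapping from array snapshots to angles directly.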

ECG classification:

QRS complexes were classified using a multilayer perceptron (MLP) trained with a modified version of the original backpropagation (BP) algorithm. The input to the network consisted of bitmaps of the QRS complexes, represented as 20 x 20 matrices. Different numbers of hidden layers (and of neurons in each hidden layer) were experimented with to observe the rate at which the network converged. Larger networks were observed to find the minimum of the error curve more easily, but increasing the network beyond a certain size did not improve performance; rather, it decreased it. It is evident that there exists an optimal neural network architecture for every given problem. The weight-update rules of the backpropagation algorithm were modified to include a varying relationship between momentum and learning rate, to observe any increase in the network's performance. A learning-rate adaptation factor was introduced into the learning algorithm to decrease the network's chances of missing a minimum on the error curve. The network was found to perform extremely well with the modified version of the algorithm, converging after only 9,000 learning cycles compared to 14,000 cycles with the original algorithm.
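The modifications described, momentum plus a loss-driven learning-rate adaptation factor, can be sketched as below. The toy two-class data stands in for the 20 x 20 QRS bitmaps, and the layer sizes, momentum constant, and adaptation factors are illustrative assumptions (a "bold driver" style rule), not the exact scheme from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated Gaussian blobs as stand-in training data.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100, dtype=float)

H = 8
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H,)); b2 = 0.0
lr, momentum = 0.2, 0.9
vel = [np.zeros_like(W1), np.zeros_like(b1), np.zeros_like(W2), 0.0]

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    return h, p

prev = np.inf
losses = []
for _ in range(200):
    h, p = forward(X)
    L = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # Learning-rate adaptation: grow lr while the loss keeps falling,
    # shrink it sharply when a minimum is overshot (assumed factors).
    if L < prev:
        lr = min(lr * 1.05, 1.0)
    else:
        lr *= 0.5
    prev = L
    losses.append(L)
    # Backpropagation through the sigmoid output and tanh hidden layer.
    dz = (p - y) / len(X)
    gW2 = h.T @ dz; gb2 = dz.sum()
    dh = np.outer(dz, W2) * (1 - h ** 2)
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    # Momentum update: velocity accumulates past gradients.
    grads = [gW1, gb1, gW2, gb2]
    for i, g in enumerate(grads):
        vel[i] = momentum * vel[i] - lr * g
    W1 += vel[0]; b1 += vel[1]; W2 += vel[2]; b2 += vel[3]

_, p = forward(X)
acc = np.mean((p > 0.5) == (y == 1))
print(acc)
```

Momentum helps the network coast through flat regions of the error curve, while the adaptation factor backs off the step size when the loss rises, which is the mechanism credited with reducing the chance of skipping over a minimum.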

There are many more developments emerging from the merger of these two streams that can be very helpful to humankind. The contents of this article are my own views after learning these concepts from various publications.
