The deep learning model used in this project is a Hierarchical Neural Attention Encoder. The encoder takes all previous network attack events into account so that it can learn from them and predict accordingly. The baseline models use LSTM to solve this problem. While LSTM has proven successful on sequential-input problems, it has significant drawbacks: it can only handle input sequences of up to about a hundred steps at a time, and, since it is based on an RNN, its complex structure consumes a lot of system resources. This project therefore proposes a Hierarchical Neural Attention Encoder, which offers advantages over traditional LSTM models. The Hierarchical Neural Attention Encoder consists of a bidirectional RNN coupled with an attention network: the bidirectional RNN determines the meaning of the sequence of events, and the attention network returns a weight for each one. The attention network identifies only the important features, filtering out the noisy features that are not needed in the analysis. It is also more computationally efficient, handling on the order of 10,000 input vectors at a time. The model will initially be trained on large datasets gathered from different sources, such as Acunetix logs and logs generated by a Python script. Compared to the LSTM model, this attention-based model gives better precision and accuracy on attack prediction.
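To make the architecture concrete, here is a minimal, dependency-free sketch of the two components described above: a bidirectional RNN pass over a sequence of attack events, followed by an attention network that returns one weight per event. All dimensions, weight initializations, and the additive-style scoring are illustrative assumptions, not the project's trained model.

```python
import math
import random

random.seed(0)

HID = 4   # hidden size per direction (toy value, an assumption)
DIM = 3   # feature dimension per attack event (toy value, an assumption)

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def simple_rnn(inputs, W_x, W_h):
    """One directional pass of a plain tanh RNN over the event sequence."""
    h = [0.0] * HID
    states = []
    for x in inputs:
        pre = [a + b for a, b in zip(matvec(W_x, x), matvec(W_h, h))]
        h = [math.tanh(p) for p in pre]
        states.append(h)
    return states

def bidirectional_encode(inputs, W_x, W_h):
    """Concatenate forward and backward hidden states for each time step."""
    fwd = simple_rnn(inputs, W_x, W_h)
    bwd = simple_rnn(inputs[::-1], W_x, W_h)[::-1]
    return [f + b for f, b in zip(fwd, bwd)]

def attention_weights(states, v):
    """Score each step, then softmax: one weight per event, summing to 1."""
    scores = [sum(s_i * v_i for s_i, v_i in zip(s, v)) for s in states]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Toy sequence of 5 attack events, each a DIM-dimensional feature vector
events = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(5)]
W_x = rand_matrix(HID, DIM)
W_h = rand_matrix(HID, HID)
v = [random.uniform(-1, 1) for _ in range(2 * HID)]  # attention vector

states = bidirectional_encode(events, W_x, W_h)
alpha = attention_weights(states, v)
# Attention-weighted summary vector, which a classifier head would consume
context = [sum(a * s[i] for a, s in zip(alpha, states)) for i in range(2 * HID)]
print(len(alpha), round(sum(alpha), 6))
```

The attention weights `alpha` make the model interpretable: high-weight events are the ones the encoder treats as important for the prediction, which is how the noisy, irrelevant features get suppressed.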