For everything presented on a receipt, we have a desired output: the right date, the time, the merchant name, products, SKUs, return date – you get the idea. Humans can take a look at a receipt and immediately process the relevant information and what it means. That's because our brains are trained to recognize patterns in the whole sequence. All receipts are formatted similarly, so our brains instantly know where to find the merchant's name, their address, the grand total, and so on. That is, our brain can instantly classify sequential information on a receipt.

## Introducing "memory" to the net

How easily humans can classify sequential information is deceptive. We have millions of years of evolution on our side, with billions of brain cells constantly firing to process, store, and retrieve information. Our memory is critical to how we store, process, and understand content. Teaching neural nets to process information in the same way as a human brain is not a simple task. For starters, they don't have our capacity to remember information. That's why we had to create a neural network with a "memory": a recurrent neural network. These networks, unlike feedforward nets, allow information to be passed recursively within the network.
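That recurrent "memory" can be sketched as a single update step: the hidden state from the previous character is fed back in alongside the current character, so everything read so far influences what comes next. This is a minimal NumPy illustration, not Sensibill's actual model – the layer sizes and random weights are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

hidden, vocab = 8, 128  # hypothetical sizes: 8 hidden units, ASCII vocabulary
W_xh = rng.normal(scale=0.1, size=(hidden, vocab))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden, hidden))  # hidden -> hidden: the "memory"

def rnn_step(h, ch):
    """One recurrent step: the new state mixes the current character with
    the previous state, so earlier characters influence later states."""
    x = np.zeros(vocab)
    x[ord(ch)] = 1.0  # one-hot encode the character
    return np.tanh(W_xh @ x + W_hh @ h)

h = np.zeros(hidden)
for ch in "TOTAL $12.99":
    h = rnn_step(h, ch)  # h now summarizes everything read so far
```

Because `W_hh @ h` re-enters the computation at every step, the state after "$12.99" still carries a trace of the "TOTAL" that preceded it – exactly the recursion a feedforward net lacks.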
Most interesting problems require understanding something in context. For example, in order to understand what you're reading right now, your brain needs to remember what it read one word earlier, and one word before that. And it needs to keep remembering those words as it continues to process new words. Previous information, or context, matters to us. Reading a receipt may not be complicated per se, but it still requires us to understand basic sequential information.

## Good robot! Bad robot!

Characters on a receipt have no meaning in isolation. The information is contained in the sequence itself – the string of digits and characters. A price has no meaning if it's not attached to an item or a total. What we're trying to do at Sensibill is train our machine to understand these patterns – basically, to learn the syntax of receipts.
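To make "characters have no meaning in isolation" concrete, here is a toy, hypothetical labeler – not Sensibill's approach – showing that the same token on a receipt earns different labels purely from its neighbours:

```python
# Hypothetical illustration: the token "2" means different things
# depending on the tokens around it: quantity, price, or part of a date.
lines = [
    ["2", "x", "Latte"],            # "2" is a quantity
    ["Total", "$", "2"],            # "2" is a price
    ["2", "/", "07", "/", "2024"],  # "2" is part of a date
]

def label(tokens, i):
    """Toy context-dependent labeler: it looks at neighbours, never the token alone."""
    left = tokens[i - 1] if i > 0 else ""
    right = tokens[i + 1] if i < len(tokens) - 1 else ""
    if left == "$" or left.lower() == "total":
        return "PRICE"
    if left == "/" or right == "/":
        return "DATE"
    if right == "x":
        return "QTY"
    return "OTHER"

for tokens in lines:
    print([label(tokens, i) for i, t in enumerate(tokens) if t == "2"])
# The same token "2" receives three different labels: QTY, PRICE, DATE.
```

Hand-written rules like these don't scale to the variety of real receipts, which is exactly why the context has to be learned from the sequence instead.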
Feedforward neural networks are "simple" nets. Information flows strictly forward – from the inputs, through to the outputs. Although feedforward networks have shown great potential for straightforward problems like image classification, they are lackluster when it comes to solving problems that require the network to "remember" information over time. In other words, a feedforward network can only learn to consider the current input it is exposed to, without any context of the past or the future.
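The statelessness described above shows up directly in a toy forward pass: the output is a pure function of the current input, with no hidden state carried between calls. A minimal sketch, assuming arbitrary layer sizes and random weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feedforward net: one hidden layer, nothing stored between calls.
W1 = rng.normal(size=(4, 3))  # input (3 features) -> hidden (4 units)
W2 = rng.normal(size=(2, 4))  # hidden -> output (2 classes)

def feedforward(x):
    """Output depends only on the current input x; no memory of past inputs."""
    h = np.tanh(W1 @ x)  # information flows strictly forward
    return W2 @ h

x = np.array([1.0, 0.5, -0.2])
y1 = feedforward(x)
y2 = feedforward(x)  # identical to y1: the net remembers nothing between calls
```

Feeding the same input twice always yields the same output, no matter what came in between – which is fine for classifying one image, but useless for a sequence whose meaning depends on what came before.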