Digital computing in neural networks: playing randomness against disorder
How can unreliable neurons precisely store memories beyond their own learning capabilities? We hypothesize that the brain performs a form of digital computing to overcome the unreliability of its neurons by exploiting random synaptic connections to its own advantage. Despite the learning limitations of neurons, which make large errors, random connections help to generate high entropy, simultaneously conveying maximum information. Theoretical equations and computer experiments show that, surprisingly, only a small number of neurons is enough to code sequences of items drawn from a repertoire of a billion different values. This result led us to derive a general equation for the learning capabilities of neural networks, with deep implications for neuroscience modeling and artificial intelligence, toward the development of a neural information theory.
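
As a back-of-envelope illustration of the counting involved (not the paper's derivation or its general learning equation): if each neuron behaved as a maximum-entropy binary unit contributing at most 1 bit, then indexing one item among a billion values would require only about log2(10^9) ≈ 30 such units. The Python sketch below makes this explicit; the function name and parameters are illustrative assumptions.

```python
import math

def min_binary_neurons(repertoire_size: int, sequence_length: int = 1) -> int:
    """Information-theoretic lower bound on the number of maximum-entropy
    binary units needed to code a sequence of items, each drawn from a
    repertoire of `repertoire_size` distinct values.

    Assumes each unit contributes at most 1 bit (maximum entropy); this is
    an illustrative counting argument only.
    """
    bits_per_item = math.log2(repertoire_size)    # ~30 bits for a billion values
    total_bits = bits_per_item * sequence_length  # bits for the whole sequence
    return math.ceil(total_bits)

# One item from a billion-value repertoire needs only ~30 ideal binary units.
print(min_binary_neurons(10**9))        # -> 30
# A 10-item sequence needs ~299 such units (ceil of 10 * log2(1e9)).
print(min_binary_neurons(10**9, 10))    # -> 299
```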