Journal article

Fast learning of biased patterns in neural networks.

Abstract:
Standard gradient descent training algorithms for neural networks require training times of the same order as the number of neurons N when the patterns are biased. In this paper, modified algorithms are presented whose training times equal those of the unbiased case, which are of order 1. Exact convergence proofs are given. Gain parameters which produce minimal learning times in large networks are computed by replica methods. It is demonstrated how these modified algorithms are applied to produce four types of solutions to the learning problem: (1) a solution with all internal fields equal to the desired output, (2) the Adaline (or pseudo-inverse) solution, (3) the perceptron of optimal stability without threshold, and (4) the perceptron of optimal stability with threshold.
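The record contains no code, so as a rough illustration of one of the solution types named in the abstract, here is a minimal NumPy sketch of the Adaline (pseudo-inverse) solution for biased binary patterns. The network size N, pattern count P, and bias m are illustrative assumptions, not values from the paper, and this is not the paper's modified training algorithm itself.

```python
# Minimal sketch (illustrative, not the paper's algorithm): the Adaline /
# pseudo-inverse solution for storing P biased +-1 patterns in an N-unit
# perceptron. For P <= N this also realizes solution type (1): every
# internal field h_mu = w . x_mu equals the desired output y_mu exactly.
import numpy as np

rng = np.random.default_rng(0)
N, P, m = 200, 100, 0.6          # assumed sizes; bias E[x_i] = m

# Biased +-1 patterns: P(x_i = +1) = (1 + m) / 2, random +-1 targets.
X = np.where(rng.random((P, N)) < (1 + m) / 2, 1.0, -1.0)
y = np.where(rng.random(P) < 0.5, 1.0, -1.0)

# Pseudo-inverse weights; for linearly independent patterns X @ w = y holds.
w = np.linalg.pinv(X) @ y
fields = X @ w
print("max |h_mu - y_mu|:", np.abs(fields - y).max())   # ~ 0
```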


Authors


Institution:
University of Oxford
Division:
MPLS
Department:
Physics
Sub department:
Theoretical Physics
Role:
Author


Journal:
International Journal of Neural Systems
Volume:
4
Issue:
3
Pages:
223-230
Publication date:
1993-09-01
DOI:
10.1142/S0129065793000183
EISSN:
1793-6462
ISSN:
0129-0657


Language:
English
Keywords:
Pubs id:
pubs:16970
UUID:
uuid:af7921e1-e727-4b1a-b626-b05e3baea39e
Local pid:
pubs:16970
Source identifiers:
16970
Deposit date:
2013-02-20
