On the capabilities of multilayer perceptrons (Q1105387)
From MaRDI portal
Property / full work available at URL: https://doi.org/10.1016/0885-064x(88)90020-9
Property / OpenAlex ID: W2012903341
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | On the capabilities of multilayer perceptrons | scientific article |
Statements
On the capabilities of multilayer perceptrons (English)
1988
What is the smallest multilayer perceptron able to compute arbitrary and random functions? Previous results show that a net with one hidden layer containing \(N-1\) threshold units can implement an arbitrary dichotomy of \(N\) points. A construction is presented here for implementing an arbitrary dichotomy with one hidden layer containing \(\lceil N/d\rceil\) units, for any set of \(N\) points in general position in \(d\) dimensions. This is in fact the smallest such net, as dichotomies are described which cannot be implemented by any net with fewer units. Several constructions are presented of one-hidden-layer nets implementing arbitrary functions into the \(e\)-dimensional hypercube. One of these has only \(\lfloor 4N/d\rfloor \lceil e/\lfloor \log_2(N/d)\rfloor \rceil\) units in its hidden layer. Arguments based on a function counting theorem of \textit{T. M. Cover} [IEEE Trans. Electron. Comput. EC-14, 326-334 (1965; Zbl 0152.182)] establish that any net implementing arbitrary functions must have at least \(Ne/\log_2(N)\) weights, so that no net with one hidden layer containing fewer than \(Ne/(d\log_2(N))\) units will suffice. Simple counts also show that if the weights are only allowed to assume one of \(n_g\) possible values, no net with fewer than \(Ne/\log_2(n_g)\) weights will suffice. Thus the gain coming from using real-valued synapses appears to be only logarithmic. The circuit implementing functions into the \(e\)-dimensional hypercube realizes such logarithmic gains. Since the counting arguments bound from below only the number of weights, the possibility is suggested that, if suitable restrictions are imposed on the input vector set to avoid topological obstructions, two-hidden-layer nets with \(O(N)\) weights but only \(O(\sqrt{N})\) threshold units might suffice for arbitrary dichotomies.
Interesting and potentially sufficient restrictions include (a) that the vectors are binary, i.e., lie on the \(d\)-dimensional hypercube, or (b) that they are randomly and uniformly selected from a bounded region.
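Cover's function-counting theorem, which underlies the lower bounds quoted above, is easy to check numerically. A minimal sketch, assuming a single threshold unit with bias acting on points in general position (the function names here are illustrative, not from the paper):

```python
from math import comb, log2

def cover_count(n_points: int, d: int) -> int:
    """Number of dichotomies of n_points points in general position
    in d dimensions realizable by one threshold unit with bias
    (Cover, 1965)."""
    return 2 * sum(comb(n_points - 1, k) for k in range(d + 1))

def min_weights(n_points: int, e: int) -> float:
    """Counting lower bound, per the review: any net computing
    arbitrary functions of n_points inputs into the e-dimensional
    hypercube needs at least N*e/log2(N) weights."""
    return n_points * e / log2(n_points)

# All 2**N dichotomies are realizable by one unit only while N <= d + 1;
# beyond that a single unit covers a shrinking fraction of them.
print(cover_count(4, 3))    # 16 == 2**4
print(cover_count(10, 3))   # 260, far fewer than 2**10 == 1024
print(min_weights(1024, 1)) # 102.4: ~N/log2(N) weights even for e = 1
```

The rapid gap between `cover_count(N, d)` and `2**N` for `N > d + 1` is what forces the hidden layer to grow roughly like \(N/d\) in the construction described above.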
multilayer perceptron
dichotomy
hidden layer