
Learning with Recurrent Neural Networks
(English)
Lecture Notes in Control and Information Sciences 254
Barbara Hammer

Print on Demand - This item is printed to order for you!

44,95 €

incl. VAT · free shipping
This product is printed to order; delivery time approx. 14 working days


Product Description

The book details folding networks, a new approach that enables neural networks to deal with symbolic data.
It presents both practical applications and a precise theoretical foundation.
Folding networks, a generalisation of recurrent neural networks to tree-structured inputs, are investigated as a mechanism for learning regularities in classical symbolic data. The architecture, the training mechanism, and several applications in different areas are explained. Afterwards, a theoretical foundation is presented which proves that the approach is in principle appropriate as a learning mechanism: the universal approximation ability of folding networks is investigated, including several new results for standard recurrent neural networks such as explicit bounds on the required number of neurons and the super-Turing capability of sigmoidal recurrent networks. The information-theoretical learnability is examined, including several contributions to distribution-dependent learnability, an answer to an open question posed by Vidyasagar, and a generalisation of the recent luckiness framework to function classes. Finally, the complexity of training is considered, including new results on the loading problem for standard feedforward networks with an arbitrary multilayered architecture, a correlated number of neurons and training set size, a varying number of hidden neurons but fixed input dimension, or the sigmoidal activation function, respectively.
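The construction behind a folding network can be stated compactly: one shared encoder network is applied recursively over a tree, combining each node's label with the fixed-size codes of its subtrees into a new fixed-size code, so that a standard feedforward output layer can process the whole structure. The following minimal Python sketch illustrates this idea; all identifiers, dimensions, and the random weights are illustrative assumptions, not code from the book.

import numpy as np

# Sketch of the folding-network idea (illustrative; identifiers, dimensions,
# and random weights are assumptions, not code from the book).
# A tree is a pair (label, children); a single shared encoder maps a node's
# label plus the fixed-size codes of its subtrees to a fixed-size code, so
# arbitrary trees fold into vectors a feedforward output layer can consume.

rng = np.random.default_rng(0)
DIM, FANOUT, LABELS = 8, 2, 4          # code size, max children, label alphabet

W = rng.normal(scale=0.5, size=(DIM, LABELS + FANOUT * DIM))  # shared encoder weights
EMPTY = np.zeros(DIM)                  # code of a missing subtree (base case)

def encode(tree):
    """Recursively fold a (label, children) tree into a vector of size DIM."""
    label, children = tree
    one_hot = np.eye(LABELS)[label]
    kids = [encode(c) for c in children] + [EMPTY] * (FANOUT - len(children))
    x = np.concatenate([one_hot, *kids])
    return np.tanh(W @ x)              # sigmoidal units, as analysed in the book

# Example: encode the tree 2(0, 3(1)); a sequence is the special case FANOUT = 1,
# which recovers a standard recurrent network.
code = encode((2, [(0, []), (3, [(1, [])])]))
print(code.shape)                      # (8,)

In the book's setting, the encoder and the output layer are trained jointly, so the tree codes adapt to the learning task at hand.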
From the contents: Introduction, Recurrent and Folding Networks: Definitions, Training, Background, Applications.- Approximation Ability: Foundations, Approximation in Probability, Approximation in the Maximum Norm, Discussions and Open Questions.- Learnability: The Learning Scenario, PAC Learnability, Bounds on the VC-dimension of Folding Networks, Consequences for Learnability, Lower Bounds for the LRAAM, Discussion and Open Questions.- Complexity: The Loading Problem, The Perceptron Case, The Sigmoidal Case, Discussion and Open Questions.- Conclusion.

Table of Contents



Recurrent and folding networks.- Approximation ability.- Learnability.- Complexity.- Conclusion.

