Assembly output codes for learning neural networks - Université de Rennes
Conference paper, Year: 2016

Assembly output codes for learning neural networks

Abstract

Neural network-based classifiers usually encode the class labels of input data via a completely disjoint code, i.e., a binary vector with a single bit set for each category. We use coding theory to propose assembly codes in which each element is associated with several classes, yielding better target vectors. These codes emulate the combination of several classifiers, a well-known method for improving decision accuracy. Our experiments on datasets such as MNIST with a multi-layer neural network show that assembly output codes, which are characterized by a higher minimum Hamming distance, result in better classification performance. These codes are also well suited to the use of clustered clique-based networks in category representation.
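To illustrate the idea of output codes with a larger minimum Hamming distance, here is a minimal sketch in Python. It is not the paper's code construction: the 16-unit code length, the random codewords, and the helper names (min_hamming_distance, decode) are illustrative assumptions. It compares the minimum Hamming distance of one-hot targets with that of a denser binary code in which each output unit is shared by several classes, and decodes a corrupted output vector to the nearest codeword.

import numpy as np

def min_hamming_distance(codebook):
    # Smallest pairwise Hamming distance between codewords (rows).
    n = codebook.shape[0]
    return min(
        int(np.sum(codebook[i] != codebook[j]))
        for i in range(n)
        for j in range(i + 1, n)
    )

def decode(output, codebook):
    # Threshold the network output at 0.5 and return the index of the
    # codeword that is closest in Hamming distance.
    bits = (output >= 0.5).astype(int)
    return int(np.argmin(np.sum(codebook != bits, axis=1)))

n_classes = 10

# Baseline: completely disjoint (one-hot) codes; minimum Hamming distance is 2.
one_hot = np.eye(n_classes, dtype=int)

# Illustrative denser code of length 16 in which each output unit is shared by
# several classes. Random codewords are used here purely for illustration; the
# paper uses coding theory to construct its assembly codes.
rng = np.random.default_rng(0)
assembly_like = (rng.random((n_classes, 16)) < 0.5).astype(int)

print("one-hot code, min Hamming distance:", min_hamming_distance(one_hot))
print("dense code,   min Hamming distance:", min_hamming_distance(assembly_like))

# Nearest-codeword decoding of a corrupted output: with a larger minimum
# distance, a flipped output unit is more likely to still decode correctly.
noisy = assembly_like[3].astype(float)
noisy[0] = 1.0 - noisy[0]  # corrupt one output unit
print("decoded class:", decode(noisy, assembly_like))

Running the sketch prints the two minimum distances and the decoded class, which conveys why targets with larger minimum Hamming distance tolerate more output errors than one-hot targets.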
Main file
PapierISTC.pdf (566.17 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01502488, version 1 (05-04-2017)


Cite

Philippe Tigréat, Carlos Rosar Kos Lassance, Xiaoran Jiang, Vincent Gripon, Claude Berrou. Assembly output codes for learning neural networks. ISTC 2016 - 9th International Symposium on Turbo Codes & Iterative Information Processing, Sep 2016, Brest, France. pp.285-289, ⟨10.1109/ISTC.2016.7593122⟩. ⟨hal-01502488⟩
