vowel

From MaRDI portal

OpenML ID: 307
MaRDI QID: Q6033078

OpenML dataset with id 307

No author found.

Full work available at URL: https://api.openml.org/data/v1/download/52210/vowel.arff

Upload date: 22 August 2014


Dataset Characteristics

Number of classes: 11
Number of features: 13 (10 numeric, 3 symbolic; 1 binary in total)
Number of instances: 990
Number of instances with missing values: 0
Number of missing values: 0
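
As a convenience (not part of the original description), the dataset can be fetched programmatically from OpenML by its id 307; the sketch below uses scikit-learn's fetch_openml and assumes a recent scikit-learn with pandas installed.

    # Minimal sketch: load OpenML dataset 307 ("vowel") into a DataFrame.
    from sklearn.datasets import fetch_openml

    vowel = fetch_openml(data_id=307, as_frame=True)
    X = vowel.data      # predictor columns (numeric and symbolic features)
    y = vowel.target    # the 11-class vowel label

    print(X.shape)      # expected: 990 rows
    print(y.nunique())  # expected: 11 classes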

Author: Peter Turney (peter@ai.iit.nrc.ca)
Source: UCI - date unknown
Please cite: UCI citation policy

Vowel Recognition (Deterding data): speaker-independent recognition of the eleven steady-state vowels of British English, using a specified training set of LPC-derived log area ratios. Collected by David Deterding (data and non-connectionist analysis), Mahesan Niranjan (first connectionist analysis), and Tony Robinson (description, program, data, and results).

A very comprehensive description, including comments by the authors, can be found here.

The problem is specified by the accompanying data file, "vowel.data". This consists of a three-dimensional array: voweldata[speaker, vowel, input]. The speakers are indexed by the integers 0-89. (Actually, there are fifteen individual speakers, each saying each vowel six times.) The vowels are indexed by the integers 0-10. For each utterance there are ten floating-point input values, with array indices 0-9.
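
To make that layout concrete, the sketch below (an illustration, not part of the original distribution) rebuilds the voweldata[speaker, vowel, input] view with NumPy from a flat 990 x 10 matrix, assuming the rows are ordered speaker-major and vowel-minor as in the original vowel.data file.

    import numpy as np

    # Placeholder for the 990 utterances x 10 log-area-ratio inputs,
    # e.g. the ten numeric columns read from vowel.data / the ARFF file.
    features = np.zeros((990, 10))

    # voweldata[speaker, vowel, input]: speakers 0-89, vowels 0-10, inputs 0-9.
    # Only valid if rows are ordered by speaker first, then vowel.
    voweldata = features.reshape(90, 11, 10)

    print(voweldata[5, 3, :])  # the ten inputs for "speaker" 5, vowel 3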

The problem is to train the network as well as possible using only data from "speakers" 0-47, and then to test the network on speakers 48-89, reporting the number of correct classifications in the test set.
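
A sketch of that split (again only an illustration): with a per-row speaker index in 0-89, speakers 0-47 give 48 x 11 = 528 training utterances and speakers 48-89 give 42 x 11 = 462 test utterances. The speaker_idx construction below assumes the speaker-major row order noted above.

    import numpy as np

    # Per-row "speaker" index 0-89; with speaker-major ordering this is
    # simply 90 consecutive blocks of 11 vowels each.
    speaker_idx = np.repeat(np.arange(90), 11)
    X = np.zeros((990, 10))           # placeholder feature matrix
    y = np.tile(np.arange(11), 90)    # placeholder vowel labels 0-10

    train = speaker_idx <= 47         # speakers 0-47  -> 528 rows
    test = speaker_idx >= 48          # speakers 48-89 -> 462 rows

    X_train, y_train = X[train], y[train]
    X_test, y_test = X[test], y[test]
    print(X_train.shape, X_test.shape)   # (528, 10) (462, 10)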

For a more detailed explanation of the problem, see the excerpt from Tony Robinson's Ph.D. thesis in the COMMENTS section. In Robinson's opinion, connectionist problems fall into two classes: the possible and the impossible. He is interested in the latter, by which he means problems that have no exact solution. Thus the problem here is not to see how fast a network can be trained (although this is important), but to maximise a less-than-perfect performance.

        1. METHODOLOGY

Report the number of test vowels classified correctly, i.e. the number of occurrences in which the distance from the actual output to the correct target output was the smallest of the distances from the actual output to all possible target outputs.
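
A sketch of this scoring rule (an illustration only; the benchmark leaves the target coding to the experimenter, so one-of-11 target vectors are assumed here): each actual output is assigned to the class whose target vector is nearest in Euclidean distance, and the correct assignments are counted.

    import numpy as np

    def count_correct(outputs, labels, targets=np.eye(11)):
        """Count outputs whose nearest target vector (Euclidean distance)
        belongs to the correct class."""
        # distances[i, k] = ||outputs[i] - targets[k]||
        distances = np.linalg.norm(outputs[:, None, :] - targets[None, :, :], axis=-1)
        predicted = distances.argmin(axis=1)
        return int((predicted == labels).sum())

    # Toy usage with random "network outputs" for 462 test utterances.
    rng = np.random.default_rng(0)
    outputs = rng.random((462, 11))
    labels = rng.integers(0, 11, size=462)
    print(count_correct(outputs, labels), "of", len(labels), "correct")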

Though this is not the focus of Robinson's study, it would also be useful to report how long the training took (measured in pattern presentations or with a rough count of floating-point operations required) and what level of success was achieved on the training and testing data after various amounts of training. Of course, the network topology and algorithm used should be precisely described as well.

        2. VARIATIONS

This benchmark is proposed to encourage the exploration of different node types. Please theorise/experiment/hack. The author (Robinson) will try to correspond by email if requested. In particular there has been some discussion recently on the use of a cross-entropy distance measure, and it would be interesting to see results for that.
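
One hedged reading of that suggestion (not Robinson's specification) is to score against one-of-11 targets with cross-entropy instead of Euclidean distance; for one-hot targets this reduces to picking the class with the largest normalised output, as in the sketch below.

    import numpy as np

    def count_correct_xent(outputs, labels, eps=1e-12):
        """Classify by the one-of-11 target with the smallest cross-entropy
        -sum_k t_k * log(p_k); for one-hot targets this is just -log(p_k)."""
        # Softmax-normalise the raw outputs into probabilities.
        z = outputs - outputs.max(axis=1, keepdims=True)
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        predicted = (-np.log(p + eps)).argmin(axis=1)
        return int((predicted == labels).sum())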

        3. NOTES

1. Each of these numbers is based on a single trial with random starting weights. More trials would of course be preferable, but the computational facilities available to Robinson were limited.

2. Graphs are given in Robinson's thesis showing test-set performance vs. epoch count for some of the training runs. In most cases, performance peaks at around 250 correct, after which performance decays to different degrees. The numbers given above are final performance figures after about 3000 trials, not the peak performance obtained during the run.

        4. REFERENCES

[Deterding89] D. H. Deterding, "Speaker Normalisation for Automatic Speech Recognition", PhD thesis (submitted), University of Cambridge, 1989.

[NiranjanFallside88] M. Niranjan and F. Fallside, "Neural Networks and Radial Basis Functions in Classifying Static Speech Patterns", Cambridge University Engineering Department, CUED/F-INFENG/TR.22, 1988.

[RenalsRohwer89-ijcnn] S. Renals and R. Rohwer, "Phoneme Classification Experiments Using Radial Basis Functions", submitted to the International Joint Conference on Neural Networks, Washington, 1989.

[RabinerSchafer78] L. R. Rabiner and R. W. Schafer, "Digital Processing of Speech Signals", Prentice Hall, Englewood Cliffs, New Jersey, 1978.

[PragerFallside88] R. W. Prager and F. Fallside, "The Modified Kanerva Model for Automatic Speech Recognition", Cambridge University Engineering Department, CUED/F-INFENG/TR.6, 1988.

[BroomheadLowe88] D. Broomhead and D. Lowe, "Multi-variable Interpolation and Adaptive Networks", Royal Signals and Radar Establishment, Malvern, RSRE memo #4148, 1988.

[RobinsonNiranjanFallside88-tr] A. J. Robinson, M. Niranjan and F. Fallside, "Generalising the Nodes of the Error Propagation Network", Cambridge University Engineering Department, CUED/F-INFENG/TR.25, 1988.

[Robinson89] A. J. Robinson, "Dynamic Error Propagation Networks", Cambridge University Engineering Department, 1989.

[McCullochAinsworth88] N. McCulloch and W. A. Ainsworth, "Speaker Independent Vowel Recognition using a Multi-Layer Perceptron", Proceedings of Speech'88, Edinburgh, 1988.

[RobinsonFallside88-neuro] A. J. Robinson and F. Fallside, "A Dynamic Connectionist Model for Phoneme Recognition", Proceedings of nEuro'88, Paris, June 1988.


        Notes
  • This is version 2. Version 1 is hidden because it includes a feature dividing the data into train and test sets; in OpenML this information is instead made explicitly available in the corresponding task.