New Pub: Classification of producer characteristics in primate long calls using neural networks

Her findings are intriguing both methodologically and analytically. She used machine learning models called artificial neural networks to condense the large number of measures taken on each vocalisation into a meaningful set, which was then used to test whether each call encodes information about its producer: sex, age class, or even individual identity.
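For readers curious what that kind of pipeline can look like, here is a minimal sketch, assuming scikit-learn and entirely fabricated acoustic measurements. This is not the authors' model, just the general shape of training a small network to predict a producer trait from per-call features.

```python
# Minimal sketch: train a small neural network to predict a producer trait
# (here, sex) from per-call acoustic measures. All data are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: 200 calls x 20 acoustic measures (e.g., duration,
# peak frequency, bandwidth), each call labeled with its producer's sex.
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)  # 0 = female, 1 = male (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardize the measures before feeding them to the network.
scaler = StandardScaler().fit(X_train)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

print("classification accuracy:", clf.score(scaler.transform(X_test), y_test))
```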
Here is the paper’s abstract:
Primate long calls are high-amplitude vocalizations that can be critical in maintaining intragroup contact and intergroup spacing, and can encode abundant information about a call’s producer, such as age, sex, and individual identity. Long calls of the wild emperor (Saguinus imperator) and saddleback (Leontocebus weddelli) tamarins were tested for these identity signals using artificial neural networks, machine-learning models that reduce subjectivity in vocalization classification. To assess whether modelling could be streamlined by using only factors which were responsible for the majority of variation within networks, each series of networks was re-trained after implementing two methods of feature selection. First, networks were trained and run using only the subset of variables whose weights accounted for ≥50% of each original network’s variation, as identified by the networks themselves. In the second, only variables implemented by decision trees in predicting outcomes were used. Networks predicted dependent variables above chance (≥58.7% for sex, ≥69.2% for age class, and ≥38.8% for seven to eight individuals), but classification accuracy was not markedly improved by feature selection. Findings are discussed with regard to implications for future studies on identity signaling in vocalizations and streamlining of data analysis.
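The abstract's second feature-selection method, keeping only the variables a decision tree uses when predicting the outcome, can be sketched as follows. Again, the data, variable count, and model settings are placeholders rather than anything from the paper.

```python
# Sketch of decision-tree feature selection: fit a tree, keep only variables
# it actually uses in splits (nonzero importance), then retrain the network
# on that reduced subset. All data and settings are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))    # 200 calls x 20 acoustic measures (fake)
y = rng.integers(0, 3, size=200)  # e.g., three age classes (placeholder)

tree = DecisionTreeClassifier(random_state=0).fit(X, y)

# A variable counts as "used" if it appears in at least one split,
# i.e., the tree assigns it nonzero feature importance.
selected = np.flatnonzero(tree.feature_importances_ > 0)

net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X[:, selected], y)

print(f"{len(selected)} of {X.shape[1]} variables kept;",
      "training accuracy:", net.score(X[:, selected], y))
```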
Robakis, E., Watsa, M. and Erkenswick, G., 2018. Classification of producer characteristics in primate long calls using neural networks. The Journal of the Acoustical Society of America, 144(1), pp. 344-353.
