The aim of this article is to classify children's affective states in a real-life, non-prototypical emotion recognition scenario. The framework is the same as that proposed in the Interspeech 2009 Emotion Challenge. We used a large set of acoustic features and five linguistic parameters based on the concept of emotional salience. Features were extracted from the spontaneous speech recordings of the FAU Aibo Corpus and their transcriptions. We used a wrapper method to reduce the acoustic feature set from 384 to 28 elements and feature-level fusion to merge it with the set of linguistic parameters. We studied three classification approaches: a Naïve-Bayes classifier, a support vector machine, and a logistic model tree. Results show that the linguistic features improve the performance of classifiers trained on the acoustic features alone. Additionally, merging the linguistic features with the reduced acoustic set is more effective than working with the full feature set. The best performance is achieved by the logistic model tree trained on the reduced acoustic set combined with the linguistic features, which improves on the result obtained with the full feature set by 4.15 % absolute (10.14 % relative) and on the Naïve-Bayes classifier by 9.91 % absolute (28.18 % relative). Under the same conditions proposed in the Emotion Challenge, this simple scheme slightly outperforms a much more complex system involving seven classifiers and a larger number of features.
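The two preprocessing steps described above, wrapper-based feature selection followed by feature-level fusion, can be sketched as follows. This is only an illustrative sketch, not the paper's actual pipeline: it uses scikit-learn's `SequentialFeatureSelector` as a stand-in wrapper method, a Gaussian Naïve-Bayes classifier as the evaluation model, and synthetic data in place of the 384 acoustic and 5 linguistic features extracted from the FAU Aibo Corpus.

```python
# Illustrative sketch (assumed implementation, synthetic data): a wrapper
# method selects a reduced acoustic subset by repeatedly training a
# classifier, then feature-level fusion concatenates that subset with the
# linguistic parameters before final classification.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.naive_bayes import GaussianNB

rng = np.random.RandomState(0)
# Synthetic stand-ins: 40 "acoustic" features, 5 "linguistic" features.
X_acoustic, y = make_classification(n_samples=300, n_features=40,
                                    n_informative=8, random_state=0)
X_linguistic = rng.normal(size=(300, 5))

# Wrapper method: greedy forward selection scored by classifier accuracy.
selector = SequentialFeatureSelector(GaussianNB(),
                                     n_features_to_select=10,
                                     direction="forward", cv=3)
X_reduced = selector.fit_transform(X_acoustic, y)

# Feature-level fusion: concatenate the reduced acoustic set with the
# linguistic parameters into a single feature matrix.
X_fused = np.hstack([X_reduced, X_linguistic])
print(X_fused.shape)  # (300, 15)
```

A wrapper method differs from a filter method in that feature subsets are scored by the downstream classifier itself rather than by a model-independent statistic, which is why it can be paired naturally with each of the three classifiers studied.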