Text-to-Speech (TTS) synthesis systems produce speech from an input text. Corpus-based or unit selection TTS (US-TTS) systems retrieve the best sequence of speech units from a large labelled speech database. To that end, unit selection is guided by dynamic programming and a weighted cost function. Several weight tuning approaches have been proposed to integrate human preferences into the unit selection process, but with little success beyond expert-based hand tuning. However, active interactive genetic algorithms (aiGAs) have shown promising results on the US-TTS weight tuning problem in previous works. aiGAs improve on classic interactive genetic algorithms (IGAs) by reducing fatigue, ambiguity and frustration in users' evaluations. This paper takes a step further in the application of aiGAs to this problem by defining new indicators of the perceptually based evolutionary process in order to obtain more reliable weights. The experiments have been conducted on a one-hour Spanish speech database using an acoustic plus linguistic cost function.
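The dynamic-programming search mentioned above can be sketched as a Viterbi pass over per-position candidate units, minimizing a weighted sum of target and concatenation costs. This is a minimal illustration, not the paper's actual cost function: the unit representation, cost functions and weights `w_target`/`w_concat` below are hypothetical placeholders for whatever acoustic and linguistic subcosts a real US-TTS system would use.

```python
def select_units(candidates, target_cost, concat_cost,
                 w_target=1.0, w_concat=1.0):
    """Viterbi-style unit selection: pick one unit per position so that
    w_target * sum(target costs) + w_concat * sum(join costs) is minimal.

    candidates  : list of lists, candidates[i] are the units for position i
    target_cost : unit -> float, how well a unit matches its target specs
    concat_cost : (prev_unit, unit) -> float, cost of joining two units
    """
    n = len(candidates)
    # best[i][j]: minimal cumulative cost ending at candidate j of position i
    best = [[w_target * target_cost(u) for u in candidates[0]]]
    back = [[-1] * len(candidates[0])]  # backpointers for path recovery

    for i in range(1, n):
        row, ptr = [], []
        for u in candidates[i]:
            # cost of arriving at u from every candidate of position i-1
            costs = [best[i - 1][k] + w_concat * concat_cost(prev, u)
                     for k, prev in enumerate(candidates[i - 1])]
            k_min = min(range(len(costs)), key=costs.__getitem__)
            row.append(costs[k_min] + w_target * target_cost(u))
            ptr.append(k_min)
        best.append(row)
        back.append(ptr)

    # backtrack from the cheapest final candidate
    j = min(range(len(best[-1])), key=best[-1].__getitem__)
    path = [j]
    for i in range(n - 1, 1 - 1, -1):
        j = back[i][j]
        path.append(j)
    path.reverse()
    return [candidates[i][path[i]] for i in range(n)]
```

For example, with units reduced to single numbers, a zero target cost and an absolute-difference join cost, the search picks the smoothest path through the candidate lattice; changing the cost functions (or the weights, which is precisely what the aiGA tunes) changes which path wins.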