This work introduces new results on early vocal development in infants and machines, studied through artificial intelligent agents. The problem is addressed from the perspective of intrinsically motivated learning algorithms for autonomous exploration. The agent autonomously selects goals to explore its own sensorimotor system in regions where a competence measure is maximized. Unlike previous experiments, we include a somatosensory model that provides proprioceptive feedback to reinforce learning. We argue that proprioceptive feedback drives the learning process more efficiently than algorithms that take only auditory feedback into account. Using the proprioceptive feedback to build a constraint model, which is unknown to the learner beforehand, makes the agent less prone to selecting goals that violate the system constraints, a problem observed in previous experiments.
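The idea described above can be sketched as a toy goal-selection loop. This is a minimal illustrative sketch, not the paper's actual algorithm: the class names, the simple per-region competence average, and the violation-rate constraint model are all assumptions introduced here to make the mechanism concrete.

```python
import random

class IntrinsicallyMotivatedAgent:
    """Toy sketch: the agent selects exploration goals in regions of high
    competence, while a constraint model learned from proprioceptive
    feedback discourages goals that violate (initially unknown) system
    constraints. Illustrative only; not the paper's implementation."""

    def __init__(self, n_regions, seed=0):
        self.rng = random.Random(seed)
        self.competence = [0.0] * n_regions  # running competence estimate per goal region
        self.violations = [0] * n_regions    # constraint violations signaled proprioceptively
        self.attempts = [0] * n_regions

    def violation_rate(self, region):
        # Learned constraint model: estimated probability that a goal in
        # this region violates a sensorimotor constraint.
        if self.attempts[region] == 0:
            return 0.0
        return self.violations[region] / self.attempts[region]

    def select_goal(self):
        # Prefer regions where competence is high and the predicted
        # chance of violating a constraint is low.
        scores = [self.competence[r] * (1.0 - self.violation_rate(r))
                  for r in range(len(self.competence))]
        best = max(scores)
        return self.rng.choice([r for r, s in enumerate(scores) if s == best])

    def update(self, region, competence, violated):
        # Auditory feedback updates the competence estimate; proprioceptive
        # feedback updates the constraint model.
        self.attempts[region] += 1
        self.competence[region] += 0.5 * (competence - self.competence[region])
        if violated:
            self.violations[region] += 1
```

For example, after one high-competence attempt that violated a constraint in region 0 and one moderate, violation-free attempt in region 1, the agent prefers region 1, since the constraint model suppresses the violating region.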