This study addresses the problem of building a synthesis tool controlled by "what the sound should evoke". As a first approach, we considered sounds evoking motion and addressed three main questions: What are the different categories of motion? What do sounds within a category have in common? How can a synthesis model that evokes specific motions be built? To define categories of motion, we gathered samples used by electro-acoustic music composers as a framework for their compositions. We then conducted a two-part categorization task. The first part was a free categorization task in which listeners were asked to focus their attention on the motions evoked by the sounds. The second part was a constrained categorization task in which subjects sorted sounds into predefined categories, each characterized by prototypical sounds identified in the free categorization test. To identify similarities between the sounds in a category, we computed several signal descriptors and selected the relevant ones with a feature selection method. Finally, building a synthesis tool implies a calibration step in which a range of values must be defined for each descriptor; the inverse problem must then be solved. These aspects are currently being investigated.
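The descriptor-computation and feature-selection step described above can be sketched as follows. The abstract does not name the descriptors or the selection method used, so this is a minimal illustrative example under assumed choices: three common descriptors (zero-crossing rate, RMS energy, spectral centroid) computed on a toy two-category dataset, ranked by a simple Fisher score (between-class variance over within-class variance). The descriptor set, the dataset, and the scoring criterion are all hypothetical stand-ins for the study's actual pipeline.

```python
import numpy as np

def descriptors(x, sr=8000):
    """Compute a few simple signal descriptors (illustrative choices)."""
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2        # zero-crossing rate
    rms = np.sqrt(np.mean(x ** 2))                        # RMS energy
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)  # spectral centroid
    return np.array([zcr, rms, centroid])

def fisher_scores(X, y):
    """Per-feature Fisher score: between-class over within-class variance."""
    mu = X.mean(axis=0)
    num = sum(np.sum(y == c) * (X[y == c].mean(axis=0) - mu) ** 2
              for c in np.unique(y))
    den = sum(np.sum(y == c) * X[y == c].var(axis=0)
              for c in np.unique(y)) + 1e-12
    return num / den

# Toy dataset: two hypothetical sound categories (low- vs high-pitched tones)
rng = np.random.default_rng(0)
sr, n = 8000, 2048
t = np.arange(n) / sr
X, y = [], []
for label, f0 in [(0, 200), (1, 1200)]:
    for _ in range(20):
        f = f0 * (1 + 0.1 * rng.standard_normal())        # pitch jitter
        x = np.sin(2 * np.pi * f * t) + 0.05 * rng.standard_normal(n)
        X.append(descriptors(x, sr))
        y.append(label)
X, y = np.array(X), np.array(y)

scores = fisher_scores(X, y)
best = int(np.argmax(scores))   # index of the most discriminative descriptor
```

On this toy data, the frequency-related descriptors (zero-crossing rate, spectral centroid) separate the two categories while RMS energy does not, so feature selection discards the energy descriptor; the real study would apply the same idea to the descriptors computed on the categorized corpus.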