In virtual auditory environments, sound generation is typically based on a two-stage approach: synthesizing a monophonic signal, implicitly
equivalent to a point source, and simulating the acoustic space. The directivity, spatial distribution, and position of the source can be
simulated through signal processing applied to the monophonic sound. A one-stage synthesis/spatialization approach, taking into account
both timbre and spatial attributes of the source as low-level parameters, would achieve better computational efficiency, which is essential
for real-time audio synthesis in interactive environments. Such an approach requires a careful examination of sound synthesis and
spatialization techniques to reveal how they can be combined. This paper concentrates on the sinusoidal sound model and 3D
positional audio rendering methods. We present a real-time algorithm that combines Inverse Fast Fourier Transform (FFT⁻¹) synthesis and
directional encoding to generate sounds whose sinusoidal components can be independently positioned in space. In addition to the
traditional frequency-amplitude-phase parameter set, the spatial positions of the partials are used to drive the synthesis engine. Audio rendering can be
achieved over a multispeaker setup or binaurally over headphones, depending on the available reproduction system.
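To make the extended parameter set concrete, the following minimal Python sketch (not the paper's implementation) shows each partial carrying an azimuth alongside the traditional frequency, amplitude, and phase, with per-channel gains applied before summation. A constant-power stereo pan law stands in for the general directional encoding stage, and the synthesis is done directly in the time domain rather than via FFT⁻¹; partial values and the pan law are illustrative assumptions.

```python
import numpy as np

# Hypothetical per-partial parameter set: the traditional
# frequency-amplitude-phase triplet extended with an azimuth.
partials = [
    # (freq_hz, amplitude, phase_rad, azimuth_rad)
    (440.0, 0.5, 0.0, -np.pi / 4),   # positioned to the left
    (660.0, 0.3, 0.0,  np.pi / 4),   # positioned to the right
]

sr = 44100
n = sr                      # one second of audio
t = np.arange(n) / sr
out = np.zeros((2, n))      # stereo output buffer

for f, a, phi, az in partials:
    s = a * np.sin(2 * np.pi * f * t + phi)
    # Constant-power stereo panning as a stand-in for the
    # directional encoding stage (binaural or multispeaker
    # gains would replace these two weights).
    theta = (az + np.pi / 2) / 2     # map [-pi/2, pi/2] to [0, pi/2]
    out[0] += np.cos(theta) * s      # left-channel gain
    out[1] += np.sin(theta) * s      # right-channel gain
```

In the algorithm presented here, the same per-partial weighting is performed in the spectral domain, so that all partials sharing a frame are synthesized with a single FFT⁻¹ per output channel.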