In this paper, we propose a new representation of human emotion obtained through the fusion of physiological signals. Using the variance of these signals, the proposed method amplifies the influence of signals that contribute to recognition accuracy while attenuating the influence of those that do not. To assess the power of this representation, we compare it against emotion recognition from the individual, non-fused physiological signals. Both the fused and non-fused signals are used to train feedforward neural networks to recognize a range of emotions. We show that the fused representation outperforms each individual signal across all emotions tested. We evaluate the proposed approach on two publicly available datasets, BP4D+ and DEAP, achieving state-of-the-art results on both. To the best of our knowledge, this is the first work to report emotion recognition results using physiological signals from all subjects of BP4D+.
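The variance-based weighting described above can be illustrated with a minimal sketch. This is a hypothetical interpretation, not the paper's exact formulation: each signal's contribution to the fused representation is scaled in proportion to its variance, so that more variable (and, under the paper's hypothesis, more informative) signals dominate. The function name and normalization scheme are assumptions for illustration.

```python
import numpy as np

def variance_weighted_fusion(signals):
    """Fuse multiple physiological signals into one representation.

    Hypothetical sketch of variance-based fusion: each signal is
    weighted by its variance (normalized so the weights sum to 1),
    so higher-variance signals contribute more to the fused output.
    `signals` has shape (n_signals, n_samples).
    """
    signals = np.asarray(signals, dtype=float)
    variances = signals.var(axis=1)       # per-signal variance
    weights = variances / variances.sum() # normalize weights to sum to 1
    fused = weights @ signals             # weighted combination of signals
    return fused, weights

# Toy example: three synthetic "signals" with increasing variability
rng = np.random.default_rng(0)
sigs = np.stack([rng.normal(0.0, s, 100) for s in (0.1, 1.0, 2.0)])
fused, w = variance_weighted_fusion(sigs)
```

In this sketch, the fused vector (rather than the raw per-signal features) would then serve as the input to the feedforward networks mentioned in the abstract.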