We have recently presented the 'fully' complex feed-forward neural network (FNN), which uses a subset of complex elementary transcendental functions (ETFs) as its nonlinear activation functions. In this paper, we show that fully complex FNNs can universally approximate any complex mapping to arbitrary accuracy on a compact set of input patterns with probability 1. The proof is extended to a new family of complex activation functions possessing essential singularities. We also discuss properties of the complex activation functions according to the types of their singularities, and the implications of these singularities for the efficiency and the domain of convergence in applications.
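As an illustration only (not part of the paper), the following minimal sketch shows what a single layer of a fully complex FNN might look like: complex-valued weights and biases, with the complex-analytic tanh, one of the ETFs (it has poles on the imaginary axis), as the activation. The layer sizes and random initialization are arbitrary assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical complex-valued weight matrix and bias for a 2 -> 3 layer
W = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
b = rng.standard_normal(3) + 1j * rng.standard_normal(3)

def fully_complex_layer(z):
    """Complex affine map followed by the complex tanh ETF.

    np.tanh evaluates the complex-analytic tanh when given complex input,
    so both magnitude and phase information propagate through the layer.
    """
    return np.tanh(W @ z + b)

# A complex input pattern
z = np.array([0.5 + 0.2j, -0.1 + 0.7j])
out = fully_complex_layer(z)
```

Deep networks of this form, built from such ETF activations, are the objects whose universal approximation property the paper establishes.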