The basis for any kind of AI development is a large dataset: the performance of an AI-based application depends on the data supplied to it.
ANN models, also known as learning models, are used for prediction purposes. They are often developed without much attention to the dataset size required to produce models with high accuracy and good generalization. The general belief is that a large dataset is needed to construct a predictive learning model; however, whether a dataset should be described as large is circumstance dependent, and what constitutes a big versus a small dataset remains somewhat vague.
In fact, the portion of data set aside for training must be a good representation of the entire set and sufficient to span the input space. It must also be authentic and relevant to yield good model performance.
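The partitioning requirement above can be sketched in code. The following is a minimal, illustrative example (not from the source): it shuffles a dataset, splits off a training fraction, and applies a crude coverage check verifying that the training partition contains the extremes of the input range. The function name, split fraction, and coverage criterion are all assumptions for illustration.

```python
import random

def representative_split(data, train_frac=0.8, seed=42):
    """Shuffle and split a dataset, then check that the training
    partition spans the full input range (a crude proxy for
    'good representation of the entire set')."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    train, test = shuffled[:cut], shuffled[cut:]
    # Coverage check: training set should contain the global min and max,
    # otherwise the model must extrapolate beyond its training range.
    covers = (min(train) == min(data)) and (max(train) == max(data))
    return train, test, covers

data = list(range(100))
train, test, covers = representative_split(data)
```

A range check like this is only a proxy; for multivariate inputs one would instead compare per-feature ranges or distributions between the training partition and the full set.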