Statistical inference often assumes that the sampled distribution is normal. Applying the normality assumption blindly, however, may compromise the accuracy of inference and estimation procedures; for this reason, many tests of normality have been proposed in the literature. This paper deals with testing normality when the data consist of a number of small independent samples such that within each small sample the observations are independent and identically distributed, while from sample to sample the parameters differ but the type of distribution is the same (we call this multi-sample data). In this case it is necessary to use test statistics that do not depend on the parameters. A natural way to exclude the nuisance location parameter is to replace the observations within each small group by differences. We obtain some estimates of the stability of such a decomposition and study and compare the power of eight selected normality tests for multi-sample data: the Pearson chi-square, Kolmogorov–Smirnov, Cramér–von Mises, Anderson–Darling, Shapiro–Wilk, Shapiro–Francia, Jarque–Bera, and adjusted Jarque–Bera tests. Power comparisons of these eight tests were obtained via Monte Carlo simulation of sample data generated from several alternative distributions.
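To illustrate the idea of removing the nuisance location parameter by within-group differencing, the following minimal Python sketch generates hypothetical multi-sample data (small normal groups sharing a common shape but having different means), forms differences of disjoint pairs inside each group, and applies one of the tests studied in the paper (Shapiro–Wilk) to the pooled differences. The pairing scheme and the group sizes are illustrative assumptions, not necessarily the decomposition used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical multi-sample data: many small groups with a common (normal)
# distribution type but a different location parameter in every group.
groups = [rng.normal(loc=mu, scale=1.0, size=4)
          for mu in rng.uniform(-5.0, 5.0, size=100)]

# Remove the nuisance location by differencing within each group.
# Here we pair disjoint observations (x2 - x1, x4 - x3), so the resulting
# differences are mutually independent and free of the group-specific mean.
# (One possible decomposition; the paper's exact construction may differ.)
diffs = np.concatenate([g[1::2] - g[0::2] for g in groups])

# Apply one of the eight studied normality tests to the pooled differences.
stat, p_value = stats.shapiro(diffs)
print(f"Shapiro–Wilk: W = {stat:.4f}, p = {p_value:.4f}")
```

A Monte Carlo power study along the lines described in the abstract would repeat this procedure many times with the group observations drawn from a chosen alternative distribution and record the proportion of rejections at a fixed significance level.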