Supervised machine learning is an increasingly common assistive framework for professional decision-making. Yet bias that causes unfair discrimination is often already present in the training datasets. This research proposes a method to reduce model unfairness during the training process without altering the sample values or the predicted labels. Using an objective function that identifies the biased feature via maximal correlation estimation, the method selects samples to train an updated classifier model. The quality of the sample selection determines the extent of the unfairness reduction. We demonstrate that, given an adequate sample size, the method reduces model unfairness without severely sacrificing classification accuracy. We evaluated our method on multiple benchmark datasets, using demographic parity and feature independence as the notions of a statistically fair classification model.
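To make the fairness criteria concrete, the sketch below is a minimal illustration rather than the paper's implementation: it measures a demographic-parity gap, scores how strongly each feature tracks a sensitive attribute (using plain Pearson correlation as a stand-in for maximal correlation estimation), and applies a placeholder quantile-based sample filter. The function names, the correlation proxy, and the selection rule are all assumptions made for illustration.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the two groups
    defined by a binary sensitive attribute (the demographic parity notion)."""
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

def most_biased_feature(X, sensitive):
    """Return the index of the feature most correlated with the sensitive
    attribute. Pearson correlation is used here only as a simple stand-in
    for the maximal correlation estimation described in the abstract."""
    scores = [abs(np.corrcoef(X[:, j], sensitive)[0, 1]) for j in range(X.shape[1])]
    return int(np.argmax(scores))

def select_training_samples(X, y, biased_col, keep_quantile=0.8):
    """Placeholder selection rule (not the paper's objective): keep samples
    whose value on the biased feature is least extreme, dropping the tail
    that most strongly encodes the sensitive attribute."""
    deviation = np.abs(X[:, biased_col] - X[:, biased_col].mean())
    mask = deviation <= np.quantile(deviation, keep_quantile)
    return X[mask], y[mask]
```

In this sketch, the retained subset returned by `select_training_samples` would be used to retrain the classifier, and the demographic-parity gap before and after retraining gives one way to quantify the unfairness reduction alongside classification accuracy.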