Federated learning is an emerging distributed machine learning framework that allows edge devices to collaboratively train a global model without uploading their local data to a central server, thereby protecting users’ data privacy. However, federated learning suffers from substantial heterogeneity across users, including differences in dataset size, model structure, and device quality. This heterogeneity requires more communication rounds to train a good model, which increases communication cost. To address this problem, we propose the Fed-BNGC algorithm. First, the algorithm performs an initial screening based on differences in users’ systems and data volumes: users with poor system evaluation values are excluded from the current training round. Second, a further screening is carried out based on differences in users’ model parameters: the user whose parameters deviate most from the others is identified and eliminated. Finally, the K-means++ clustering algorithm groups users with similar model parameters, and each cluster jointly trains a model better suited to its users. Compared with the FedAvg and FedProx algorithms, our method improves accuracy to varying degrees and greatly reduces the number of communication rounds.
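
The two-stage screening and clustering pipeline described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper’s actual implementation: the function name `fed_bngc_select`, the threshold-based system screening, the mean-pairwise-distance outlier rule, and the hand-rolled k-means++ routine are all assumptions made for illustration.

```python
import numpy as np

def fed_bngc_select(system_scores, params, score_threshold, k, seed=0):
    """Illustrative sketch of two-stage user screening plus k-means++ clustering.

    system_scores : per-user system evaluation values (higher is better); assumed form
    params        : (n_users, d) array of flattened model parameter vectors
    score_threshold, k, seed : illustrative hyperparameters
    Returns (kept_user_ids, cluster_labels for the kept users).
    """
    rng = np.random.default_rng(seed)
    users = np.arange(len(system_scores))

    # Screening 1: drop users whose system evaluation value is poor.
    keep = users[np.asarray(system_scores) >= score_threshold]

    # Screening 2: drop the user whose parameters differ most from the others,
    # measured here (as an assumption) by mean Euclidean distance to all peers.
    P = params[keep]
    dists = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    mean_dist = dists.sum(axis=1) / (len(keep) - 1)
    keep = np.delete(keep, np.argmax(mean_dist))
    P = params[keep]

    # Clustering: k-means++ seeding (new centers sampled proportionally to
    # squared distance from existing centers), then a few Lloyd iterations.
    centers = [P[rng.integers(len(P))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((P - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(P[rng.choice(len(P), p=d2 / d2.sum())])
    centers = np.array(centers, dtype=float)
    for _ in range(20):
        labels = np.argmin(np.linalg.norm(P[:, None] - centers[None], axis=-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = P[labels == j].mean(axis=0)
    return keep, labels
```

In a full system, each cluster would then run its own FedAvg-style aggregation over the users it contains, so every cluster converges to a model matched to its members’ data.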