Recommendation systems play a crucial role in web-based services, providing personalized suggestions by leveraging user behavior, preferences, and interests. Amid growing privacy concerns and the proliferation of handheld devices, federated recommender systems have emerged as a promising solution: each client trains a local model and exchanges only model updates with a central server, thereby preserving data privacy. However, certain use cases necessitate removing the contributions of specific clients from the trained model, a process known as "unlearning." Existing machine unlearning methods are designed for centralized settings and do not account for the collaborative nature of recommendation systems, overlooking their unique characteristics. This article proposes CFRU, a novel federated recommendation unlearning model that enables efficient and certified removal of target clients from the global model. Instead of retraining the model, our approach rolls back the training and eliminates the historical updates associated with the target client. To store the learning history efficiently, we propose sampling strategies that retain only the most significant historical updates. Furthermore, we analyze the bias introduced by removing the target client's updates at each training round and derive an estimate of it using the Lipschitz condition. Leveraging this estimate, we propose an efficient iterative scheme that accumulates the bias across all rounds, compensating the global model for the removed updates and recovering its utility without any post-training steps. Extensive experiments on two real-world datasets, under two poisoning attack scenarios, show that our unlearning technique achieves model quality 99.3% equivalent to retraining from scratch while running up to 1,000 times faster.
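
For intuition, the sketch below illustrates the rollback-and-compensate idea in NumPy. It is a minimal illustration under assumed details, not the paper's actual algorithm: the function names (`sample_significant`, `unlearn_rollback`), the norm-based sampling rule, the server learning rate `lr`, the aggregation weight `target_weight`, and the first-order recurrence `bias = (1 + lr*L)*bias + lr*w*delta` are all hypothetical stand-ins for the sampling strategies and Lipschitz-based bias accumulation described above.

```python
import numpy as np

def sample_significant(updates, keep_ratio=0.2):
    """Retain only the most significant recorded updates (largest L2 norm).

    A simple stand-in for the paper's sampling strategies that shrink
    the stored learning history.
    """
    norms = [np.linalg.norm(u) for u in updates]
    k = max(1, int(len(updates) * keep_ratio))
    keep = sorted(np.argsort(norms)[-k:])  # indices of the top-k rounds, in order
    return [updates[i] for i in keep]

def unlearn_rollback(final_model, target_updates, lr, target_weight, lipschitz_l):
    """Remove a target client's accumulated influence without retraining.

    final_model    : final global parameters w_T (flattened vector)
    target_updates : recorded (sampled) updates of the target client, one per round
    lr             : server learning rate used during aggregation (assumed)
    target_weight  : the target client's aggregation weight (assumed)
    lipschitz_l    : assumed Lipschitz constant of the clients' update maps
    """
    bias = np.zeros_like(final_model)
    for delta in target_updates:
        # Dropping the target's weighted update at this round shifts the model
        # by lr * target_weight * delta. By the Lipschitz condition, the other
        # clients' subsequent updates then deviate proportionally to ||bias||,
        # so the bias is amplified by (1 + lr * L) each round -- a first-order
        # version of the iterative accumulation described in the abstract.
        bias = (1.0 + lr * lipschitz_l) * bias + lr * target_weight * delta
    # Compensate: subtract the accumulated bias to approximate the model that
    # retraining without the target client would have produced.
    return final_model - bias

# Example usage with synthetic data.
rng = np.random.default_rng(0)
w_final = rng.normal(size=1000)
recorded = [rng.normal(scale=0.01, size=1000) for _ in range(100)]
w_unlearned = unlearn_rollback(
    w_final,
    sample_significant(recorded),
    lr=1.0,
    target_weight=0.1,
    lipschitz_l=0.05,
)
```

The point of the sketch is the cost profile: unlearning reduces to one pass over the (sampled) stored updates followed by a single vector subtraction, which is why the approach can be orders of magnitude faster than retraining from scratch.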