
		<paper>
			<loc>https://jjcit.org/paper/284</loc>
			<title>A SCALABLE FEDERATED DEEP REINFORCEMENT LEARNING ARCHITECTURE FOR COLLABORATIVE LEARNING</title>
			<doi>10.5455/jjcit.71-1767004702</doi>
			<authors>Tarek Amine Haddad</authors>
			<keywords>Federated learning,Deep reinforcement learning,Collaborative learning,Distributed intelligence,Scalability,Adaptive aggregation</keywords>
			<views>431</views>
			<downloads>103</downloads>
			<received_date>29-Dec.-2025</received_date>
			<revised_date>8-Feb.-2026</revised_date>
			<accepted_date>10-Feb.-2026</accepted_date>
			<abstract>Federated Learning enables collaborative model training without sharing raw data, while Deep Reinforcement 
Learning provides powerful mechanisms for sequential decision-making. However, their integration suffers from 
limited scalability, sensitivity to non-IID data, and unstable convergence in distributed environments. This paper 
proposes a Scalable Federated Deep Reinforcement Learning (SFDRL) architecture in which distributed agents 
learn local policies and periodically contribute to a global model through an adaptive, performance-aware aggregation 
strategy. Unlike conventional FedRL methods that rely on uniform averaging, SFDRL weights local updates 
according to their learning effectiveness, yielding faster convergence and improved stability under 
heterogeneous data distributions. In addition, a selective communication mechanism is introduced that reduces 
communication overhead by up to 28% and 64% compared with FedAvg and FedRL, respectively. Extensive 
experiments demonstrate that SFDRL outperforms the compared methods, achieving higher cumulative rewards, 
lower variance during training, and improved scalability in large-scale distributed settings. These results 
confirm the suitability of SFDRL for practical deployment in distributed intelligent systems.</abstract>
		</paper>


