DeRelayL: Sustainable Decentralized Relay Learning

Haihan Duan, Tengfei Ma, Yuyang Qin, Runhao Zeng, Wei Cai, Victor C.M. Leung, Xiping Hu*

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

Abstract

In the era of Big Data, large-scale machine learning models have revolutionized various fields, driving significant advancements. However, large-scale model training demands substantial financial and computational resources, affordable only to a few technology giants and well-funded institutions. Consequently, common users such as mobile users, the real creators of valuable data, are often excluded from fully benefiting due to these barriers, while current methods for accessing large-scale models either limit user ownership or lack sustainability. This growing gap highlights the urgent need for a collaborative model training approach that allows common users to train and share models. However, existing collaborative training paradigms, especially federated learning (FL), primarily focus on data privacy and group-based model aggregation rather than sustainable, open participation. To this end, this paper addresses this issue by proposing a novel training paradigm named decentralized relay learning (DeRelayL), a sustainable learning system in which permissionless participants can contribute to model training in a relay-like manner and share the resulting model.

Original language: English
Journal: IEEE Transactions on Mobile Computing
DOI
Publication status: Accepted/In press - 2025
