I am grateful to the National Science Foundation (NSF) and its division of Computer and Network Systems (CNS) for supporting my CAREER project which started in February 2022. This webpage is dedicated to documenting all details of this project.
NSF CNS-2146171
CAREER: From Federated to Fog Learning: Expanding the Frontier of Model Training in Heterogeneous Networks
Synopsis
Today’s networked systems are undergoing fundamental transformations as the number of Internet-connected devices continues to scale. Fueled by the volumes of data generated, there has been a parallel rise in the complexity of machine learning (ML) algorithms envisioned for edge intelligence, and a growing desire to provide this intelligence in near real-time. However, contemporary techniques for distributing ML model training encounter critical performance issues due to two salient properties of the wireless edge: (1) heterogeneity in device communication/computation resources and (2) statistical diversity across locally collected datasets. These properties are further exacerbated at the geographic scale of the Internet of Things (IoT), where the cloud may be coordinating millions of heterogeneous devices.

This project is establishing the foundation for fog learning, a new model training paradigm that orchestrates computing resources across the cloud-to-things continuum to elevate and optimize the fundamental model learning vs. resource efficiency tradeoff. The driving principle of fog learning is to intelligently distribute federated model aggregations throughout a multi-layer network hierarchy. The proliferation of device-to-device (D2D) communications in wireless protocols, including 5G-and-beyond, will act as a substrate for inexpensive local synchronization of datasets and model updates.
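To give a rough flavor of the multi-layer aggregation principle, the sketch below implements a two-level, FedAvg-style weighted average: device updates are first aggregated at their fog/edge node, and the edge averages are then aggregated at the cloud. This is only an illustrative toy (the function names, two-level topology, and sample-count weighting are assumptions for this example, not the project's actual algorithms):

```python
import numpy as np

def weighted_average(models, weights):
    """Aggregate model parameter vectors by a weighted (FedAvg-style) average."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * m for w, m in zip(weights, models))

def hierarchical_aggregate(clusters):
    """Two-level aggregation: devices -> edge node -> cloud.

    `clusters` maps each edge node to a list of (model_vector, num_samples)
    pairs from its devices. Returns the cloud-level global model.
    """
    edge_models, edge_sizes = [], []
    for device_updates in clusters.values():
        models = [m for m, _ in device_updates]
        sizes = [n for _, n in device_updates]
        # Fog-level aggregation over this edge node's devices
        edge_models.append(weighted_average(models, sizes))
        edge_sizes.append(sum(sizes))
    # Cloud-level aggregation over the edge averages
    return weighted_average(edge_models, edge_sizes)

# Example: two edge clusters with heterogeneous dataset sizes
clusters = {
    "edge_A": [(np.array([1.0, 0.0]), 10), (np.array([0.0, 1.0]), 30)],
    "edge_B": [(np.array([2.0, 2.0]), 60)],
}
global_model = hierarchical_aggregate(clusters)  # -> array([1.3, 1.5])
```

Because the aggregation is associative under sample-count weighting, the two-level result here equals a flat FedAvg over all devices; the benefit of the hierarchy lies in where the communication happens (local D2D/edge links instead of device-to-cloud links), which is exactly the resource-efficiency dimension the project targets.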
This project is leading to a concrete understanding of the fundamental relationships between contemporary fog network architectures and ML model training. The orchestration of device-level, fog-level, and cloud-level decision-making will expand the limits of distributed training performance under resource heterogeneity and statistical diversity. In addition to its focus on optimizing ML training through networks, this project is developing innovative ML techniques leveraging domain knowledge to support and enhance these optimizations at scale.
This project has both technical and educational broader impacts. The elevated model learning vs. resource efficiency tradeoff achieved will result in lower mobile energy consumption from emerging edge intelligence tasks and better quality of experience for end users. Further, the results of this project will motivate new research directions in (1) ML, based on heterogeneous system constraints, and (2) distributed computing for other applications. The educational broader impacts will promote the importance of data-driven optimization for/by network systems. They are being achieved through new undergraduate and graduate courses with personalized learning modules on specific topics.
Personnel
The following individuals are the current main personnel involved in the project:
- Christopher G. Brinton, Principal Investigator, Purdue University
- Henry (Su) Wang, Graduate Student, Purdue University
- Frank Lin, Graduate Student, Purdue University
- Rohit Parasnis, Postdoctoral Research Associate, Purdue University
Collaborators
I am thankful to have several collaborators on both the research and education development efforts:
- Seyyedali Hosseinalipour (Ali Alipour), University at Buffalo
- Andrew S. Lan, University of Massachusetts Amherst
- Taejoon Kim, University of Kansas
- Nicolo Michelusi, Arizona State University
- Rajeev Sahay, Saab, Inc.
Publications
The following is a list of publications produced since the start of the project, ordered chronologically:
- J. Kim, S. Hosseinalipour, A. Marcum, T. Kim, D. Love, C. Brinton. Learning-Based Adaptive IRS Control with Limited Feedback Codebooks. IEEE Transactions on Wireless Communications, 2022.
- D. Nguyen, S. Hosseinalipour, D. Love, P. Pathirana, C. Brinton. Latency Optimization for Blockchain Empowered Federated Learning in Multi-Server Edge Computing. IEEE Journal on Selected Areas in Communications, 2022.
- J. Kim, S. Hosseinalipour, T. Kim, D. Love, C. Brinton. Linear Coding for Gaussian Two-Way Channels. Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2022.
- Y. Chu, S. Hosseinalipour, E. Tenorio, L. Cruz, K. Douglas, A. Lan, C. Brinton. Mitigating Biases in Student Performance Prediction via Attention-Based Personalized Federated Learning. Conference on Information and Knowledge Management (CIKM), 2022.
- N. Yang, S. Wang, M. Chen, C. Brinton, C. Yin, W. Saad, S. Cui. Model-Based Reinforcement Learning for Quantized Federated Learning Performance Optimization. IEEE Global Communications Conference (Globecom), 2022.
- S. Wang, S. Hosseinalipour, M. Gorlatova, C. Brinton, M. Chiang. UAV-assisted Online Machine Learning over Multi-Tiered Networks: A Hierarchical Nested Personalized Federated Learning Approach. IEEE Transactions on Network and Service Management, 2022.
- D. Han, D. Kim, M. Choi, C. Brinton, J. Moon. SplitGP: Achieving Both Generalization and Personalization in Federated Learning. IEEE International Conference on Computer Communications (INFOCOM), 2023.
- S. Wang, R. Sahay, C. Brinton. How Potent are Evasion Attacks for Poisoning Federated Learning-Based Signal Classifiers? IEEE International Conference on Communications (ICC), 2023.
- Z. Zhou, S. Azam, C. Brinton, D. Inouye. Efficient Federated Domain Translation. International Conference on Learning Representations (ICLR), 2023.
- D. Kushwaha, S. Redhu, C. Brinton, R. Hedge. Optimal Device Selection in Federated Learning for Resource Constrained Edge Networks. IEEE Internet of Things Journal, 2023.
GitHub Code Repositories
The following is a list of GitHub repositories created based on the research efforts in this project:
- UAV-assisted Online Machine Learning over Multi-Tiered Networks
- Semi-Decentralized Fog Learning
- Physical-Layer Coding in Fog Learning
Educational Activities
The following educational activities have been undertaken as part of the integrated research/education plan of the project:
- I taught ECE 301: Signals and Systems at Purdue for the first time in fall 2022. In a few lectures, I talked about network data as signals and machine learning algorithms as systems processing this data. An undergraduate student from this class is now engaged in for-credit independent work on fog learning.
- I taught ECE 60022: Wireless Communication Networks at Purdue for the first time in spring 2022. In this course, I included modules on the fundamentals of data-driven and machine learning-driven services delivered over wireless networks.
- I have been co-leading an Engineering Projects in Community Service (EPICS) team at Purdue each semester since fall 2021. This team is focused on providing data science solutions to Native American tribes in the Dakotas.
- We are involved in a Purdue University-wide initiative on developing analytical tools for improving engagement in first-year online courses. We have tailored the federated learning solutions developed in this project to improve the quality of analytics provided to students from underrepresented groups.
Outreach Activities
We are hosting the Second International Workshop on Distributed Machine Learning and Fog Networks (FOGML) in conjunction with IEEE INFOCOM in May 2023.