Federated learning (FL), which allows machine learning models to be trained across decentralized devices or servers without exchanging raw data, holds the potential to enhance military applications such as dispersed sensor networks, autonomous swarms, and agile response teams. The objective of this project is to establish the theoretical and systems foundations for addressing security challenges in FL over resource-constrained and sporadically connected military networks, particularly Byzantine attacks mounted when FL is deployed in adversarial environments.
To mitigate Byzantine attacks in mission-critical FL systems, the project adopts a defense-in-depth strategy by constructing multiple layers of protection across different abstraction levels in the learning pipeline. The research activities span four synergistic components:
The first component takes a systems approach to deriving principled methods for assessing the trustworthiness of information from FL nodes, and seeks to create new security primitives that allow contributing nodes to demonstrate cryptographically that their submitted data complies with specified security policies.
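As a concrete, much-simplified illustration of such a primitive (the project targets richer constructions than this; the function names, the device_key provisioning, and the report format below are assumptions for illustration only), the following Python sketch binds a client's serialized update to a policy-compliance report with a keyed MAC:

import hashlib
import hmac
import json

def attest_update(update_bytes, policy_report, device_key):
    # Client side: bind the hash of the serialized update to a policy-compliance
    # report with a keyed MAC, showing the pair came from a key-holding device.
    message = hashlib.sha256(update_bytes).digest() + json.dumps(policy_report, sort_keys=True).encode()
    tag = hmac.new(device_key, message, hashlib.sha256).hexdigest()
    return {"report": policy_report, "tag": tag}

def verify_attestation(update_bytes, attestation, device_key):
    # Server side: recompute the MAC and compare in constant time, rejecting
    # contributions whose update or claimed report has been tampered with.
    message = hashlib.sha256(update_bytes).digest() + json.dumps(attestation["report"], sort_keys=True).encode()
    expected = hmac.new(device_key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["tag"])

A shared-key MAC conveys only the flavor of the idea; publicly verifiable signatures or succinct proofs of policy compliance would be needed for nodes to demonstrate compliance to parties that do not hold the device key.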
The second component takes a data-centric approach, focusing on the design of robust and efficient learning algorithms that minimize the impact of adversarial inputs, thereby ensuring reliable model performance even in the presence of Byzantine behavior.
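The specific aggregation rules are part of the research; as one hedged sketch of the kind of Byzantine-robust aggregation this component studies, the snippet below implements coordinate-wise trimmed-mean aggregation in NumPy (the function name and trim fraction are illustrative choices, not project specifications):

import numpy as np

def trimmed_mean_aggregate(updates, trim_fraction=0.2):
    # Discard the k largest and k smallest values in every coordinate before
    # averaging, bounding the influence any single Byzantine client can exert.
    stacked = np.stack(updates)                    # (num_clients, num_params)
    k = int(trim_fraction * stacked.shape[0])      # extremes dropped per side
    sorted_vals = np.sort(stacked, axis=0)         # sort each coordinate across clients
    trimmed = sorted_vals[k:stacked.shape[0] - k]  # keep only the middle values
    return trimmed.mean(axis=0)

# Example: eight client updates, of which a minority may be arbitrarily corrupted.
updates = [np.random.randn(10) for _ in range(8)]
global_update = trimmed_mean_aggregate(updates)

Coordinate-wise medians, Krum-style selection, and similar rules follow the same pattern of limiting how far outliers can pull the aggregate.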
The third component addresses the most challenging scenario, in which connectivity between participants and the FL server is unreliable or unavailable. It explores decentralized strategies that enable agents to make critical learning or inference decisions collectively under such constraints.
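One hedged sketch of such a decentralized strategy, assuming agents can occasionally exchange models with whichever neighbors are currently reachable, is a simple gossip-averaging round (the function name, topology, and data are illustrative):

import numpy as np

def gossip_round(models, neighbors):
    # Server-free aggregation: each agent replaces its model with the mean of
    # its own model and those of the neighbors it can currently reach.
    updated = []
    for i, model in enumerate(models):
        peers = [models[j] for j in neighbors[i]]
        updated.append(np.mean([model] + peers, axis=0))
    return updated

# Example: three agents on a line topology with intermittent links.
models = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
models = gossip_round(models, neighbors)

Repeated rounds drive the agents toward a common model without any central server; the challenge addressed by this component is to make such collective decisions robust when some neighbors are Byzantine and links appear and disappear.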
Finally, the fourth component conducts a thorough analysis of the system resource requirements and performance impacts associated with different defense strategies, and develops optimization frameworks that enable the adaptive and efficient integration of security mechanisms, tailored to the system’s dynamic network and computational conditions.
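As a toy stand-in for such an optimization framework (the actual formulation is to be developed in the project; the mechanism names, costs, and benefits below are hypothetical), a greedy knapsack-style selection illustrates choosing which defenses to enable under a resource budget:

def select_defenses(defenses, budget):
    # Greedy sketch: enable mechanisms in order of protection gained per unit of
    # resource cost until the current compute/communication budget is exhausted.
    chosen, remaining = [], budget
    for name, cost, benefit in sorted(defenses, key=lambda d: d[2] / d[1], reverse=True):
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen

# Hypothetical (name, cost, benefit) entries; costs in relative compute units.
defenses = [("attestation", 3.0, 5.0), ("robust_aggregation", 2.0, 4.0), ("anomaly_filter", 4.0, 3.0)]
print(select_defenses(defenses, budget=5.0))   # -> ['robust_aggregation', 'attestation']

In the envisioned framework, the budget and the cost/benefit estimates would themselves be updated as network and computational conditions change.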