Hierarchical Multi-Agent Reinforcement Learning for Dynamic Inventory Allocation with Demand Uncertainty

Authors

  • Yiming Zhao, Department of Industrial and Systems Engineering, North Carolina State University, USA
  • Christopher Hayes, Department of Industrial and Systems Engineering, North Carolina State University, USA

DOI:

https://doi.org/10.71465/mrcis153

Keywords:

hierarchical reinforcement learning, multi-agent systems, inventory allocation, demand uncertainty, supply chain management, decentralized control

Abstract

The complexity of modern supply chain networks requires sophisticated approaches to inventory management that can effectively handle demand uncertainty and coordinate decisions across multiple organizational levels. This paper proposes a novel hierarchical multi-agent reinforcement learning framework for dynamic inventory allocation in multi-echelon supply chains facing stochastic demand patterns. The hierarchical architecture decomposes the inventory control problem into strategic and operational decision layers: high-level agents coordinate allocation policies across distribution networks, while low-level agents optimize local replenishment decisions. The framework adopts the Centralized Training with Decentralized Execution (CTDE) paradigm, enabling autonomous agents to learn coordinated policies through shared experience while maintaining operational independence during deployment. Experimental results demonstrate that the proposed approach achieves significant reductions in total system costs compared to traditional base-stock policies and single-agent reinforcement learning methods, while effectively mitigating the bullwhip effect in supply chains with high demand variability.
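The two-layer decomposition the abstract describes can be illustrated with a toy simulation loop, in which a strategic agent splits scarce supply across retailers and operational agents absorb local demand. This is a minimal sketch, not the paper's implementation: the class names, cost weights, base-stock target, and allocation heuristic are all assumptions introduced for illustration.

```python
import random

random.seed(0)

class HighLevelAgent:
    """Strategic layer: splits a fixed supply across retailers each period."""
    def allocate(self, total_supply, inventories):
        # Heuristic stand-in for a learned policy: allocate in proportion
        # to each retailer's shortfall against an assumed target level.
        target = 20
        shortfalls = [max(target - inv, 0) for inv in inventories]
        total_short = sum(shortfalls) or 1
        return [total_supply * s // total_short for s in shortfalls]

class LowLevelAgent:
    """Operational layer: tracks one retailer's local inventory."""
    def __init__(self, start_stock=20):
        self.inventory = start_stock

    def step(self, shipment, demand):
        # Receive the strategic shipment, then serve stochastic demand.
        self.inventory += shipment
        sold = min(self.inventory, demand)
        self.inventory -= sold
        lost = demand - sold
        # Holding cost plus lost-sales penalty forms the local cost signal
        # (weights 0.5 and 2.0 are illustrative assumptions).
        return -(0.5 * self.inventory + 2.0 * lost)

# One episode: strategic allocation feeds operational replenishment.
high = HighLevelAgent()
lows = [LowLevelAgent() for _ in range(3)]
total_reward = 0.0
for t in range(10):
    demands = [random.randint(0, 15) for _ in lows]
    shipments = high.allocate(total_supply=30,
                              inventories=[a.inventory for a in lows])
    total_reward += sum(a.step(s, d)
                        for a, s, d in zip(lows, shipments, demands))
print(total_reward)
```

Under CTDE, the loop above would additionally log every agent's observations into a shared buffer for a centralized critic during training, while at execution time each `LowLevelAgent` acts only on its local inventory, as sketched here.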

Published

2025-12-05