[1] |
Pu, S., Olshevsky, A. and Paschalidis, I. (2020) Asymptotic Network Independence in Distributed Stochastic Optimization for Machine Learning: Examining Distributed and Centralized Stochastic Gradient Descent. IEEE Signal Processing Magazine, 37, 114-122. https://doi.org/10.1109/MSP.2020.2975212
[2] |
Nedic, A. (2020) Distributed Gradient Methods for Convex Machine Learning Problems in Networks: Distributed Optimization. IEEE Signal Processing Magazine, 37, 92-101. https://doi.org/10.1109/MSP.2020.2975210 |
[3] |
Wang, L., Lu, K. and Guan, Y. (2019) Multi-Agent Methods for Distributed Optimization. Control Theory & Applications, 36, 1820-1833. (In Chinese)
[4] |
Lü, Q., Liao, X., Li, H., et al. (2020) Achieving Acceleration for Distributed Economic Dispatch in Smart Grids over Directed Networks. IEEE Transactions on Network Science and Engineering, 7, 1988-1999. https://doi.org/10.1109/TNSE.2020.2965999
[5] |
Nedic, A. and Ozdaglar, A. (2009) Distributed Subgradient Methods for Multi-Agent Optimization. IEEE Transactions on Automatic Control, 54, 48-61. https://doi.org/10.1109/TAC.2008.2009515
[6] |
Nedic, A., Olshevsky, A. and Shi, W. (2017) Achieving Geometric Convergence for Distributed Optimization over Time-Varying Graphs. SIAM Journal on Optimization, 27, 2597-2633. https://doi.org/10.1137/16M1084316 |
[7] |
Gao, J., Liu, X., Dai, Y., et al. (2022) Achieving Geometric Convergence for Distributed Optimization with Barzilai-Borwein Step Sizes. Science China Information Sciences, 65, 149-204.
[8] |
Ghaderyan, D., Aybat, N., Aguiar, A., et al. (2023) A Fast Row-Stochastic Decentralized Method for Distributed Optimization over Directed Graphs. IEEE Transactions on Automatic Control, 69, 275-289. https://doi.org/10.1109/TAC.2023.3275927 |
[9] |
Lian, X., Zhang, C., Zhang, H., et al. (2017) Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent. Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, 4-9 December 2017, 5336-5346.
[10] |
Lee, S. and Zavlanos, M.M. (2017) Approximate Projection Methods for Decentralized Optimization with Functional Constraints. IEEE Transactions on Automatic Control, 63, 3248-3260. https://doi.org/10.1109/TAC.2017.2778696
[11] |
Qu, G. and Li, N. (2017) Harnessing Smoothness to Accelerate Distributed Optimization. IEEE Transactions on Control of Network Systems, 5, 1245-1260. https://doi.org/10.1109/TCNS.2017.2698261 |
[12] |
Di Lorenzo, P. and Scutari, G. (2016) Next: In-Network Nonconvex Optimization. IEEE Transactions on Signal and Information Processing over Networks, 2, 120-136. https://doi.org/10.1109/TSIPN.2016.2524588 |
[13] |
Pu, S. and Nedic, A. (2021) Distributed Stochastic Gradient Tracking Methods. Mathematical Programming, 187, 409-457. https://doi.org/10.1007/s10107-020-01487-0 |
[14] |
Defazio, A., Bach, F. and Lacoste-Julien, S. (2014) SAGA: A Fast Incremental Gradient Method with Support for Non-Strongly Convex Composite Objectives. Proceedings of the 27th International Conference on Neural Information Processing Systems, 1, 1646-1654. |
[15] |
Tan, C., Ma, S., Dai, Y., et al. (2016) Barzilai-Borwein Step Size for Stochastic Gradient Descent. Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, 5-10 December 2016, Barcelona. |
[16] |
Nguyen, L., Liu, J., Scheinberg, K., et al. (2017) SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient. International Conference on Machine Learning, 70, 2613-2621. |
[17] |
Xin, R., Khan, U. and Kar, S. (2020) Variance-Reduced Decentralized Stochastic Optimization with Accelerated Convergence. IEEE Transactions on Signal Processing, 68, 6255-6271. https://doi.org/10.1109/TSP.2020.3031071 |
[18] |
Kempe, D., Dobra, A. and Gehrke, J. (2003) Gossip-Based Computation of Aggregate Information. 44th Annual IEEE Symposium on Foundations of Computer Science, Cambridge, MA, 11-14 October 2003, 482-491.
[19] |
Nedic, A. and Olshevsky, A. (2016) Stochastic Gradient-Push for Strongly Convex Functions on Time-Varying Directed Graphs. IEEE Transactions on Automatic Control, 61, 3936-3947. https://doi.org/10.1109/TAC.2016.2529285 |
[20] |
Spiridonoff, A., Olshevsky, A. and Paschalidis, I.C. (2020) Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions. Journal of Machine Learning Research, 21, 58. |
[21] |
Qureshi, M.I., et al. (2020) S-ADDOPT: Decentralized Stochastic First-Order Optimization over Directed Graphs. IEEE Control Systems Letters, 5, 953-958. https://doi.org/10.1109/LCSYS.2020.3006420 |
[22] |
Xin, R., Sahu, A., Khan, U., et al. (2019) Distributed Stochastic Optimization with Gradient Tracking over Strongly-Connected Networks. 2019 IEEE 58th Conference on Decision and Control (CDC), Nice, 11-13 December 2019, 8353-8358. |
[23] |
Qureshi, M., Xin, R., Kar, S., et al. (2021) Push-SAGA: A Decentralized Stochastic Algorithm with Variance Reduction over Directed Graphs. IEEE Control Systems Letters, 6, 1202-1207. https://doi.org/10.1109/LCSYS.2021.3090652 |
[24] |
Qureshi, M., Xin, R., Kar, S., et al. (2022) Variance Reduced Stochastic Optimization over Directed Graphs with Row and Column Stochastic Weights. arXiv Preprint arXiv:2202.03346 |
[25] |
Qureshi, M. and Khan, U.A. (2022) Stochastic First-Order Methods over Distributed Data. 2022 IEEE 12th Sensor Array and Multichannel Signal Processing Workshop (SAM), Trondheim, 20-23 June 2022, 405-409. https://doi.org/10.1109/SAM53842.2022.9827792
[26] |
Hu, J., Chen, G., Li, H., et al. (2022) Push-LSVRG-UP: Distributed Stochastic Optimization over Unbalanced Directed Networks with Uncoordinated Triggered Probabilities. IEEE Transactions on Network Science and Engineering, 10, 934-950. https://doi.org/10.1109/TNSE.2022.3225229
[27] |
Kovalev, D., Horváth, S. and Richtárik, P. (2020) Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha Are Better Without The Outer Loop. Proceedings of the 31st International Conference on Algorithmic Learning Theory, 117, 451-467.
[28] |
Sun, B., Hu, J., Xia, D., et al. (2021) A Distributed Stochastic Optimization Algorithm with Gradient-Tracking and Distributed Heavy-Ball Acceleration. Frontiers of Information Technology & Electronic Engineering, 22, 1463-1476. https://doi.org/10.1631/FITEE.2000615
[29] |
Hu, J., Xia, D., Cheng, H., et al. (2022) A Decentralized Nesterov Gradient Method for Stochastic Optimization over Unbalanced Directed Networks. Asian Journal of Control, 24, 576-593. https://doi.org/10.1002/asjc.2483 |