
Forums & Lectures

[Departmental Colloquium] 2025, No. 14 || An Augmented Lagrangian Method for Training Recurrent Neural Networks

Title: An Augmented Lagrangian Method for Training Recurrent Neural Networks

Speaker: Prof. Chao Zhang (张超), Beijing Jiaotong University

Time: 4:00–5:00 pm, Tuesday, June 3

Venue: Room A304, Science Building (理科楼)

Abstract: Recurrent Neural Networks (RNNs) are widely used to model sequential data in areas such as natural language processing, speech recognition, machine translation, and time series analysis. In this paper, we model the training of RNNs with the ReLU activation function as a constrained optimization problem with a smooth nonconvex objective function and piecewise smooth nonconvex constraints. We prove that any feasible point of the optimization problem satisfies the no nonzero abnormal multiplier constraint qualification (NNAMCQ), and that any local minimizer is a Karush-Kuhn-Tucker (KKT) point of the problem. Moreover, we propose an augmented Lagrangian method (ALM) and design an efficient block coordinate descent (BCD) method to solve the subproblems of the ALM. The update of each block of the BCD method has a closed-form solution. The stopping criterion for the inner loop is easy to check, and the loop terminates in finitely many steps. Moreover, we show that the BCD method generates a directional stationary point of the subproblem. Furthermore, we establish the global convergence of the ALM to a KKT point of the constrained optimization problem. Numerical results demonstrate the efficiency and effectiveness of the ALM for training RNNs compared with state-of-the-art algorithms. (Joint work with Yue Wang and Xiaojun Chen.)
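The outer/inner structure described in the abstract — an outer multiplier update wrapped around an inner BCD loop whose block updates are closed-form and whose stopping criterion is cheap to check — can be illustrated on a toy equality-constrained quadratic. This is only a sketch of the algorithmic pattern, not the paper's RNN formulation; the objective, constraint, and all parameter values below are illustrative assumptions.

```python
# A minimal sketch of an augmented Lagrangian method (ALM) whose
# subproblems are solved by block coordinate descent (BCD) with
# closed-form per-block updates. Toy problem (assumed for illustration):
#   minimize (x-1)^2 + (y-2)^2  subject to  x + y = 1,
# whose KKT point is (x, y, lam) = (0, 1, 2).

def alm_bcd(rho=10.0, outer_iters=50, inner_iters=100, tol=1e-10):
    x, y, lam = 0.0, 0.0, 0.0
    for _ in range(outer_iters):
        # Inner loop: BCD on the augmented Lagrangian
        #   L = (x-1)^2 + (y-2)^2 + lam*(x+y-1) + (rho/2)*(x+y-1)^2.
        # Each block minimization is a scalar quadratic, hence closed-form.
        for _ in range(inner_iters):
            x_new = (2.0 - lam + rho * (1.0 - y)) / (2.0 + rho)      # argmin over x
            y_new = (4.0 - lam + rho * (1.0 - x_new)) / (2.0 + rho)  # argmin over y
            done = abs(x_new - x) + abs(y_new - y) < tol  # cheap stop criterion
            x, y = x_new, y_new
            if done:
                break
        lam += rho * (x + y - 1.0)  # multiplier update on the constraint residual
    return x, y, lam


if __name__ == "__main__":
    x, y, lam = alm_bcd()
    print(f"x = {x:.6f}, y = {y:.6f}, lambda = {lam:.6f}")
```

With these settings the iterates converge to the constrained minimizer (0, 1) with multiplier 2; the same two-level pattern scales to the block structure of RNN training, where the blocks are the weight matrices and auxiliary state variables.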

Host: Liping Zhang (张立平)