On Convergence of Iterative Thresholding Algorithms to Global Solution for Nonconvex Sparse Optimization

2023.04.21

Submitted by: Huiying Gong

Event Information

Title: On Convergence of Iterative Thresholding Algorithms to Global Solution for Nonconvex Sparse Optimization

Speaker: Prof. Yaohua Hu (Shenzhen University)

Time: 13:00, Friday, April 21, 2023

Place: Room F309, Main Campus

Inviter: Prof. Changjun Yu

Host: Department of Mathematics, College of Sciences

Abstract:

Sparse optimization is a popular research topic in applied mathematics and optimization. Nonconvex sparse regularization problems have been studied extensively because they ameliorate the statistical bias and provide robust sparsity promotion in a wide range of applications. However, owing to the nonconvex and nonsmooth structure of these problems, the convergence theory of their optimization algorithms remains far from complete: the literature only establishes convergence to a stationary point, and there is still no theoretical guarantee of convergence to a global minimum or to a true sparse solution.
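For reference, the nonconvex regularization problems referred to above are typically stated as a regularized least-squares problem of the following generic form (a standard formulation in this literature; the exact model treated in the talk may differ):

\[
\min_{x \in \mathbb{R}^n} \; \frac{1}{2}\|Ax - b\|_2^2 + \lambda \sum_{i=1}^{n} \phi(|x_i|),
\]

where $A \in \mathbb{R}^{m \times n}$ with $m < n$ describes an under-determined linear system, $b$ is the (possibly noisy) observation, $\lambda > 0$ is a regularization parameter, and $\phi$ is a nonconvex penalty such as SCAD, MCP, or the $\ell_p$ penalty $\phi(t) = t^p$ with $0 < p < 1$.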

This talk aims to find an approximate global solution or a true sparse solution of an under-determined linear system. To this end, we propose two types of iterative thresholding algorithms, one based on a continuation technique and the other on a truncation technique. We introduce the notion of a limited shrinkage thresholding operator and combine it with the restricted isometry property to show that the proposed algorithms converge to an approximate global solution or a true sparse solution, up to a tolerance determined by the noise level and the limited shrinkage magnitude. Applying these results to nonconvex regularization problems with the SCAD, MCP, and Lp penalties, and utilizing the recovery bound theory, we establish convergence of the corresponding proximal gradient algorithms to an approximate global solution of the nonconvex regularization problem.
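To give a concrete flavor of this family of methods, the following is a minimal Python sketch of a basic iterative hard-thresholding scheme for a noisy under-determined system y = Ax + e. It only illustrates the generic "gradient step, then threshold" pattern; the algorithms proposed in the talk additionally employ continuation and truncation techniques and a more general limited shrinkage thresholding operator, and the function and parameter names below are hypothetical.

```python
import numpy as np

def iterative_hard_thresholding(A, y, s, step=None, n_iter=200):
    """Textbook iterative hard-thresholding (IHT) sketch, for illustration only."""
    m, n = A.shape
    if step is None:
        # A simple safe step size: 1 / ||A||_2^2 (squared spectral norm).
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(n_iter):
        # Gradient step on the least-squares term 0.5 * ||A x - y||_2^2.
        x = x + step * A.T @ (y - A @ x)
        # Hard thresholding: keep only the s largest entries in magnitude.
        keep = np.argsort(np.abs(x))[-s:]
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0
    return x

# Usage on a synthetic under-determined system with a random (RIP-friendly) matrix.
rng = np.random.default_rng(0)
m, n, s = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy measurements
x_hat = iterative_hard_thresholding(A, y, s)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

In this sketch, hard thresholding plays the role of the thresholding operator; replacing it with a proximal operator of SCAD, MCP, or an Lp penalty yields the proximal gradient algorithms mentioned in the abstract.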