Scalable Parallel Static Learning
No.: 45
Access: attendees only
Updated: 2021-08-19 20:58:17
Views: 552
Oral presentation
Abstract
Static learning is an algorithm for discovering additional implicit implications between gates in a netlist. In automatic test pattern generation (ATPG), the learned implications help recognize conflicts and redundancies early, and thus greatly improve ATPG performance. Although ATPG can benefit further from multiple runs of incremental or dynamic learning, this is feasible only when the learning process is fast enough. In this paper, we study speeding up static learning through parallelization on a heterogeneous computing platform that includes multi-core microprocessors (CPUs) and graphics processing units (GPUs). We discuss the advantages and limitations of each architecture. With their specific features in mind, we propose two different parallelization strategies tailored to multi-core CPUs and GPUs, respectively. The speedup and performance scalability of the two proposed parallel algorithms are analyzed. To the best of our knowledge, this is the first study of parallel static learning in the literature.
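To make the core idea concrete, the following is a minimal sketch of sequential static learning on a toy two-gate netlist. All names (`NETLIST`, `imply`, `static_learning`) and the data layout are illustrative assumptions, not the paper's implementation: for each signal/value assignment, direct implications are propagated to a fixed point, and the contrapositive of every resulting implication is recorded as a learned implication.

```python
# Toy netlist: gate name -> (type, input names); signals without an
# entry (a, b, c) are primary inputs. Structure is an assumption for
# illustration only.
NETLIST = {
    "n1": ("AND", ["a", "b"]),
    "n2": ("OR",  ["n1", "c"]),
}

def imply(assignments):
    """Propagate direct (forward) implications to a fixed point.
    Returns all implied signal values, or None on a conflict."""
    values = dict(assignments)
    changed = True
    while changed:
        changed = False
        for gate, (gtype, inputs) in NETLIST.items():
            ins = [values.get(i) for i in inputs]
            out = None
            if gtype == "AND":
                if 0 in ins:
                    out = 0
                elif all(v == 1 for v in ins):
                    out = 1
            elif gtype == "OR":
                if 1 in ins:
                    out = 1
                elif all(v == 0 for v in ins):
                    out = 0
            if out is not None:
                if values.get(gate, out) != out:
                    return None          # contradictory requirement
                if gate not in values:
                    values[gate] = out
                    changed = True
    return values

def static_learning():
    """Try every signal/value pair; for each implication (s=v) => (t=w)
    found, learn the contrapositive (t=1-w) => (s=1-v)."""
    signals = set(NETLIST) | {i for _, ins in NETLIST.values() for i in ins}
    learned = set()
    for s in signals:
        for v in (0, 1):
            implied = imply({s: v})
            if implied is None:
                continue
            for t, w in implied.items():
                if t != s:
                    learned.add(((t, 1 - w), (s, 1 - v)))
    return learned
```

For example, assigning a=0 forces n1=0, so the contrapositive n1=1 ⇒ a=1 is learned; forward implication alone cannot derive it. The parallelization studied in the paper distributes exactly this per-assignment work, since each signal/value trial is independent.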
Speaker: Xiaoze Lin
Short bio: Xiaoze Lin received the B.S. degree in Communication Engineering from Shantou University, Shantou, China, in 2019. He is currently working toward the M.S. degree in Electronics and Communications Engineering with the Department of Electronics, Shantou University, Shantou, China. His current research interests include very-large-scale integration design and test, and fault-tolerant computing.
Keywords
static learning; parallel acceleration; GPU; multi-core CPU