    Wu Huanhuan, Xie Ruilin, Qiao Yuanxin, Chen Xiang, Cui Zhanqi. Optimizing Deep Neural Network Based on Interpretability Analysis[J]. Journal of Computer Research and Development, 2024, 61(1): 209-220. DOI: 10.7544/issn1000-1239.202220803


    Optimizing Deep Neural Network Based on Interpretability Analysis


      Abstract: In recent years, deep neural networks (DNNs) have been widely used in many fields, even replacing humans in decision-making in safety-critical systems such as autonomous driving and smart healthcare, which places higher demands on DNN reliability. Because of the complex multi-layer nonlinear structure of a DNN, its internal prediction mechanism is difficult to understand and the model is hard to debug. Existing DNN debugging work mainly optimizes a DNN by adjusting its parameters or augmenting the training set to improve performance. However, directly adjusting parameters makes the extent of modification hard to control and may even cause the model to lose its ability to fit the training data, while unguided augmentation of the training set dramatically increases training cost. To address this problem, a DNN optimization method named OptDIA (optimizing DNN based on interpretability analysis) is proposed. Interpretability analysis is conducted on the training process and the decision-making behavior of the DNN. According to the analysis results, the original training data is split into partitions whose influence on the DNN's decisions differs, and each partition is transformed with a different probability to generate new training data. The DNN is then retrained on this data to improve its performance. Experiments on nine DNN models trained on three datasets show that OptDIA can improve the accuracy of DNNs by 0.39 to 2.15 percentage points and their F1-score by 0.11 to 2.03 percentage points.
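    The core augmentation step described above, i.e. transforming regions of a training sample with probabilities that depend on their influence on the model's decision, can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the function `augment_by_saliency`, the median-threshold partitioning, the probability values, and the Gaussian perturbation are all assumptions; the abstract only specifies that partitions with different influence are transformed with different probabilities.

    ```python
    import numpy as np

    def augment_by_saliency(image, saliency, p_important=0.2, p_background=0.8, rng=None):
        """Saliency-guided augmentation sketch.

        image:    array of pixel values in [0, 1]
        saliency: same-shaped array from an interpretability method
                  (higher = more influence on the model's decision)
        Regions with low influence are perturbed more often than
        decision-relevant regions, yielding a new training sample.
        """
        if rng is None:
            rng = np.random.default_rng()
        # Partition pixels into decision-relevant and background regions.
        thresh = np.median(saliency)
        important = saliency >= thresh
        out = image.astype(float).copy()
        # Transform each partition with its own probability.
        for mask, p in ((important, p_important), (~important, p_background)):
            if rng.random() < p:
                noise = rng.normal(0.0, 0.05, size=image.shape)
                out[mask] += noise[mask]
        return np.clip(out, 0.0, 1.0)
    ```

    A real pipeline would obtain the saliency map from an attribution method such as Grad-CAM and use richer transformations (rotation, brightness, cropping); the point of the sketch is only the per-partition transformation probability.
    
    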
