1.

Article
Jansen, Boris ; Nakayama, Kenji
Publication info: IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E89-A, pp. 2140-2148, 2006-08-01.
URL: http://hdl.handle.net/2297/5647
Abstract: Kanazawa University, Graduate School of Natural Science and Technology, Information Systems
Over the years, many improvements and refinements to the backpropagation learning algorithm have been reported. In this paper, a new adaptive penalty-based learning extension for the backpropagation learning algorithm and its variants is proposed. Instead of focusing mainly on minimizing the difference between the target and actual output values, the new method initially puts pressure on artificial neural networks to get all outputs for all training patterns into the correct half of the output range. The upper bound of the penalty values is also controlled. The technique is easy to implement and computationally inexpensive. In this study, the new approach is applied to the backpropagation learning algorithm as well as the RPROP learning algorithm. The superiority of the newly proposed method is demonstrated through many simulations. By applying the extension, the percentage of successful runs can be greatly increased and the average number of epochs to convergence can be substantially reduced on various problem instances. The behavior of the penalty values during training is also analyzed and their active role within the learning process is confirmed. Copyright © 2006 The Institute of Electronics, Information and Communication Engineers.
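The exact penalty function is not given in this listing, but the idea described in the abstract can be sketched. This is a minimal illustration, assuming sigmoid-style outputs in [0, 1] with a 0.5 midpoint, binary targets, and hypothetical `penalty_weight` and `penalty_cap` parameters (the paper's actual formulation and adaptation rule may differ):

```python
import numpy as np

def penalized_error(outputs, targets, penalty_weight=1.0, penalty_cap=0.5):
    """Squared error plus a bounded penalty for outputs that lie in the
    wrong half of the [0, 1] output range for their binary target."""
    outputs = np.asarray(outputs, dtype=float)
    targets = np.asarray(targets, dtype=float)
    base = 0.5 * (targets - outputs) ** 2
    # An output is "in the wrong half" when it sits on the opposite
    # side of the 0.5 midpoint from its target.
    wrong_half = (targets >= 0.5) != (outputs >= 0.5)
    penalty = np.where(wrong_half, np.abs(outputs - 0.5), 0.0)
    # Bound the penalty values from above, as the abstract describes.
    penalty = np.minimum(penalty_weight * penalty, penalty_cap)
    return base + penalty
```

An output of 0.9 for a target of 1 incurs only the small base error, while an output of 0.1 for the same target additionally pays the (capped) wrong-half penalty, so gradient-based training is pushed first toward classifying every pattern on the correct side of the midpoint.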
2.

Article
Jansen, Boris ; Nakayama, Kenji
Publication info: IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E89-A, pp. 2140-2148, 2006-08-01. Oxford University Press / The Institute of Electronics, Information and Communication Engineers (IEICE)
URL: http://hdl.handle.net/2297/18073
Abstract: Kanazawa University, Institute of Science and Engineering, Faculty of Electrical and Computer Engineering
Over the years, many improvements and refinements to the backpropagation learning algorithm have been reported. In this paper, a new adaptive penalty-based learning extension for the backpropagation learning algorithm and its variants is proposed. Instead of focusing mainly on minimizing the difference between the target and actual output values, the new method initially puts pressure on artificial neural networks to get all outputs for all training patterns into the correct half of the output range. The upper bound of the penalty values is also controlled. The technique is easy to implement and computationally inexpensive. In this study, the new approach is applied to the backpropagation learning algorithm as well as the RPROP learning algorithm. The superiority of the newly proposed method is demonstrated through many simulations. By applying the extension, the percentage of successful runs can be greatly increased and the average number of epochs to convergence can be substantially reduced on various problem instances. The behavior of the penalty values during training is also analyzed and their active role within the learning process is confirmed. Copyright © 2006 The Institute of Electronics, Information and Communication Engineers.