1.

Paper
Miyoshi, Seiji ; Nakayama, Kenji
Publication info: IEEE International Conference on Neural Networks - Conference Proceedings, vol. 3, pp. 1913-1918, 1997-06-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6848
Abstract: In this paper, the geometric learning algorithm (GLA) is proposed for an elementary perceptron with a single output neuron. The GLA is a modified version of the affine projection algorithm (APA) for adaptive filters. The weight update vector is determined geometrically, towards the intersection of the k hyperplanes perpendicular to the patterns to be classified, where k is the order of the GLA. In the APA, the target of the coefficient update is a single point, corresponding to the best identification of the unknown system; in the GLA, by contrast, the target of the weight update is an area in which all the given patterns are classified correctly. Their convergence conditions therefore differ. In this paper, the convergence condition of the 1st-order GLA for two patterns is derived theoretically, and a new concept, the 'angle of the solution area', is introduced. Computer simulation results suggest that this new concept is a good estimate of the convergence properties.
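The update described here is geometric: for order k = 1, the weight vector moves towards the hyperplane perpendicular to the current pattern, much like an NLMS/APA step. A minimal Python sketch under that reading, with a hypothetical margin parameter delta and step size mu (neither taken from the paper):

    import numpy as np

    def gla_first_order(patterns, labels, delta=1.0, mu=1.0, epochs=100):
        """1st-order GLA sketch: relaxed projection of w towards the
        hyperplane w . x = delta * y perpendicular to pattern x."""
        w = np.zeros(patterns.shape[1])
        for _ in range(epochs):
            converged = True
            for x, y in zip(patterns, labels):   # labels y in {-1, +1}
                if y * np.dot(w, x) < delta:     # not yet classified with margin
                    w += mu * (delta * y - np.dot(w, x)) / np.dot(x, x) * x
                    converged = False
            if converged:
                break
        return w

A higher-order GLA would instead aim at the intersection of the k hyperplanes defined by the k most recent patterns, in direct analogy with the APA.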
2.

Paper
Xu, Q. ; Nakayama, Kenji
Publication info: IEEE International Conference on Neural Networks - Conference Proceedings, vol. 3, pp. 1954-1959, 1997-06-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6813
Abstract: This paper investigates some possible problems of the Cascade Correlation algorithm, one of which is a zigzag output mapping caused by ill-growth of the weights of the added hidden unit. This can clearly degrade generalization, especially for regression problems. To solve this problem, we combine the Cascade Correlation algorithm with regularization theory. In addition, some new regularization terms are proposed in light of the special cascade structure. Simulation has shown that regularization indeed smooths the zigzag output, so that generalization is improved, especially for function approximation.
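Cascade Correlation trains each candidate hidden unit to maximize the correlation between its output and the residual network error; the proposal is to regularize that objective. A rough sketch with a plain L2 penalty standing in for the paper's cascade-specific terms (which the abstract does not spell out):

    import numpy as np

    def candidate_objective(w, X, residuals, lam=1e-3):
        """Regularized Cascade-Correlation candidate score (sketch).
        X: (n_samples, n_inputs) inputs reaching the candidate unit,
        residuals: (n_samples, n_outputs) current network error."""
        v = np.tanh(X @ w)                         # candidate unit output
        S = np.abs((v - v.mean()) @ (residuals - residuals.mean(axis=0))).sum()
        return S - lam * np.dot(w, w)              # penalty discourages weight ill-growth

Maximizing this score by gradient ascent keeps the candidate's weights from growing without bound, which is what smooths the zigzag output mapping.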
3.

Paper
Tokui, N. ; Nakayama, Kenji ; Hirano, Akihiro
Publication info: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, vol. 6, pp. 349-352, 2003-01-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6812
Abstract: To achieve fast convergence with little computation in adaptive filters, a joint method combining a whitening process with the NLMS algorithm is a promising approach. However, the filter coefficient updates are not synchronized with the reflection coefficient updates, which results in unstable behavior. We analyzed the effects of this asynchrony and proposed the Synchronized Learning Algorithm to solve the problem: the asynchronous error between the two updates is removed, and fast convergence and a small residual error are obtained. This algorithm, however, requires O(ML) computations, where M is the adaptive filter length and L is the lattice predictor length, which is still large compared with the NLMS algorithm. To reduce computation while maintaining fast convergence, a block implementation method is proposed: the reflection coefficients are updated at a fixed period and held constant during each interval. The proposed block implementation can be effectively applied to parallel-form adaptive filters, such as sub-band adaptive filters. Simulation using a speech signal shows that the learning curve of the proposed block implementation is slightly slower than that of our original algorithm, but the computational complexity is reduced.
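The block idea is a schedule: the NLMS part adapts every sample, while the lattice reflection coefficients are refreshed only once per block of B samples and held fixed in between. A self-contained sketch with a single lattice stage (the paper uses an L-stage predictor; B, beta, and the one-stage simplification are my choices):

    import numpy as np

    def block_whitened_nlms(x, d, M=32, B=64, mu=0.5, beta=0.99, eps=1e-8):
        """Whitening + NLMS with block-updated reflection coefficient (sketch)."""
        w = np.zeros(M)               # adaptive filter coefficients
        k = 0.0                       # reflection coefficient, one lattice stage
        num = den = 0.0               # running correlation estimates for k
        f = np.zeros(len(x))          # whitened (forward prediction error) signal
        e = np.zeros(len(x))
        for n in range(1, len(x)):
            num = beta * num + x[n] * x[n - 1]      # statistics gathered every sample
            den = beta * den + x[n - 1] ** 2
            if n % B == 0:                          # ...but k refreshed per block only
                k = num / (den + eps)
            f[n] = x[n] - k * x[n - 1]              # whitening with the frozen k
            u = f[max(0, n - M + 1):n + 1][::-1]    # most recent whitened samples
            u = np.pad(u, (0, M - len(u)))
            e[n] = d[n] - np.dot(w, u)
            w += mu * e[n] * u / (np.dot(u, u) + eps)   # per-sample NLMS update
        return w, e

Between block boundaries the whitening stage costs no adaptation work, which is where the computational saving relative to the fully synchronized O(ML) algorithm comes from.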
4.

Paper
Nakayama, Kenji ; Hirano, Akihiro ; Fusakawa, M.
Publication info: Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 1704-1709, 2001-07-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6829
Abstract: In multilayer neural networks, network size reduction and fast convergence are important. For this purpose, trainable activation functions and nonlinear synapses have been proposed. When high-order polynomials are used for the nonlinearity, the number of terms in the polynomial becomes very large for a high-dimensional input, causing very complicated networks and slow convergence. In this paper, a method to select the useful terms of the polynomial during the learning process is proposed. The method is based on the genetic algorithm (GA) and incorporates internal information, the magnitude of the connection weights, to select the genes for the next generation. A mechanism for pruning terms is inherently included. Many examples demonstrate the usefulness of the proposed method compared with the ordinary GA method: convergence is stable and the number of selected terms is greatly reduced.
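The selection mechanism couples the GA with internal network information: each gene switches one polynomial term on or off, and terms whose trained connection weights stay small are pruned from the chromosome. A toy sketch of that step, with the threshold and mutation probability chosen arbitrarily for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    def next_generation(genes, weights, prune_thresh=0.05, mut_p=0.01):
        """One GA step (sketch). genes: binary 0/1 int mask over polynomial
        terms, weights: trained connection weights attached to those terms."""
        child = genes.copy()
        child[np.abs(weights) < prune_thresh] = 0   # weight-magnitude pruning
        flip = rng.random(child.shape) < mut_p      # ordinary GA mutation
        child[flip] ^= 1
        return child

The pruning line is what distinguishes this from the ordinary GA: small-weight terms are deterministically switched off instead of surviving by chance.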
5.

Paper
Hara, Kazuyuki ; Nakayama, Kenji
Publication info: Proceedings of the International Joint Conference on Neural Networks, pp. III-543-III-548, 2000-07-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6823
Abstract: A training data selection method for multi-class data is proposed. The method can be used with multilayer neural networks (MLNNs), which can be applied to pattern classification, signal processing, and other problems that can be cast as classification. The proposed algorithm selects the data that are important for achieving good classification performance. However, training on only the selected data converges slowly, so we also propose an acceleration method that adds randomly selected data to the boundary data. The validity of the proposed methods is confirmed through computer simulation.
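One plausible reading of the selection rule is to keep the samples closest to the current decision boundary (smallest output margin) and then mix in randomly chosen samples to accelerate convergence. A sketch under that assumption; the margin criterion is my interpretation, not a detail given in the abstract:

    import numpy as np

    rng = np.random.default_rng(0)

    def select_training_data(X, y, scores, n_boundary=100, n_random=20):
        """scores: (n_samples, n_classes) network outputs. Keep the
        n_boundary smallest-margin (hardest) samples plus n_random others."""
        s = np.sort(scores, axis=1)
        margin = s[:, -1] - s[:, -2]                  # top-1 minus top-2 score
        boundary_idx = np.argsort(margin)[:n_boundary]
        rest = np.setdiff1d(np.arange(len(X)), boundary_idx)
        random_idx = rng.choice(rest, size=n_random, replace=False)
        idx = np.concatenate([boundary_idx, random_idx])
        return X[idx], y[idx]

The random additions play the accelerating role described in the abstract: boundary-only training sees nearly indistinguishable samples and converges slowly.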
6.

Paper
Nakayama, Kenji ; Hirano, Akihiro ; Ido, Issei
Publication info: Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 1657-1661, 1999-07-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6838
Abstract: The size of a neural network depends strongly on its activation functions. A trainable activation function has been proposed, consisting of a linear combination of basic functions; the activation functions and the connection weights are trained simultaneously. An 8-bit parity problem can be solved using a single output unit and no hidden units. In this paper, we extend this model to multilayer neural networks. Furthermore, nonlinear functions are used at the unit inputs in order to realize more flexible transfer functions. The previous activation functions and the new nonlinear functions are likewise trained simultaneously. More complex pattern classification problems can be solved with a small number of units and fast convergence.
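The trainable activation is a linear combination of fixed basic functions whose mixing coefficients are learned along with the connection weights. A minimal sketch assuming sigmoid bases at a few shifted positions (the abstract does not specify the actual basis set):

    import numpy as np

    def basis(u, shifts=(-2.0, 0.0, 2.0)):
        """Fixed basic functions: sigmoids at assumed shifted positions."""
        return np.stack([1.0 / (1.0 + np.exp(-(u - s))) for s in shifts])

    def trainable_activation(u, c):
        """f(u) = sum_i c_i * g_i(u); the coefficients c are trained by the
        same gradient descent that trains the connection weights."""
        return c @ basis(u)

    def activation_grad_c(u):
        """df/dc_i = g_i(u), so the coefficient update costs no more than
        one extra layer of weights."""
        return basis(u)

With several suitably shifted bases the unit can realize non-monotonic responses, which is presumably how a single output unit can separate parity-like patterns.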
7.

Paper
Nakayama, Kenji ; Hirano, Akihiro ; Kourin, Makoto
Publication info: Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 1681-1686, 2001-07-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6826
概要: In this paper, a synthesis and learning method for the neural network with embedded gate units and a multi-dimensional input is proposed. When the input is multi-dimensional, gate functions are controlled in a multi-dimensional space. In this case, a hypersurface, on which the gate function is formed should be optimized. Furthermore, the switching points should be considered on the unit input. An initialization and a control methods for gate functions, which optimize the hypersurface, the switching point and the inclination, are proposed. The stabilization methods, already proposed, are further modified to be applied to the multi-dimensional environment. The gate functions can be trained together with the connection weights. Discontinuous function approximation is demonstrated to confirm usefulness of the proposed method. 続きを見る