1.

Paper
Tokui, N. ; Nakayama, Kenji ; Hirano, Akihiro
Publication information: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings.  6  pp.349-352,  2003-01-01.  IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6812
Abstract: In order to achieve fast convergence and low computational cost for adaptive filters, a joint method combining a whitening process and the NLMS algorithm is a promising approach. However, the filter coefficient updates are not synchronized with the reflection coefficient updates, resulting in unstable behavior. We analyzed the effects of this asynchrony and proposed the Synchronized Learning Algorithm to solve this problem. The asynchronous error between the two updates is removed, and fast convergence and a small residual error are obtained. This algorithm, however, requires O(ML) computations, where M is the adaptive filter length and L is the lattice predictor length, which is still large compared with the NLMS algorithm. In order to reduce computation while maintaining fast convergence, a block implementation method is proposed: the reflection coefficients are updated only once per block and are held fixed during each interval. The proposed block implementation can be effectively applied to parallel-form adaptive filters, such as sub-band adaptive filters. Simulations using speech signals show that the learning curve of the proposed block implementation is slightly slower than that of our original algorithm, but the computational complexity is reduced.
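Below is a minimal Python sketch (not the authors' implementation) of the block-update schedule this abstract describes: an NLMS filter driven by a lattice-whitened input whose reflection coefficients are re-estimated only once per block and held fixed in between. A Burg-type block estimate stands in for the paper's synchronized update, and all names, lengths, and step sizes are illustrative assumptions.

```python
# Sketch only: NLMS on a lattice-whitened input, reflection coefficients
# re-estimated once per block (Burg-type estimate) and held fixed in between.
# Parameter names and values are illustrative assumptions.
import numpy as np

def block_whitened_nlms(x, d, M=16, L=4, mu=0.5, block_len=256, eps=1e-8):
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    N = len(x)
    w = np.zeros(M)              # NLMS filter coefficients
    k = np.zeros(L)              # reflection coefficients, fixed within a block
    xw_buf = np.zeros(M)         # buffer of whitened input samples
    b_prev = np.zeros(L + 1)     # backward prediction errors at the previous sample
    e_out = np.zeros(N)

    for start in range(0, N, block_len):
        blk = x[start:start + block_len]

        # Block-wise reflection coefficient estimate (Burg-like), applied once
        # per block instead of every sample.
        f, b = blk.copy(), blk.copy()
        for m in range(L):
            num = 2.0 * np.dot(f[1:], b[:-1])
            den = np.dot(f[1:], f[1:]) + np.dot(b[:-1], b[:-1]) + eps
            k[m] = num / den
            f, b = f[1:] - k[m] * b[:-1], b[:-1] - k[m] * f[1:]

        # Sample-by-sample lattice whitening + NLMS using the fixed coefficients.
        for n in range(start, start + len(blk)):
            fm = x[n]
            b_cur = np.zeros(L + 1)
            b_cur[0] = x[n]
            for m in range(L):   # lattice order recursion (tuple assignment keeps old fm)
                fm, b_cur[m + 1] = fm - k[m] * b_prev[m], b_prev[m] - k[m] * fm
            b_prev = b_cur
            xw_buf = np.roll(xw_buf, 1)
            xw_buf[0] = fm                           # whitened input sample
            y = np.dot(w, xw_buf)                    # filter output
            e = d[n] - y                             # error
            w += mu * e * xw_buf / (np.dot(xw_buf, xw_buf) + eps)  # NLMS update
            e_out[n] = e
    return e_out, w
```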
2.

Paper
Nakayama, Kenji ; Hirano, Akihiro ; Nishiwaki, Takayuki
Publication information: Proceedings of the International Joint Conference on Neural Networks.  3  pp.1856-1861,  2003-07-01.  IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6855
Abstract: A network structure and its learning algorithm are proposed for blind source separation applied to nonlinear mixtures. The network has a cascade form consisting of a source separation block followed by a linearization block. The conventional learning algorithm is employed for the separation block, and a new learning algorithm is proposed for the linearization block assuming 2nd-order nonlinearity. After source separation, each output still contains nonlinear components of the same signal source. This nonlinearity is suppressed by the linearization block, whose parameters are iteratively adjusted by solving a 2nd-order equation in a single variable. Simulation results, using 2-channel speech signals and an instantaneous nonlinear mixing process, show good separation performance.
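As a rough illustration of the linearization step described here, the sketch below inverts a 2nd-order per-channel nonlinearity y = s + a*s^2 by solving the quadratic in s; the skewness-based grid search is a toy stand-in for the paper's iterative parameter adjustment, and both the criterion and the function names are assumptions.

```python
# Sketch only: inverting a 2nd-order nonlinearity y = s + a*s**2 by solving the
# quadratic in s; estimate_a is an assumed stand-in for the paper's iterative
# parameter adjustment, not the actual update rule.
import numpy as np

def linearize_2nd_order(y, a, eps=1e-12):
    """Solve a*s**2 + s - y = 0 and keep the root that tends to s = y as a -> 0."""
    y = np.asarray(y, dtype=float)
    if abs(a) < eps:
        return y.copy()
    disc = np.maximum(1.0 + 4.0 * a * y, 0.0)        # clip to keep the sqrt real
    return (-1.0 + np.sqrt(disc)) / (2.0 * a)

def estimate_a(y, candidates=np.linspace(-0.5, 0.5, 101)):
    """Pick the coefficient whose linearized output has the smallest skewness;
    an assumed criterion for illustration (speech is roughly symmetric, while a
    quadratic distortion introduces skew)."""
    def residual_skew(a):
        s = linearize_2nd_order(y, a)
        s = s - s.mean()
        return abs(np.mean(s ** 3)) / (np.mean(s ** 2) ** 1.5 + 1e-12)
    return min(candidates, key=residual_skew)
```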
3.

Paper
Nakayama, Kenji ; Hirano, Akihiro ; Fusakawa, M.
Publication information: Proceedings of the International Joint Conference on Neural Networks.  3  pp.1704-1709,  2001-07-01.  IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6829
Abstract: In multilayer neural networks, network size reduction and fast convergence are important. For this purpose, trainable activation functions and nonlinear synapses have been proposed. When high-order polynomials are used for the nonlinearity, the number of terms in the polynomial becomes very large for a high-dimensional input, which leads to very complicated networks and slow convergence. In this paper, a method to select the useful polynomial terms during the learning process is proposed. The method is based on the genetic algorithm (GA) and uses internal information, the magnitudes of the connection weights, to select the genes for the next generation. A mechanism for pruning the terms is inherently included. Many examples demonstrate the usefulness of the proposed method compared with the ordinary GA method: convergence is stable and the number of selected terms is greatly reduced.
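The sketch below shows one way such weight-magnitude-guided GA term selection could look: binary chromosomes pick monomial terms, fitness is the least-squares fit error on the selected terms, and genes whose fitted coefficients are small in the best individual are preferentially switched off. This is a hedged illustration, not the paper's exact operators; all thresholds and population sizes are assumptions.

```python
# Sketch only: GA-based selection of polynomial terms where the magnitude of the
# fitted coefficients ("connection weights") biases which genes survive.
# Operators, thresholds, and population sizes are illustrative assumptions.
import numpy as np
from itertools import combinations_with_replacement

def polynomial_terms(X, degree=2):
    """Expand X (N x d) into all monomial terms up to the given degree."""
    N, d = X.shape
    cols, names = [np.ones(N)], ["1"]
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, idx], axis=1))
            names.append("*".join(f"x{i}" for i in idx))
    return np.column_stack(cols), names

def fit(Phi, y, mask):
    """Least-squares fit on the selected terms; returns (MSE, weights)."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return float(np.mean(y ** 2)), np.zeros(0)
    w, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
    return float(np.mean((Phi[:, cols] @ w - y) ** 2)), w

def ga_select_terms(Phi, y, pop=20, gens=30, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    T = Phi.shape[1]
    population = rng.integers(0, 2, size=(pop, T))
    for _ in range(gens):
        scored = sorted(((*fit(Phi, y, m), m) for m in population), key=lambda t: t[0])
        best_mse, best_w, best_mask = scored[0]
        mags = np.zeros(T)
        mags[np.flatnonzero(best_mask)] = np.abs(best_w)
        parents = [m for _, _, m in scored[: pop // 2]]
        children = []
        for _ in range(pop):
            a, b = rng.choice(len(parents), 2, replace=False)
            cut = int(rng.integers(1, T))
            child = np.concatenate([parents[a][:cut], parents[b][cut:]])
            # Weight-magnitude-based pruning: terms with tiny coefficients in the
            # best individual are more likely to be switched off.
            child[(mags < 1e-2) & (rng.random(T) < 0.5)] = 0
            child ^= (rng.random(T) < p_mut)          # ordinary bit-flip mutation
            children.append(child)
        population = np.array(children)
    return min(((fit(Phi, y, m)[0], m) for m in population), key=lambda t: t[0])
```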
4.

Paper
Hara, Kazuyuki ; Nakayama, Kenji
Publication information: Proceedings of the International Joint Conference on Neural Networks.  pp.III-543-III-548,  2000-07-01.  IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6823
Abstract: A training data selection method for multi-class data is proposed. The method can be used for multilayer neural networks (MLNNs), which can be applied to pattern classification, signal processing, and other problems that can be treated as classification problems. The proposed data selection algorithm selects the data that are important for achieving good classification performance. However, training with only the selected data converges slowly, so we also propose an acceleration method in which randomly selected data are added to the boundary data. The validity of the proposed methods is confirmed through computer simulation.
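A minimal sketch of the selection idea, under the assumption that "boundary data" can be approximated as samples closest to a differently labeled sample; mixing in randomly selected data mirrors the acceleration step described above. The function name and sample counts are illustrative, not from the paper.

```python
# Sketch only: pick boundary samples (small distance to the nearest sample of a
# different class) and mix in random samples to speed up training.
import numpy as np

def select_training_data(X, y, n_boundary=100, n_random=50, seed=0):
    rng = np.random.default_rng(seed)
    diff = y[:, None] != y[None, :]                      # True where labels differ
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # O(N^2) pairwise distances
    d2[~diff] = np.inf                                   # ignore same-class pairs
    margin = d2.min(axis=1)                              # distance to nearest other class
    boundary_idx = np.argsort(margin)[:n_boundary]
    rest = np.setdiff1d(np.arange(len(X)), boundary_idx)
    random_idx = rng.choice(rest, size=min(n_random, rest.size), replace=False)
    idx = np.concatenate([boundary_idx, random_idx])
    return X[idx], y[idx]
```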
5.

Paper
Nishiwaki, Takayuki ; Nakayama, Kenji ; Hirano, Akihiro
Publication information: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings.  5  pp.V_569-V_572,  2004-05-01.  IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6857
Abstract: A network structure and its learning algorithm are proposed for blind source separation applied to nonlinear mixtures. The nonlinearity is expressed by low-order polynomials, which are acceptable in many practical applications. A separation block and a linearization block are cascaded. In the separation block, the cross terms are suppressed and the signal sources are separated into groups, each of which still includes the high-order components of its own source. These high-order components are further suppressed by the linearization block. A learning algorithm that minimizes the mutual information is applied to the separation block, and a new learning algorithm is proposed for the linearization block. Simulation results, using 2-channel speech signals, instantaneous mixtures, and 2nd-order post-nonlinear functions, show good separation performance.
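For the separation block, mutual-information-minimizing learning is commonly realized as a natural-gradient ICA update; the sketch below shows that standard batch rule as a stand-in. The tanh score function, step size, and iteration count are assumptions, and the linearization block is omitted.

```python
# Sketch only: natural-gradient ICA batch update for the separation block.
import numpy as np

def natural_gradient_ica(X, mu=0.01, iters=200, seed=0):
    """X: mixtures with shape (channels, samples). Returns an unmixing matrix W
    adapted with W += mu * (I - phi(Y) Y^T / N) W, a standard rule for
    minimizing the mutual information of the outputs."""
    rng = np.random.default_rng(seed)
    C, N = X.shape
    W = np.eye(C) + 0.01 * rng.standard_normal((C, C))
    I = np.eye(C)
    for _ in range(iters):
        Y = W @ X
        phi = np.tanh(Y)                   # score function for super-Gaussian sources
        W += mu * (I - (phi @ Y.T) / N) @ W
    return W
```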
6.

Paper
Nakayama, Kenji ; Hirano, Akihiro ; Kourin, Makoto
Publication information: Proceedings of the International Joint Conference on Neural Networks.  3  pp.1681-1686,  2001-07-01.  IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6826
Abstract: In this paper, a synthesis and learning method is proposed for neural networks with embedded gate units and multi-dimensional inputs. When the input is multi-dimensional, the gate functions are controlled in a multi-dimensional space. In this case, the hypersurface on which each gate function is formed should be optimized, and the switching points should be considered on the unit input. Initialization and control methods for the gate functions, which optimize the hypersurface, the switching points, and the inclination, are proposed. The previously proposed stabilization methods are further modified to apply to the multi-dimensional setting. The gate functions can be trained together with the connection weights. Discontinuous function approximation is demonstrated to confirm the usefulness of the proposed method.
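A small sketch of what a gate unit controlled on a hypersurface might look like: a sigmoid gate with a normal vector w, switching point c, and inclination a blends two sub-network outputs so a (near-)discontinuity can be placed at w·x = c. This is an assumed form for illustration, not the paper's exact unit.

```python
# Sketch only: an assumed form of a gate unit controlled on the hypersurface
# w.x = c, with trainable switching point c and inclination a.
import numpy as np

def gate(x, w, c, a):
    """Sigmoidal gate opening across w.x = c; a sets how sharply it switches.
    w, c, and a can be trained by gradient descent together with the ordinary
    connection weights."""
    return 1.0 / (1.0 + np.exp(-a * (x @ w - c)))

def gated_output(x, w, c, a, f_left, f_right):
    """Blend two sub-network outputs so a near-discontinuity sits on the
    switching hypersurface, as in discontinuous function approximation."""
    g = gate(x, w, c, a)
    return (1.0 - g) * f_left(x) + g * f_right(x)
```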