1.

Article
Nakayama, Kenji ; Imai, Kunihiko
Publication info: IEEE International Conference on Neural Networks - Conference Proceedings, vol. 6, pp. 3909-3914, 1994-06-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6836
Abstract: Faculty of Electrical and Computer Engineering, Institute of Science and Engineering, Kanazawa University. A neural demodulator is proposed for amplitude shift keying (ASK) signals. It has several important features compared with conventional linear methods. First, the functions necessary for ASK demodulation, including wide-band noise rejection, pulse waveform shaping, and decoding, can be embodied in a single neural network. This means these functions are not designed separately but are unified in a learning and organizing process. Second, these functions can be self-organized through learning; supervised learning algorithms, such as the backpropagation algorithm, can be applied for this purpose. Finally, both wide-band noise rejection and a very sharp waveform response can be achieved simultaneously, which is very difficult to do with linear filtering. Computer simulation demonstrates the efficiency of the proposed method.
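The unified demodulator described in this abstract can be sketched as a single small network trained by backpropagation on noisy ASK sample windows. All concrete parameters below (samples per bit, noise level, network size) are illustrative assumptions; the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (the abstract gives no concrete parameters): 8 samples
# per bit, 1.0 / 0.0 amplitude keying, additive white noise.
SPB = 8
bits = rng.integers(0, 2, 400)
noisy = np.repeat(bits, SPB).astype(float) + 0.25 * rng.standard_normal(400 * SPB)

# One window of noisy samples per bit -> target bit value.
X = np.array([noisy[i * SPB:(i + 1) * SPB] for i in range(len(bits))])
y = bits.astype(float)

# Single-hidden-layer network trained by plain backpropagation; noise
# rejection, waveform shaping, and decoding all live in this one network.
H = 6
W1 = 0.5 * rng.standard_normal((SPB, H)); b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal(H);        b2 = 0.0
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(500):
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d2 = out - y                         # cross-entropy delta at the output
    d1 = np.outer(d2, W2) * h * (1 - h)  # backpropagated hidden deltas
    W2 -= lr * h.T @ d2 / len(y); b2 -= lr * d2.mean()
    W1 -= lr * X.T @ d1 / len(y); b1 -= lr * d1.mean(axis=0)

decoded = (sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
accuracy = (decoded == bits).mean()
```

The point is that filtering and decision making are not designed as separate stages: the same weights learn both, as the abstract claims.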
2.

Article
Miyoshi, Seiji ; Nakayama, Kenji
Publication info: IEEE International Conference on Neural Networks - Conference Proceedings, vol. 3, pp. 1913-1918, 1997-06-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6848
Abstract: In this paper, the geometric learning algorithm (GLA) is proposed for an elementary perceptron with a single output neuron. The GLA is a modified version of the affine projection algorithm (APA) for adaptive filters. The weight-update vector is determined geometrically, toward the intersection of the k hyperplanes perpendicular to the patterns to be classified, where k is the order of the GLA. In the case of the APA, the target of the coefficient update is a single point, corresponding to the best identification of the unknown system. In the case of the GLA, on the other hand, the target of the weight update is an area in which all the given patterns are classified correctly. Thus, their convergence conditions are different. In this paper, the convergence condition of the 1st-order GLA for 2 patterns is derived theoretically, and a new concept, "the angle of the solution area", is introduced. Computer simulation results show that this new concept provides a good estimate of the convergence properties.
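The 1st-order case described here can be sketched as an NLMS-like projection step: a pattern that is not yet safely classified pulls the weight vector onto the hyperplane perpendicular to that pattern. The margin constant and data below are illustrative assumptions, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Separable two-class data with a clear margin (labels in {-1, +1}).
raw = rng.standard_normal((200, 2))
score = raw[:, 0] + 0.5 * raw[:, 1]
keep = np.abs(score) > 0.3
X, t = raw[keep], np.sign(score[keep])

# 1st-order GLA-style update: project the weight vector onto the hyperplane
# {v : v @ x = lab * margin}, perpendicular to the misclassified pattern x,
# as in an order-1 affine projection step.
w = np.zeros(2)
margin = 0.1
for _ in range(100):
    done = True
    for x, lab in zip(X, t):
        if lab * (w @ x) < margin:                       # not safely classified
            w += (lab * margin - w @ x) * x / (x @ x)    # geometric projection
            done = False
    if done:
        break

train_acc = (np.sign(X @ w) == t).mean()
```

Note how the target is an area, not a point: the loop stops as soon as every pattern sits on the correct side with the chosen margin, anywhere inside the solution area.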
3.

Article
Nakayama, Kenji ; Hirano, Akihiro ; Kanbe, Aki
Publication info: Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. III-253-III-258, 2000-07-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6825
Abstract: Many problems solved by multilayer neural networks (MLNNs) reduce to pattern mapping. If the mapping includes several different rules, such problems are difficult to solve using a single MLNN with linear connection weights and continuous activation functions. In this paper, a structure-trainable neural network is proposed. Gate units are embedded, which can be trained together with the connection weights. Pattern mapping problems that include several different mapping rules can thus be realized using a single network. Since some parts of the network can be shared among different mapping rules, the network size can be reduced compared with modular neural networks, which consist of several independent expert networks.
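The idea of gate units trained jointly with connection weights can be sketched in miniature: a scalar sigmoid gate g(x) blends two linear "rule" branches, and gate and branch parameters receive joint gradient steps. All sizes, names, and the target function are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

x = np.linspace(-1, 1, 200)
y = np.where(x < 0, -x, 2 * x)        # two different mapping rules

a1, b1 = rng.standard_normal(2)       # branch 1: a1*x + b1
a2, b2 = rng.standard_normal(2)       # branch 2: a2*x + b2
u, v = rng.standard_normal(2)         # gate unit: sigmoid(u*x + v)
sig = lambda z: 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    g = sig(u * x + v)
    f1, f2 = a1 * x + b1, a2 * x + b2
    out = g * f1 + (1 - g) * f2       # gate selects between rules
    err = out - y
    # joint gradient step for branch weights and gate parameters
    a1 -= lr * np.mean(err * g * x);        b1 -= lr * np.mean(err * g)
    a2 -= lr * np.mean(err * (1 - g) * x);  b2 -= lr * np.mean(err * (1 - g))
    dg = err * (f1 - f2) * g * (1 - g)
    u -= lr * np.mean(dg * x);              v -= lr * np.mean(dg)

g = sig(u * x + v)
out = g * (a1 * x + b1) + (1 - g) * (a2 * x + b2)
mse = float(np.mean((out - y) ** 2))
```

No single linear map can fit this two-rule target (its best mean-squared error is about 0.19), so a final error well below that shows the trained gate is doing the rule selection.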
4.

Article
Hara, Kazuyuki ; Nakayama, Kenji
Publication info: Proceedings of the International Joint Conference on Neural Networks, pp. III-543-III-548, 2000-07-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6823
Abstract: A training data selection method for multi-class data is proposed. This method can be used with multilayer neural networks (MLNNs). The MLNN can be applied to pattern classification, signal processing, and other problems that can be treated as classification problems. The proposed algorithm selects the data that are important for achieving good classification performance. However, training on the selected data alone converges slowly, so an acceleration method is also proposed: the training set is formed by adding randomly selected data to the boundary data. The validity of the proposed methods is confirmed through computer simulation.
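The selection scheme in this abstract, boundary data plus a random admixture for faster convergence, can be sketched as follows. Here "boundary data" are taken to be the points closest to the opposite class, and the class geometry and mix ratio are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two Gaussian classes in the plane (illustrative data).
A = rng.standard_normal((60, 2)) + [2, 0]    # class 0
B = rng.standard_normal((60, 2)) - [2, 0]    # class 1
X = np.vstack([A, B])
y = np.array([0] * 60 + [1] * 60)

# Distance from each point to the nearest point of the opposite class:
# small distance = near the class boundary = important for classification.
d = np.array([np.min(np.linalg.norm(X[y != y[i]] - X[i], axis=1))
              for i in range(len(X))])

n_boundary, n_random = 20, 10
boundary_idx = np.argsort(d)[:n_boundary]            # closest to the boundary
rest = np.setdiff1d(np.arange(len(X)), boundary_idx)
# Acceleration: mix in randomly selected non-boundary data.
random_idx = rng.choice(rest, n_random, replace=False)
selected = np.concatenate([boundary_idx, random_idx])
```

Training only on the 20 boundary points would give the network a very narrow view of each class; the 10 random points restore some of the class interiors, which is the role the abstract assigns to the added random data.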
5.

Article
Nakayama, Kenji ; Hirano, Akihiro ; Ido, Issei
Publication info: Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 1657-1661, 1999-07-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6838
Abstract: The size of a neural network depends strongly on its activation functions. A trainable activation function has been proposed that consists of a linear combination of basis functions; the activation functions and the connection weights are trained simultaneously. With it, an 8-bit parity problem can be solved using a single output unit and no hidden units. In this paper, we extend this model to multilayer neural networks. Furthermore, nonlinear functions are applied at the unit inputs in order to realize more flexible transfer functions. The previous activation functions and the new nonlinear functions are likewise trained simultaneously. More complex pattern classification problems can be solved with a small number of units and fast convergence.
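The representational point of this abstract, that a single unit with an activation f(z) = Σ c_k φ_k(z) built from basis functions can realize parity, can be checked directly. The sketch below uses a cosine basis, fixes the input weights at π, and fits the coefficients in closed form; this is an illustration of the capacity claim, not the paper's simultaneous training procedure, and the basis choice is an assumption.

```python
import numpy as np

X = np.array([[i >> 2 & 1, i >> 1 & 1, i & 1] for i in range(8)], float)
y = X.sum(axis=1) % 2                       # 3-bit parity target

w = np.full(3, np.pi)                       # z = pi * (number of 1-bits)
z = X @ w

K = 3
P = np.cos(np.outer(z, np.arange(K)))       # basis matrix, phi_k(z) = cos(kz)
c, *_ = np.linalg.lstsq(P, y, rcond=None)   # fit activation coefficients
out = P @ c                                 # single unit, no hidden layer

solved = np.allclose(np.round(out), y)
```

Because cos(kz) oscillates, f(z) can alternate between 0 and 1 as the bit count grows, which is exactly what parity requires and what a fixed monotone sigmoid on a single unit cannot do.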