1.

Paper
Hara, Kazuyuki ; Nakayama, Kenji
Publication info: IEICE Trans. Fundamentals, E81-A, pp. 374-381, 1998-03-01.
URL: http://hdl.handle.net/2297/5654
Abstract: Intelligent Information and Mathematics, Graduate School of Natural Science and Technology, Kanazawa University. A training data selection method is proposed for multilayer neural networks (MLNNs). The method selects a small number of training data that guarantee both generalization and fast training of MLNNs applied to pattern classification. Generalization is achieved by using data located close to the boundary between the pattern classes; however, if only these data are used, training converges slowly, a phenomenon that is analyzed in this paper. In the proposed method, the MLNN is therefore first trained on a number of randomly selected data (Step 1). The data for which the output error is relatively large are selected and paired with the nearest data belonging to a different class, and the newly selected data are in turn paired with their nearest data, so that pairs of data located close to the boundary are found. The MLNN is then further trained using these pairs (Step 2). Since Steps 1 and 2 can be combined in several ways, the proposed method can be applied to both off-line and on-line training. It reduces the number of training data and at the same time speeds up training. Its usefulness is confirmed through computer simulation.
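The pairing step lends itself to a compact implementation. Below is a minimal NumPy sketch of the selection idea as the abstract describes it, not the authors' code; the function name, the `n_seed` parameter, and the fixed two pairing rounds are our assumptions.

```python
import numpy as np

def select_boundary_pairs(X, y, errors, n_seed=20):
    """Sketch of the Step 1 -> Step 2 data selection.

    `errors` holds the per-sample output error of an MLNN already
    trained on a random subset (Step 1).  The n_seed largest-error
    samples are paired with their nearest data of a different class,
    and each new datum is paired once more, so the returned indices
    concentrate near the class boundary (Step 2).
    """
    selected = set(np.argsort(errors)[-n_seed:].tolist())
    for i in list(selected):
        j = i
        for _ in range(2):                    # pair, then pair the new datum
            other = np.where(y != y[j])[0]    # samples of the other classes
            dists = np.linalg.norm(X[other] - X[j], axis=1)
            j = int(other[np.argmin(dists)])  # nearest opposite-class sample
            selected.add(j)
    return np.fromiter(selected, dtype=int)
```

Here `X` is an `(n, d)` data matrix and `y` an integer label vector; the returned indices would form the Step 2 training set.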
2.

Paper
Hara, Kazuyuki ; Nakayama, Kenji
Publication info: IEEE International Conference on Neural Networks - Conference Proceedings, 1, pp. 436-441, 1996-06-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6788
Abstract: Faculty of Electrical and Computer Engineering, Institute of Science and Engineering, Kanazawa University. A training data reduction method for a multilayer neural network (MLNN) is proposed in this paper. The method reduces the data by selecting the minimum number of training data that guarantee the generality of the MLNN. Two techniques are used for this purpose. One is a pairing method, which selects training data by finding the nearest data of different classes, so that data along the class boundary in the data space can be selected. The other is a training method that uses a semi-optimum MLNN in the training process. Since the MLNN classifies data based on the distance from the network boundary, the data it selects lie close to the class boundary; if the semi-optimum MLNN fails to select data from the class boundary, the pairing method can select them. The proposed methods can be applied to both off-line and on-line training, and are investigated through computer simulation.
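As a hedged illustration of the pairing method on its own, the sketch below keeps mutual nearest-neighbour pairs across classes; this is one plausible reading of "finding the nearest data of different classes", not the paper's exact algorithm.

```python
import numpy as np

def mutual_boundary_pairs(X, y):
    """Return index pairs (i, j) of samples from different classes
    that are each other's nearest opposite-class neighbour; such
    pairs lie along the class boundary in the data space."""
    n = len(X)
    nearest = np.empty(n, dtype=int)
    for i in range(n):
        other = np.where(y != y[i])[0]
        dists = np.linalg.norm(X[other] - X[i], axis=1)
        nearest[i] = other[np.argmin(dists)]
    return [(i, int(nearest[i])) for i in range(n)
            if nearest[nearest[i]] == i and i < nearest[i]]
```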
3.

Paper
Hara, Kazuyuki ; Nakayama, Kenji
Publication info: IEEE International Conference on Neural Networks - Conference Proceedings, 1, pp. 600-605, 1995-11-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6787
Abstract: The signal classification performance of a multilayer neural network (MLNN) and of conventional signal processing methods is compared theoretically under a limited observation period and computational load. Signals with N samples are classified based on their frequency components. The comparison is carried out in terms of the degrees of freedom of the signal detection regions in an N-dimensional signal space. As a result, the MLNN has more degrees of freedom and can provide more flexible classification performance than the conventional methods. This analysis is further investigated through computer simulations, using multi-frequency signals and a real application, a dial-tone receiver. The MLNN provides much higher accuracy than the conventional signal processing methods.
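For intuition, a conventional frequency-component detector can be sketched as fixed-band energy thresholds; the rectangular detection regions such a rule carves out of the signal space illustrate the limited degrees of freedom the comparison refers to. The band edges, sampling rate, and function name below are illustrative, not taken from the paper.

```python
import numpy as np

def band_energies(x, bands, fs):
    """Energy of an N-sample signal in fixed frequency bands, a
    stand-in for the conventional Fourier / filter-bank detector.
    Thresholding these energies yields fixed detection regions,
    whereas an MLNN can shape its regions more freely."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.array([power[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in bands])

# crude dial-tone-style test: energy in a low band and a high band
x = np.sin(2 * np.pi * 770 * np.arange(64) / 8000.0)
print(band_energies(x, [(650, 1000), (1150, 1500)], fs=8000.0))
```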
4.

Paper
Hara, Kazuyuki ; Nakayama, Kenji
Publication info: IEEE & INNS Proc. IJCNN'93, Nagoya, 1, pp. 601-604, 1993-10-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6811
Abstract: The frequency analysis capability of multilayer neural networks trained by the back-propagation (BP) algorithm is investigated, using multi-frequency signal classification. The number of frequency sets, that is, signal groups, is 2 to 5, and the number of frequencies included in a signal group is 3 to 5. The frequencies are alternately located among the signal groups. Computer simulation confirms that the neural network has very high resolution: classification rates are about 99.5% for training signals and 99.0% for untrained signals. The results are compared with conventional methods, including Euclidean distance (accuracy of about 65%), the Fourier transform (accuracy of about 10 to 30%), and very high-Q filters (which require a huge number of computations); the neural network requires only as many inner products as there are hidden units. Frequency sensitivity and robustness to random noise are also studied. The networks show high frequency sensitivity, that is, high frequency resolution. Random noise is added to the multi-frequency signals to investigate how the network cancels noise that is uncorrelated among the signals; by increasing the number of samples, or training signals, the effects of random noise can be cancelled.
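The experimental setup is easy to mimic. The sketch below generates multi-frequency signals for two groups whose frequencies alternate; the sample count, sampling rate, and frequency values are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_freq_signal(freqs, n_samples=64, fs=8000.0, noise_std=0.0):
    """Sum of sinusoids with random phases plus optional white noise."""
    t = np.arange(n_samples) / fs
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(freqs))
    x = np.sum([np.sin(2 * np.pi * f * t + p)
                for f, p in zip(freqs, phases)], axis=0)
    return x + rng.normal(0.0, noise_std, size=n_samples)

# two signal groups with alternately located frequencies
group_a = multi_freq_signal([600.0, 800.0, 1000.0], noise_std=0.1)
group_b = multi_freq_signal([700.0, 900.0, 1100.0], noise_std=0.1)
```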
5.

Paper
Hara, Kazuyuki ; Nakayama, Kenji
Publication info: IEEE International Conference on Neural Networks - Conference Proceedings, 1993-03-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6785
6.

Paper
Hara, Kazuyuki ; Amakata, Yoshihisa ; Nukaga, Ryohei ; Nakayama, Kenji
Publication info: IEEE & INNS, Proc. IJCNN'2001, Washington DC, 3, pp. 2036-2041, 2001-07-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6842
Abstract: In this paper, we propose a learning method that updates a synaptic weight with a probability proportional to the output error. The proposed method reduces the computational complexity of learning and at the same time improves classification ability. We point out that an example that produces a small output error contributes little to the update of a synaptic weight. As learning progresses, the number of small-error examples increases while the number of large-error examples decreases, and this imbalance makes the large-error examples difficult to learn. The proposed method counteracts this phenomenon and improves learning ability. Its validity is confirmed through computer simulation.
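A minimal sketch of one on-line step, assuming a hypothetical `net` object with `forward` and `backward_and_update` methods (not an API from the paper): the forward-pass error alone decides, with probability proportional to the error, whether the costly backward pass runs at all, which is where the computational saving comes from.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_step(net, x, t, e_max, lr=0.1):
    """Back-propagate with probability error / e_max (clipped to 1),
    so most small-error examples skip the backward pass while
    large-error examples almost always update the weights.
    Normalizing by a running maximum error e_max is our assumption;
    the abstract only states proportionality."""
    y = net.forward(x)
    error = 0.5 * np.sum((y - t) ** 2)
    if rng.random() < min(1.0, error / e_max):
        net.backward_and_update(x, t, lr)   # hypothetical MLNN interface
    return error
```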
7.

Paper
Hara, Kazuyuki ; Nakayama, Kenji
Publication info: Proceedings of the International Joint Conference on Neural Networks, pp. III-543-III-548, 2000-07-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6823
Abstract: A training data selection method for multi-class data is proposed. The method can be used for multilayer neural networks (MLNNs), which can be applied to pattern classification, signal processing, and other problems that can be regarded as classification problems. The proposed selection algorithm selects the data that are important for achieving good classification performance. However, training on the selected data converges slowly, so an acceleration method is also proposed: the training method adds randomly selected data to the boundary data. The validity of the proposed methods is confirmed through computer simulation.
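The acceleration step amounts to mixing random samples into the boundary set. A one-function sketch, with the mixing ratio left as a parameter of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

def accelerated_subset(boundary_idx, n_total, n_random):
    """Union of the selected boundary data with randomly drawn
    samples, since training on boundary data alone converges
    slowly; `n_random` controls the assumed mixing ratio."""
    extra = rng.choice(n_total, size=n_random, replace=False)
    return np.union1d(np.asarray(boundary_idx), extra)
```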
8.

Paper
Hara, Kazuyuki ; Nakayama, Kenji
Publication info: IEEE International Conference on Neural Networks - Conference Proceedings, 34, pp. 2247-2252, 1998-05-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6887
Abstract: Information Systems, Graduate School of Natural Science and Technology, Kanazawa University. In this paper, a training data selection method for multilayer neural networks (MLNNs) in on-line training is proposed. The purpose of reducing the training data is to lower the computational complexity of training and to save the memory needed to store the data, without losing generalization performance. The method uses a pairing method, which selects nearest-neighbor data by finding the nearest data in the different classes, and the network is trained on the selected data. Since the selected data are located along the class boundary, the trained network can guarantee generalization performance. The efficiency of this method for on-line training is evaluated by computer simulation.
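In the on-line setting, selection has to happen as data arrive. The class below keeps a sample only while it is close to a stored sample of another class; the fixed-radius test is our own stand-in for the paper's criterion.

```python
import numpy as np

class OnlinePairSelector:
    """Store arriving samples that lie within `radius` of the
    nearest stored sample of a different class, so the memory
    fills with near-boundary data."""

    def __init__(self, radius):
        self.radius = radius
        self.X, self.y = [], []

    def offer(self, x, label):
        x = np.asarray(x, dtype=float)
        rivals = [xi for xi, li in zip(self.X, self.y) if li != label]
        if rivals and min(np.linalg.norm(x - xi) for xi in rivals) > self.radius:
            return False            # far from the boundary: discard
        self.X.append(x)            # keep (near boundary, or no rival yet)
        self.y.append(label)
        return True
```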
9.

Paper
Hara, Kazuyuki ; Nakayama, Kenji
Publication info: IEEE International Conference on Neural Networks - Conference Proceedings, 5, pp. 2997-3002, 1994-01-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/11893
Abstract: This paper discusses properties of activation functions in multilayer neural networks applied to pattern classification, and proposes a rule of thumb for selecting activation functions or their combinations. The sigmoid, Gaussian, and sinusoidal functions are selected for their independent and fundamental space-division properties. The sigmoid function is not effective for a single hidden unit, whereas the other functions can provide good performance. When several hidden units are employed, the sigmoid function is useful, but its convergence is still slower than the others. The Gaussian function is sensitive to additive noise, while the others are rather insensitive. As a result, based on convergence rates, minimum error, and noise sensitivity, the sinusoidal function is the most useful both without and with additive noise. The property of each function is discussed in terms of the internal representation, that is, the distributions of the hidden-unit inputs and outputs. Although this selection depends on the input signals to be classified, the periodic function can be effectively applied to a wide range of application fields.
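The three activations are simple to compare side by side; a small sketch (the scalings are illustrative, not the paper's):

```python
import numpy as np

def sigmoid(u):  return 1.0 / (1.0 + np.exp(-u))   # one step: open half-spaces
def gaussian(u): return np.exp(-u ** 2)            # bump: closed regions
def sinusoid(u): return np.sin(u)                  # periodic: repeated regions

u = np.linspace(-4.0, 4.0, 9)
for f in (sigmoid, gaussian, sinusoid):
    print(f"{f.__name__:8s}", np.round(f(u), 2))
```

The comments note the space-division character the paper attributes to each function: a single sigmoid splits its input space once, the Gaussian responds only in a bounded region, and the sinusoid divides the space periodically.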
10.

Paper
Hara, Kazuyuki ; Nakayama, Kenji
Publication info: Proceedings of the International Conference on Artificial Neural Networks, ICANN'94, Sorrento, Italy, pp. 819-822, 1994-05-01. Springer-Verlag / ICANN'94
URL: http://hdl.handle.net/2297/18389
Abstract: Faculty of Electrical and Computer Engineering, Institute of Science and Engineering, Kanazawa University