1.

Article
Nakayama, Kenji ; Kimura, Yoshinori ; Katayama, Hiroshi
Publication info: Proceedings of the International Joint Conference on Neural Networks, vol. 2, pp. 1247-1250, 1993-10-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6789
Abstract: In this paper, quantization level increase in human face images using a multilayer neural network (NN) is investigated. Fundamentally, it is impossible to increase image quality without additional information. However, when the images are limited to some category, restoration is possible based on the common properties of that category. The multilayer NN is trained using 32×32-pixel human face images with 8 quantization levels as the input data and 256-level images as the targets. The standard back-propagation (BP) algorithm is employed, and training sets of 20, 40, and 100 images are examined. By increasing the training data, a general function of regenerating the missing information can be achieved. The internal structure of the trained NN is analyzed using special input images. The analysis confirms that the NN treats the input image as a human face and extracts facial features. The input image is transformed into a human face image using these features together with the common properties of the training data, which are extracted and held in the connection weights.
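A minimal sketch of this training setup in Python (PyTorch), with random tensors standing in for the face-image data; the hidden-layer size, learning rate, and epoch count are illustrative assumptions, not the paper's values:

    import torch
    import torch.nn as nn

    # Stand-in data: 20 face images of 32x32 = 1024 pixels, scaled to [0, 1].
    coarse = torch.randint(0, 8, (20, 1024)).float() / 7.0   # 8-level inputs
    full = torch.rand(20, 1024)                              # 256-level targets

    net = nn.Sequential(
        nn.Linear(1024, 64), nn.Sigmoid(),   # hidden size is an assumption
        nn.Linear(64, 1024), nn.Sigmoid(),   # one output per pixel
    )
    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()

    for epoch in range(500):                 # standard back-propagation (BP)
        opt.zero_grad()
        loss_fn(net(coarse), full).backward()
        opt.step()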
2.

Article
Nakayama, Kenji ; Hirano, Akihiro ; Katoh, Shinya ; Yamamoto, Tadashi ; Nakanishi, Kenichi ; Sawada, Manabu
Publication info: Proceedings of the International Joint Conference on Neural Networks, vol. 2, pp. 1373-1378, 2002-05-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6806
Abstract: In training neural networks, it is important to reduce the number of input variables in order to save memory, reduce network size, and achieve fast training. This paper proposes two methods for selecting useful input variables. The first uses the connection-weight information after training: if the sum of the absolute values of the connection weights related to an input node is large, that input variable is selected. In some cases, only positive connection weights are taken into account. The second method is based on correlation coefficients among the input variables: if the time series of one input variable can be obtained by amplifying and shifting that of another, the former can be absorbed into the latter. These methods are applied to predicting the cutting error caused by thermal expansion and compression in machine tools. The input variables are reduced from 32 points to 16 points while maintaining good prediction within 6 μm, which is applicable to real machine tools.
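A minimal sketch of the two selection criteria, assuming numpy, with random stand-ins for the trained first-layer weights W1 and the input time series X; the shapes and the 0.95 correlation threshold are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(8, 32))     # stand-in trained first-layer weights
    X = rng.normal(size=(200, 32))    # stand-in series of 32 input variables

    # Criterion 1: keep the 16 inputs whose summed absolute outgoing
    # connection weights are largest.
    importance = np.abs(W1).sum(axis=0)
    selected = np.argsort(importance)[-16:]

    # Criterion 2: flag an input that is an amplified/shifted copy of
    # another, detected by a near-unit correlation coefficient.
    corr = np.corrcoef(X, rowvar=False)
    redundant = {j for i in range(32) for j in range(i + 1, 32)
                 if abs(corr[i, j]) > 0.95}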
3.

Article
Nakayama, Kenji ; Hirano, Akihiro ; Fusakawa, M.
Publication info: Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 1704-1709, 2001-07-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6829
Abstract: In multilayer neural networks, network size reduction and fast convergence are important. For this purpose, trainable activation functions and nonlinear synapses have been proposed. When high-order polynomials are used for the nonlinearity, the number of terms in the polynomial becomes very large for a high-dimensional input, which causes very complicated networks and slow convergence. In this paper, a method is proposed for selecting the useful polynomial terms during the learning process. The method is based on the genetic algorithm (GA) and uses internal information, the magnitude of the connection weights, to select the genes for the next generation. A mechanism for pruning terms is inherently included. Many examples demonstrate the usefulness of the proposed method compared with the ordinary GA method: convergence is stable, and the number of selected terms is greatly reduced.
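A heavily stubbed sketch of the idea, assuming numpy: chromosomes are 0/1 masks over candidate polynomial terms, and genes attached to small connection weights are more likely to be pruned. The fitness and weight-magnitude functions are placeholders for actual network training, and all sizes are assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    N_TERMS, POP = 20, 12                    # candidate terms, population size

    def fitness(mask):
        # Placeholder: real code would build the network from the active
        # terms, train it, and score prediction accuracy minus a size penalty.
        return -mask.sum() + rng.normal()

    def weight_magnitudes(mask):
        # Placeholder for |w| of each term's connection after training.
        return rng.random(N_TERMS)

    pop = rng.integers(0, 2, (POP, N_TERMS)) # chromosomes: masks over terms
    for gen in range(50):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-POP // 2:]]  # keep the fitter half
        children = []
        for p in parents:
            child = p.copy()
            mags = weight_magnitudes(p)
            weak = mags < np.median(mags)
            # Terms with small weights are pruned with probability 1/2.
            child[weak] &= rng.integers(0, 2, int(weak.sum()))
            children.append(child)
        pop = np.vstack([parents, np.array(children)])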
4.

Article
Khalaf, Ashraf A.M. ; Nakayama, Kenji
Publication info: IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E81-A, pp. 364-373, 1998-03-01.
URL: http://hdl.handle.net/2297/5646
Abstract: [Division of Information Systems, Graduate School of Natural Science and Technology, Kanazawa University] Time series prediction is a very important technology in a wide variety of fields. Actual time series contain both linear and nonlinear properties, and the amplitude of the time series to be predicted is usually a continuous value. For these reasons, we combine nonlinear and linear predictors in a cascade form. The nonlinear prediction problem is reduced to pattern classification: a set of past samples x(n-1), ..., x(n-N) is transformed into the output, which is the prediction of the next sample x(n). We therefore employ a multilayer neural network with a sigmoidal hidden layer and a single linear output neuron for the nonlinear prediction, called the Nonlinear Sub-Predictor (NSP). The NSP is trained by a supervised learning algorithm using the sample x(n) as the target. However, it is rather difficult for the NSP to generate the continuous amplitude and to predict the linear property, so we employ a linear predictor after it. An FIR filter is used for this purpose, called the Linear Sub-Predictor (LSP). The LSP is trained by a supervised learning algorithm, also using x(n) as the target. In order to estimate the minimum size of the proposed predictor, we analyze the nonlinearity of the time series of interest. Prediction amounts to mapping a set of past samples onto the next sample, and the multilayer neural network is good at this kind of pattern mapping. Still, difficult mappings may exist when several sets of very similar patterns are mapped onto very different samples. The degree of difficulty of the mapping is closely related to the nonlinearity, which determines the necessary number of past samples used for prediction: a difficult mapping requires a large number of past samples. Computer simulations using the sunspot data and artificially generated discrete-amplitude data have demonstrated the efficiency of the proposed predictor and the nonlinearity analysis.
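A minimal sketch of the cascade, assuming numpy, with a synthetic series in place of the sunspot data; the NSP here keeps random weights (its BP training is omitted), and N, the FIR order, and the LMS step size are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 8, 4                             # NSP past samples, FIR order (assumed)
    series = np.sin(np.arange(500) * 0.3)   # stand-in for, e.g., sunspot data

    # NSP: one sigmoidal hidden layer, single linear output. BP training is
    # omitted here; random weights stand in for a trained network.
    W1, b1 = 0.1 * rng.normal(size=(16, N)), np.zeros(16)
    w2, b2 = 0.1 * rng.normal(size=16), 0.0
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    nsp = lambda x: w2 @ sigmoid(W1 @ x + b1) + b2

    # LSP: FIR filter over recent NSP outputs, adapted by LMS toward x(n),
    # the same target the NSP is trained with.
    h = np.zeros(M)
    nsp_out = np.zeros(M)
    for n in range(N, len(series)):
        x = series[n - N:n][::-1]           # x(n-1), ..., x(n-N)
        nsp_out = np.roll(nsp_out, 1)
        nsp_out[0] = nsp(x)
        err = series[n] - h @ nsp_out       # prediction error against x(n)
        h += 0.01 * err * nsp_out           # LMS update (step size assumed)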
5.

Article
Nakayama, Kenji ; Hirano, Akihiro ; Kanbe, Aki
Publication info: Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. III-253-III-258, 2000-07-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6825
Abstract: Many problems solved by multilayer neural networks (MLNNs) reduce to pattern mapping. If the mapping includes several different rules, such problems are difficult to solve with a single MLNN having linear connection weights and continuous activation functions. In this paper, a structure-trainable neural network is proposed. Gate units are embedded in the network and trained together with the connection weights. Pattern mapping problems that include several different mapping rules can thus be realized with a single network. Since some parts of the network can be shared across different mapping rules, the network size can be reduced compared with modular neural networks, which consist of several independent expert networks.
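A minimal sketch of one plausible reading of the gate units, in PyTorch: a trainable sigmoid gate multiplies each hidden activation, so training can switch subnetwork parts on or off while the rest is shared. The multiplicative form and the layer sizes are assumptions:

    import torch
    import torch.nn as nn

    class GatedMLP(nn.Module):
        def __init__(self, n_in=4, n_hidden=8, n_out=1):
            super().__init__()
            self.hidden = nn.Linear(n_in, n_hidden)
            # One trainable gate per hidden unit, trained with the weights.
            self.gate_logits = nn.Parameter(torch.zeros(n_hidden))
            self.out = nn.Linear(n_hidden, n_out)

        def forward(self, x):
            h = torch.sigmoid(self.hidden(x))
            # A gate value near 0 switches its unit off; units whose gates
            # stay open can be shared across different mapping rules.
            return self.out(h * torch.sigmoid(self.gate_logits))

    y = GatedMLP()(torch.randn(5, 4))   # gates get gradients like any weight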
6.

Article
Khalaf, Ashraf A.M. ; Nakayama, Kenji
Publication info: IEEE International Conference on Neural Networks - Conference Proceedings, vol. 3, pp. 1975-1980, 1998-05-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6783
Abstract: Time series prediction is a very important technology in a wide variety of fields. Actual time series contain both linear and nonlinear properties, and the amplitude of the time series to be predicted is usually a continuous value. For these reasons, we combine nonlinear and linear predictors in a cascade form. In order to estimate the minimum size of the proposed predictor, we propose a nonlinearity analysis for the time series of interest. Computer simulations using the sunspot data have demonstrated the efficiency of the proposed predictor and the nonlinearity analysis.
7.

Article
Hara, Kazuyuki ; Nakayama, Kenji
Publication info: Proceedings of the International Joint Conference on Neural Networks, pp. III-543-III-548, 2000-07-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6823
Abstract: A training data selection method for multi-class data is proposed. The method can be used with multilayer neural networks (MLNNs), which can be applied to pattern classification, signal processing, and other problems that can be cast as classification problems. The proposed algorithm selects the data important for achieving good classification performance. However, training on the selected data converges slowly, so an acceleration method is also proposed, in which randomly selected data are added to the boundary data. The validity of the proposed methods is confirmed through computer simulations.
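A minimal sketch of the selection idea, assuming numpy and synthetic two-class data; using the distance to the nearest sample of another class as the boundary criterion is an assumed stand-in for the paper's rule, and the sample counts are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    # Boundary data: samples closest to a sample of another class.
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    d[y[:, None] == y[None, :]] = np.inf    # only cross-class distances
    boundary_idx = np.argsort(d.min(axis=1))[:50]

    # Acceleration: mix randomly selected data into the boundary data.
    random_idx = rng.choice(len(X), 20, replace=False)
    train_idx = np.union1d(boundary_idx, random_idx)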
8.

Article
Nakayama, Kenji ; Hirano, Akihiro ; Fukumura, K.
Publication info: IEEE International Conference on Neural Networks - Conference Proceedings, vol. 2, pp. 1209-1213, 2004-07-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6794
Abstract: The learning process of a single neural network (SNN) is optimized to improve the prediction accuracy of protein secondary structure. The secondary structures are predicted using a multiple alignment of amino acid sequences as the input data. A multi-modal neural network (MNN) has previously been proposed to improve prediction precision; it uses five independent neural networks, and the final decision is made by averaging the outputs of the five SNNs. In the proposed method, the same prediction accuracy is achieved using only a single NN with an optimized learning process. In learning protein structure prediction, overlearning occurs easily, so the learning process is optimized to avoid it. For this purpose, small learning rates, adding small random noise to the input data, and updating the connection weights by the average over a group are useful. The prediction accuracy of 58% obtained with the conventional SNN is improved to 66%, the same accuracy as the MNN, which requires five SNNs.
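A minimal sketch of the three anti-overlearning measures named above, in PyTorch, on stand-in data; the network shape, group size, noise level, and learning rate are all assumptions:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(20, 30), nn.Sigmoid(), nn.Linear(30, 3))
    loss_fn = nn.CrossEntropyLoss()
    lr, group, noise = 0.01, 5, 0.02   # small rate, group size, noise (assumed)

    X = torch.randn(100, 20)           # stand-in for alignment input profiles
    y = torch.randint(0, 3, (100,))    # stand-in helix / sheet / coil labels

    for epoch in range(100):
        grads = [torch.zeros_like(p) for p in net.parameters()]
        for _ in range(group):         # average the updates over a small group
            net.zero_grad()
            loss_fn(net(X + noise * torch.randn_like(X)), y).backward()
            for g, p in zip(grads, net.parameters()):
                g += p.grad / group
        with torch.no_grad():
            for p, g in zip(net.parameters(), grads):
                p -= lr * g            # one averaged, small-step update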
9.

Article
Nakayama, Kenji ; Hirano, Akihiro ; Ido, Issei
Publication info: Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 1657-1661, 1999-07-01. IEEE (Institute of Electrical and Electronics Engineers)
URL: http://hdl.handle.net/2297/6838
Abstract: The size of a neural network depends strongly on its activation functions. A trainable activation function has previously been proposed, consisting of a linear combination of basic functions; the activation functions and the connection weights are trained simultaneously, and an 8-bit parity problem can be solved using a single output unit and no hidden units. In this paper, we extend this model to multilayer neural networks. Furthermore, nonlinear functions are used at the unit inputs in order to realize more flexible transfer functions. The previous activation functions and the new nonlinear functions are also trained simultaneously. More complex pattern classification problems can be solved with a small number of units and fast convergence.
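A minimal sketch of a trainable activation in PyTorch: each unit applies a learned linear combination of fixed basis functions, with the coefficients trained by BP together with the ordinary weights. The particular basis set (sigmoid, tanh, identity) is an assumption, not the paper's choice:

    import torch
    import torch.nn as nn

    class TrainableActivation(nn.Module):
        """Per-unit linear combination of fixed basis functions."""
        def __init__(self, n_units, scale=0.1):
            super().__init__()
            self.coef = nn.Parameter(scale * torch.randn(n_units, 3))

        def forward(self, z):
            # Assumed basis set; the coefficients shape the transfer
            # function of each unit individually during training.
            basis = torch.stack([torch.sigmoid(z), torch.tanh(z), z], dim=-1)
            return (self.coef * basis).sum(dim=-1)

    # Usage: the activation trains like any other layer of the network.
    layer, act = nn.Linear(8, 5), TrainableActivation(5)
    out = act(layer(torch.randn(4, 8)))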