Full Length Research Paper
Abstract
The quality of nasopharyngeal carcinoma (NPC) treatment plans was evaluated using three types of artificial neural networks (ANNs), each trained with three different training algorithms. The three ANNs, Elman (ANN-E), feed-forward (ANN-FF), and pattern recognition (ANN-PR), were trained using three methods: leave-one-out (Train-loo), random selection (Train-random), and user-defined (Train-user). One hundred sets of NPC treatment plans were collected as the input data for the networks. The conformal index (CI) and homogeneity index (HI) were used as the characteristic values to train the neurons. Four grades (A, B, C, and D) were assigned in descending order of quality. Over-training was assessed with respect to the size of the training data and the number of neurons. Receiver operating characteristic (ROC) curves were obtained to evaluate the achieved accuracies. The optimal numbers of neurons for ANN-E, ANN-FF, and ANN-PR were 6, 24, and 9 in the loo method; 26, 22, and 4 in the random-selection method; and 12, 8, and 11 in the user-defined method, respectively. The optimal size of the training data was 92% of the total inputs for ANN-E and ANN-FF, and 76% for ANN-PR. The networks with the highest accuracies were ANN-PR-loo (93.65 ± 3.60%), ANN-FF-loo (88.05 ± 5.84%), and ANN-E-loo (87.55 ± 5.86%), respectively. The networks with the shortest training times were ANN-PR-random (0.55 ± 0.11 s), ANN-PR-user (0.59 ± 0.08 s), and ANN-PR-loo (1.07 ± 0.16 s), respectively. The ROC curves show that the ANN-PR-loo approach has the highest sensitivity, 99%. ANN-PR-loo reduces the amount of trial-and-error during the iterative process of generating inverse treatment plans. It is concluded that ANN-PR-loo is the best of the three models for classifying the quality of NPC treatment plans.
Key words: Artificial neural networks (ANNs), dose-volume histogram (DVH), intelligence system, nasopharyngeal carcinoma (NPC).
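The leave-one-out (Train-loo) scheme described above can be illustrated with a minimal sketch. This is not the paper's actual ANN pipeline: the network is replaced by a simple 1-nearest-neighbour classifier, and the CI/HI values and A-D grades are synthetic stand-ins for the one hundred clinical plans, which are not available here. The sketch only shows how each plan is held out once while the remaining plans serve as training data.

```python
import numpy as np

def leave_one_out_accuracy(features, labels):
    """Hold out each plan once; classify it with a 1-nearest-neighbour
    rule fitted on the remaining plans (a stand-in for the ANNs)."""
    n = len(labels)
    correct = 0
    for i in range(n):
        train_idx = [j for j in range(n) if j != i]  # all plans except plan i
        # distance from the held-out plan to every training plan
        dists = np.linalg.norm(features[train_idx] - features[i], axis=1)
        pred = labels[train_idx[int(np.argmin(dists))]]
        correct += int(pred == labels[i])
    return correct / n

# Synthetic plan-quality data (hypothetical, for illustration only):
# one CI and one HI value per plan, graded A-D by a made-up CI threshold.
rng = np.random.default_rng(0)
ci = rng.uniform(0.5, 1.0, size=40)   # conformal index
hi = rng.uniform(1.0, 1.5, size=40)   # homogeneity index
X = np.column_stack([ci, hi])
y = np.array(["A" if c > 0.85 else "B" if c > 0.75 else
              "C" if c > 0.65 else "D" for c in ci])

acc = leave_one_out_accuracy(X, y)
print(f"leave-one-out accuracy: {acc:.2%}")
```

Because every plan is evaluated exactly once against a model trained on all the others, the loo scheme uses the data most efficiently, which is consistent with the loo-trained networks achieving the highest accuracies in the abstract, at the cost of training the model n times.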
Copyright © 2023 Author(s) retain the copyright of this article.
This article is published under the terms of the Creative Commons Attribution License 4.0