Improving the classification accuracy of computer-aided diagnosis through multimodality breast imaging
Abstract
The purpose of the present study is to evaluate the effect of using multiple imaging modalities on the accuracy achieved by a computer-aided diagnosis system designed for the detection of breast cancer. Towards this aim, 41 cases of breast cancer were selected, 18 of which were diagnosed as malignant and 23 as benign by an experienced physician. Each case included images acquired by means of two imaging modalities: x-ray and ultrasound (US). Manual segmentation was performed on every image in order to delineate and extract the regions of interest (ROIs) containing the breast tumors. Then 104 textural features were extracted per case: 52 from the x-ray images and 52 from the US images. A classification system was designed using the extracted features. The system was first evaluated using the features extracted from the x-ray images alone, then using the features extracted from the US images alone, and finally using the combination of the two feature sets. The proposed system, which employed the Probabilistic Neural Network (PNN) classifier, scored 78.05% in classification accuracy using features from x-ray alone. Classification accuracy increased to 82.95% using features from US alone, while a significant further increase (95.12%) was achieved by combining the features from both x-ray and US. In order to minimize total training time, the proposed system adopted the client-server model to distribute processing tasks over a group of computers interconnected via a local area network. Training time decreased as more clients were employed; with 7 clients, an approximately 4-fold reduction was achieved.
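The abstract gives no implementation details, so the following is only a minimal sketch of the classification step it describes: the 52 x-ray and 52 US textural features of each case are concatenated into a 104-dimensional vector and classified with a Probabilistic Neural Network, i.e. a Parzen-window classifier with Gaussian kernels. The function names, the smoothing parameter `sigma`, the z-score normalisation, and the leave-one-out evaluation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def zscore(A):
    """Normalise each feature column to zero mean and unit variance."""
    return (A - A.mean(axis=0)) / A.std(axis=0)

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Probabilistic Neural Network: a Parzen-window density estimate per class
    using an isotropic Gaussian kernel; each test pattern is assigned to the
    class with the largest mean kernel response (the summation layer)."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        # squared Euclidean distance from x to every training pattern
        d2 = np.sum((X_train - x) ** 2, axis=1)
        act = np.exp(-d2 / (2.0 * sigma ** 2))
        # average activation per class approximates the class-conditional density
        scores = [act[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

def loo_accuracy(X, y, sigma=0.5):
    """Leave-one-out accuracy over all cases (assumed evaluation protocol)."""
    idx = np.arange(len(X))
    correct = sum(
        pnn_predict(X[idx != i], y[idx != i], X[i:i + 1], sigma)[0] == y[i]
        for i in range(len(X))
    )
    return correct / len(X)

# Hypothetical usage with feature matrices of the sizes stated in the abstract:
# xray: (41, 52) textural features from x-ray, us: (41, 52) from ultrasound,
# y: 41 labels (malignant / benign).
# X_combined = np.hstack([zscore(xray), zscore(us)])   # 41 x 104
# print(loo_accuracy(X_combined, y, sigma=0.5))
```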
Keywords
Image Analysis; Pattern Recognition; Multimodality; Breast Cancer; US; X-ray
DOI: 10.26265/e-jst.v5i2.637