Vol. 28 (2019):
Abstracts and Contents of Papers
Machine GRAPHICS & VISION, Vol. 28 (2019), No. 1/4
- Kurek J., Antoniuk I., Górski J., Jegorowa A., Świderski B., Kruk M., Wieczorek G., Pach J., Orłowski A., Aleksiejuk-Gawron J.:
Data Augmentation Techniques for Transfer Learning Improvement in Drill Wear Classification Using Convolutional Neural Network
MGV vol. 28, no. 1/4, 2019, pp. 3-12.
In this paper we introduce an enhanced drill wear recognition method based on a classifier ensemble, obtained using transfer learning and data augmentation methods. Red, green and yellow classes are used to describe the current drill state. The first one corresponds to the case when the drill should be immediately replaced. The second one denotes a tool that is still in good condition. The final class refers to the case when a drill is suspected of being worn out, and a human expert evaluation would be required. The proposed algorithm uses three different pretrained network models and adjusts them to the drill wear classification problem. To ensure satisfactory results, each of the methods used was required to achieve accuracy above 90% for the given classification task. The final evaluation is obtained by voting of all three classifiers. Since the initial data set was small (242 instances), data augmentation was used to artificially increase the total number of drill hole images. The experiments performed confirmed that the presented approach can achieve high accuracy, even with such a limited set of training data.
classifiers ensemble, convolutional neural networks, data augmentation, deep learning, tool condition monitoring.
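The three-classifier vote described in the abstract can be sketched as follows. This is a minimal illustration only: the function name, the use of simple majority voting, and the severity-based tie-break are assumptions, not the authors' code.

```python
from collections import Counter

# Labels follow the paper's scheme: green = good, yellow = check
# manually, red = replace immediately.
LABELS = ("green", "yellow", "red")

def ensemble_vote(predictions):
    """Return the majority label among the classifier outputs.

    Hypothetical tie-break: if no label wins a strict majority,
    the most severe predicted state is returned (red > yellow > green),
    which errs on the side of caution for tool condition monitoring.
    """
    counts = Counter(predictions)
    label, n = counts.most_common(1)[0]
    if n > len(predictions) // 2:
        return label
    for severe in ("red", "yellow", "green"):  # no majority: be cautious
        if severe in counts:
            return severe

print(ensemble_vote(["green", "green", "yellow"]))  # green
print(ensemble_vote(["red", "yellow", "green"]))    # red (tie, severe wins)
```

With three classifiers and three classes, the only possible tie is three different votes, so the fallback path is rarely taken in practice.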
- Kurek J., Antoniuk I., Górski J., Jegorowa A., Świderski B., Kruk M., Wieczorek G., Pach J., Orłowski A., Aleksiejuk-Gawron J.:
Classifiers Ensemble of Transfer Learning for Improved Drill Wear Classification Using Convolutional Neural Network
MGV vol. 28, no. 1/4, 2019, pp. 13-23.
This paper presents an improved method for recognizing the drill state on the basis of images of holes drilled in a laminated chipboard, using a convolutional neural network (CNN) and data augmentation techniques. Three classes were used to describe the drill state: red – for a drill that is worn out and should be replaced, yellow – for a state in which the system should send a warning to the operator, indicating that this element should be checked manually, and green – denoting a drill that is still in good condition, which allows for further use in the production process. The presented method combines the advantages of transfer learning and data augmentation to improve the accuracy of the resulting evaluations. In contrast to classical deep learning methods, transfer learning requires much smaller training data sets to achieve acceptable results. At the same time, data augmentation customized for drill wear recognition makes it possible to expand the original dataset and to improve the overall accuracy. The experiments performed have confirmed the suitability of the presented approach for accurate class recognition in the given problem, even with a small original dataset.
convolutional neural networks, data augmentation, deep learning, tool condition monitoring.
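Data augmentation of the kind the abstract relies on can be sketched with simple geometric transforms. This is a generic illustration, not the paper's augmentation pipeline; the specific transforms (flips and 90-degree rotations) are assumptions chosen because drill-hole images have no preferred orientation.

```python
import numpy as np

def augment(image):
    """Generate simple augmented variants of an image: the original,
    horizontal/vertical flips, and the three 90-degree rotations."""
    variants = [image, np.fliplr(image), np.flipud(image)]
    for k in (1, 2, 3):
        variants.append(np.rot90(image, k))
    return variants

img = np.arange(16).reshape(4, 4)   # stand-in for a drill-hole image
variants = augment(img)
print(len(variants))  # 6 variants from one original
```

Each original image thus yields six training examples, which is one way a 242-instance dataset can be expanded enough for transfer learning to work well.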
- Wieczorek G., Antoniuk I., Kurek J., Świderski B., Kruk M., Pach J., Orłowski A.:
BCT Boost Segmentation with U-net in TensorFlow
MGV vol. 28, no. 1/4, 2019, pp. 25-34.
In this paper we present a new segmentation method for the boost area that remains after removing a tumour in breast conserving therapy (BCT). The selected area is the region on which radiation treatment will later be performed. Consequently, an inaccurate designation of this region can result in the treatment missing its target or focusing on healthy breast tissue that could otherwise be spared. Needless to say, exact indication of the boost area is an extremely important aspect of the entire medical procedure, where a better definition can lead to optimized coverage of the target volume and, as a result, can save normal breast tissue. Precise definition of this area has the potential both to improve local control of the disease and to ensure a better cosmetic outcome for the patient. In our approach we use U-net along with the Keras and TensorFlow frameworks to tailor a precise solution for the indication of the boost area. During the training process we utilize a set of CT images, each of which came with a contour assigned by an expert. We wanted to achieve a segmentation result as close to the given contour as possible. With a rather small initial data set, we used data augmentation techniques to increase the number of training examples, while the final outcomes were evaluated according to their similarity to the ones produced by experts, by calculating the mean square error and the structural similarity index (SSIM).
breast cancer, breast conserving therapy, image segmentation, U-net, Keras, TensorFlow.
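The two evaluation measures named in the abstract can be computed as below. This is a sketch with assumed function names; the SSIM variant shown is the global single-window formula of Wang et al., whereas the paper may use a windowed implementation such as the one in scikit-image.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def ssim_global(a, b, data_range=1.0):
    """Global (single-window) SSIM; stabilizing constants c1, c2
    follow the standard choices 0.01 and 0.03 times the data range."""
    a = a.astype(float); b = b.astype(float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

x = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)  # stand-in segmentation map
print(mse(x, x))                    # 0.0 for identical images
print(round(ssim_global(x, x), 6))  # 1.0 for identical images
```

Lower MSE and SSIM closer to 1 both indicate that a predicted boost-area mask is closer to the expert contour.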
- Pach J., Chmielewski L. J., Orłowski A., Kruk M., Kurek J., Świderski B., Antoniuk I., Wieczorek G., Śmietańska K., Górski J.:
Textural Features Based on Run Length Encoding in the Classification of Furniture Surfaces with the Orange Skin Defect
MGV vol. 28, no. 1/4, 2019, pp. 35-45.
Textural features based upon thresholding and run length encoding have been successfully applied to the problem of classifying the quality of lacquered furniture surfaces exhibiting the surface defect known as orange skin. The set of features for one surface patch consists of 12 real numbers. The classifier used was the one nearest neighbour classifier without feature selection. The classification quality was tested on 808 images of 300 × 300 pixels, taken under controlled, close-to-tangential lighting, with three classes: good, acceptable and bad, in close to balanced numbers. The classification accuracy was not smaller than 98% when the tested surface was not rotated with respect to the training samples, 97% for rotations up to 20 degrees, and 95.5% in the worst case for arbitrary rotations.
quality inspection, furniture surface, orange skin, textural features, run length coding, thresholded image, one nearest neighbour, leave-one-out testing.
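Run-length features of the kind the abstract describes can be sketched as follows. The function names and the particular statistics (mean and maximum run length) are illustrative assumptions; the paper's actual 12-number descriptor is not reproduced here.

```python
import numpy as np

def run_lengths(binary_row):
    """Lengths of consecutive runs of equal values in a 1-D array."""
    row = np.asarray(binary_row)
    change = np.flatnonzero(np.diff(row)) + 1   # indices where value changes
    bounds = np.concatenate(([0], change, [row.size]))
    return np.diff(bounds)

def rle_features(image, threshold):
    """Mean and max run length over all rows of the thresholded image --
    two examples of the kind of statistics a run-length descriptor uses."""
    binary = (np.asarray(image) >= threshold).astype(np.uint8)
    lengths = np.concatenate([run_lengths(r) for r in binary])
    return float(lengths.mean()), int(lengths.max())

img = np.array([[0, 0, 5, 5, 5],
                [7, 0, 0, 0, 7]])   # toy stand-in for a surface patch
feats = rle_features(img, threshold=3)
print(feats)  # (2.0, 3)
```

The intuition is that an orange-skin surface lit close to tangentially produces short, irregular bright runs, while a good surface produces long uniform ones.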
- Talacha K., Świderski B., Kurek J., Kruk M., Półtorak A., Chmielewski L. J., Wieczorek G., Antoniuk I., Pach J., Orłowski A.:
Context-Based Segmentation of the Longissimus Muscle in Beef with a Deep Neural Network
MGV vol. 28, no. 1/4, 2019, pp. 47-57.
The problem of segmenting the cross-section through the longissimus muscle in beef carcasses with computer vision methods was investigated. The available data were 111 images of cross-sections coming from 28 cows (typically four images per cow). Training data were the pixels of the muscles, marked manually. The AlexNet deep convolutional neural network was used as the classifier, and single pixels were the classified objects. Each pixel was presented to the network together with its small circular neighbourhood, and with its context represented by the further neighbourhood, darkened by halving the image intensity. The average classification accuracy was 96%. The accuracy without darkening the context was found to be smaller, with a small but statistically significant difference. The segmentation of the longissimus muscle is the introductory stage for the next steps of assessing the quality of beef for alimentary purposes.
beef carcasses, context-based, segmentation, longissimus muscle, classification, deep convolutional network, beef quality.
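The per-pixel input construction described above (a kept inner neighbourhood plus a darkened outer context) can be sketched as follows. The function name, square patch shape, and edge padding are assumptions; only the halving of the context intensity comes from the abstract.

```python
import numpy as np

def context_patch(image, y, x, inner_r, outer_r):
    """Cut a square patch of radius outer_r around pixel (y, x);
    pixels outside the inner circular neighbourhood of radius inner_r
    have their intensity halved, encoding them as 'context'."""
    img = np.asarray(image, dtype=float)
    pad = np.pad(img, outer_r, mode="edge")       # handle border pixels
    patch = pad[y:y + 2 * outer_r + 1, x:x + 2 * outer_r + 1].copy()
    yy, xx = np.ogrid[-outer_r:outer_r + 1, -outer_r:outer_r + 1]
    outside = yy ** 2 + xx ** 2 > inner_r ** 2    # circular mask
    patch[outside] *= 0.5                         # darken the context
    return patch

img = np.full((10, 10), 100.0)                    # toy uniform image
p = context_patch(img, 5, 5, inner_r=1, outer_r=3)
print(p[3, 3], p[0, 0])  # 100.0 (centre kept), 50.0 (context halved)
```

Feeding such patches to a CNN lets the network see both the pixel's immediate appearance and its wider surroundings, with the darkening telling the two apart.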
- Bator M., Śmietańska K.:
Constraint-based Algorithm to Estimate the Line of a Milling Edge
MGV vol. 28, no. 1/4, 2019, pp. 59-67.
Each practical task has its constraints. They limit the number of potential solutions. Incorporating the constraints into the structure of an algorithm makes it possible to speed up computations by reducing the search space and excluding wrong results. However, such an algorithm is designed for one task only, and its usefulness is limited to tasks which share the same set of constraints. Therefore, it is sometimes restricted to just the single application for which it has been designed, and is difficult to generalise. An algorithm to estimate the straight line representing a milling edge is presented. The algorithm was designed for measurement purposes and meets the requirements related to precision.
constraint-based algorithm, line, milling, measurement.
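A generic illustration of constraining a line estimate is given below: a total-least-squares fit whose direction is restricted to an allowed angular range. The paper's actual constraints and algorithm are not given in the abstract, so everything here, including the function name and the clipping strategy, is an assumption.

```python
import numpy as np

def fit_line_constrained(points, angle_bounds_deg):
    """Total-least-squares line fit with an angle constraint: the fitted
    direction is clipped to the allowed range, a simple way of excluding
    solutions the task's constraints rule out.  Returns (angle_deg, centroid)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # principal direction of the centred point cloud via SVD
    _, _, vt = np.linalg.svd(pts - centroid)
    dx, dy = vt[0]
    if dx < 0:                       # canonical direction: dx >= 0
        dx, dy = -dx, -dy
    angle = np.degrees(np.arctan2(dy, dx))
    lo, hi = angle_bounds_deg
    angle = float(min(max(angle, lo), hi))   # enforce the constraint
    return angle, centroid

# Noisy, nearly horizontal edge points; the edge is known a priori
# to lie between 0 and 10 degrees (hypothetical constraint).
pts = [(0, 0), (1, 0.1), (2, -0.1), (3, 0.05)]
angle, c = fit_line_constrained(pts, angle_bounds_deg=(0.0, 10.0))
print(angle)  # 0.0 (slightly negative fit clipped to the lower bound)
```

Restricting the admissible directions is one concrete example of how constraints shrink the search space, at the price of the algorithm being useful only where those constraints hold.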
- Bator M., Pankiewicz M.:
Image Annotating Tools for Agricultural Purpose – a Requirements Study
MGV vol. 28, no. 1/4, 2019, pp. 69-77.
Images of natural scenes, like those relevant for agriculture, are characterised by a variety of forms of objects of interest and by similarities between objects that one might want to discriminate. This introduces uncertainty into the analysis of such images. Requirements for an image annotation tool to be used in pattern recognition design for agriculture are discussed. A selection of open source annotation tools is presented. Advice on how to use the software to handle uncertainty and missing functionalities is given.
image annotation, agriculture, uncertainty.
Last updated January 31, 2020