Staff Publications


    'Staff publications' is the digital repository of Wageningen University & Research

    'Staff publications' contains references to publications authored by Wageningen University staff from 1976 onward.

    Publications authored by the staff of the Research Institutes are available from 1995 onwards.

    Full text documents are added when available. The database is updated daily and currently holds about 240,000 items, of which 72,000 are open access.


    Records 1 - 20 / 37

    The effect of data augmentation and network simplification on the image-based detection of broccoli heads with Mask R-CNN
    Blok, Pieter M. ; Evert, Frits K. van; Tielen, Antonius P.M. ; Henten, Eldert J. van; Kootstra, Gert - \ 2020
    Journal of Field Robotics (2020). - ISSN 1556-4959
    agriculture - computer vision - learning - perception - sensors

    In current practice, broccoli heads are selectively harvested by hand. The goal of our work is to develop a robot that can selectively harvest broccoli heads, thereby reducing labor costs. An essential element of such a robot is an image-processing algorithm that can detect broccoli heads. In this study, we developed a deep learning algorithm for this purpose, using the Mask Region-based Convolutional Neural Network. To be applied on a robot, the algorithm must detect broccoli heads from any cultivar, meaning that it can generalize on the broccoli images. We hypothesized that our algorithm can be generalized through network simplification and data augmentation. We found that network simplification decreased the generalization performance, whereas data augmentation increased the generalization performance. In data augmentation, the geometric transformations (rotation, cropping, and scaling) led to a better image generalization than the photometric transformations (light, color, and texture). Furthermore, the algorithm was generalized on a broccoli cultivar when 5% of the training images were images of that cultivar. Our algorithm detected 229 of the 232 harvestable broccoli heads from three cultivars. We also tested our algorithm on an online broccoli data set, which our algorithm was not previously trained on. On this data set, our algorithm detected 175 of the 176 harvestable broccoli heads, proving that the algorithm was successfully generalized. Finally, we performed a cost-benefit analysis for a robot equipped with our algorithm. We concluded that the robot was more profitable than the human harvest and that our algorithm provided a sufficient basis for robot commercialization.
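The two augmentation families the abstract contrasts (geometric versus photometric transforms) can be sketched in a few lines. This is a generic illustration, not the authors' implementation; the 90% crop fraction and the 0.8-1.2 brightness range are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def geometric_augment(img):
    """Geometric transforms (rotation, cropping, scaling) -- the family
    the study found most effective for cross-cultivar generalization."""
    img = np.rot90(img, k=int(rng.integers(0, 4)))   # random 90-degree rotation
    h, w = img.shape[:2]
    ch, cw = int(h * 0.9), int(w * 0.9)              # random crop to 90% size
    y = int(rng.integers(0, h - ch + 1))
    x = int(rng.integers(0, w - cw + 1))
    return img[y:y + ch, x:x + cw]

def photometric_augment(img):
    """Photometric transforms (light, colour) -- found less effective."""
    gain = rng.uniform(0.8, 1.2)                     # brightness jitter
    return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)
```

Applying several random draws of such functions to each training image multiplies the effective size of the training set without extra annotation effort.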

    Crash Course Machine Learning
    Kootstra, G.W. ; Dijk, A.D.J. van; Rapado Rincon, David ; Durairaj, J. - \ 2020
    Wageningen : Wageningen University & Research
    This one-day course covers various aspects of machine learning; examples include decision trees, linear regression, neural networks, and case studies on Brix prediction and regression.
    Robust node detection and tracking in fruit-vegetable crops using deep learning and multi-view imaging
    Boogaard, Frans P. ; Rongen, Kamiel S.A.H. ; Kootstra, Gert W. - \ 2020
    Biosystems Engineering 192 (2020). - ISSN 1537-5110 - p. 117 - 132.
    Deep learning - Digital plant phenotyping - Internode length - Multi-view imaging - Tracking node development

    Obtaining high-quality phenotypic data that can be used to study the relationship between genotype, phenotype and environment is still labour-intensive. Digital plant phenotyping can assist in collecting these data by replacing human vision with computer vision. However, for complex traits, such as plant architecture, robust and generic digital phenotyping methods have not yet been developed. This study focusses on internode length in cucumber plants. A method for estimating internode length and internode development over time is proposed. The proposed method firstly applies a robust node-detection algorithm based on a deep convolutional neural network. In tests, the algorithm had a precision of 0.95 and a recall of 0.92. The nodes are detected in images from multiple viewpoints around the plant in order to deal with the complex and cluttered plant environment and to solve the occlusion of nodes by other plant parts. The nodes detected in the multiple viewpoint images are then clustered using affinity propagation. The predicted clusters had a homogeneity of 0.98 and a completeness of 0.99. Finally, a linear function is fitted, which makes it possible to study internode development over time. The presented method was able to measure internode length in cucumber plants with a higher accuracy and a larger temporal resolution than other methods proposed in the literature, and without the time investment needed to obtain the measurements manually. The relative error of our complete method was 5.8%. The proposed method provides many opportunities for robust phenotyping of fruit-vegetable crops grown under greenhouse conditions.
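The clustering step described here maps directly onto scikit-learn's AffinityPropagation, which chooses the number of clusters itself and therefore suits plants whose node count is not known in advance. The sketch below is a toy reconstruction, not the authors' code; the detection coordinates and the preference value are made-up assumptions:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Hypothetical node detections (height along the stem, in metres) from
# three viewpoints; each physical node is detected once per view with a
# small reprojection error. Real inputs would come from the CNN detector.
detections = np.array([
    [0.10], [0.11], [0.09],   # node 1, seen from three views
    [0.30], [0.31], [0.29],   # node 2
    [0.51], [0.50], [0.52],   # node 3
])

# The preference controls how readily points become exemplars; this value
# is an assumption tuned to the toy data above.
ap = AffinityPropagation(preference=-0.01, random_state=0).fit(detections)
node_positions = np.sort(ap.cluster_centers_.ravel())

# Internode lengths follow as differences between consecutive node positions;
# fitting a linear function to these over repeated imaging sessions would
# then track internode development in time, as the abstract describes.
internode_lengths = np.diff(node_positions)
```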

    Kan automatische beeldherkenning helpen bij betere vangstregistratie? : Kunstmatige intelligentie [Can automatic image recognition help improve catch registration? : Artificial intelligence]
    Nguyen, Linh ; Helmond, A.T.M. van; Kootstra, G.W. - \ 2019
    Visserijnieuws (2019). - ISSN 1380-5061 - Visserijnieuws.nl
    Automatic Detection of Tulip Breaking Virus (TBV) Using a Deep Convolutional Neural Network
    Polder, Gerrit ; Westeringh, Nick Van De ; Kool, Janne ; Khan, Haris Ahmad ; Kootstra, Gert ; Nieuwenhuizen, Ard - \ 2019
    IFAC-PapersOnLine 52 (2019)30. - ISSN 2405-8963 - p. 12 - 17.
    Tulip crop production in the Netherlands suffers from severe economic losses caused by virus diseases such as the Tulip Breaking Virus (TBV). Infected plants, which can spread the disease via aphids, must be removed from the field as soon as possible. As the availability of human experts for visual inspection in the field is limited, there is an urgent need for a rapid, automated and objective method of screening. From 2009 to 2012, we developed an automatic machine-vision-based system using classical machine-learning algorithms. In 2012, an experiment was conducted in a tulip field planted at production densities of 100 and 125 plants per square meter, resulting in images with overlapping plants. Experiments based on multispectral images resulted in scores that approached results obtained by experienced crop experts. The method, however, needed to be tuned specifically for each of the data trials, and a NIR band was needed for background segmentation. Recent developments in artificial intelligence, and specifically in the area of convolutional neural networks, allow the development of more generic solutions for the detection of TBV. In this study, a Faster R-CNN network is applied to part of the data from the 2012 experiment. The outcomes show that the results are almost the same as those of the previous method, while using only RGB data.
    Improving data collection for rays
    Nguyen, Linh ; Helmond, Edwin van; Batsleer, Jurgen ; Poos, Jan Jaap ; Kootstra, Gert - \ 2019
    Plant-part segmentation using deep learning and multi-view vision
    Shi, Weinan ; Zedde, Rick van de; Jiang, Huanyu ; Kootstra, Gert - \ 2019
    Biosystems Engineering 187 (2019). - ISSN 1537-5110 - p. 81 - 95.
    2D images and 3D point clouds - digital plant phenotyping - instance segmentation - semantic segmentation

    To accelerate the understanding of the relationship between genotype and phenotype, plant scientists and plant breeders are looking for more advanced phenotyping systems that provide more detailed phenotypic information about plants. Most current systems provide information on the whole-plant level and not on the level of specific plant parts such as leaves, nodes and stems. Computer vision makes it possible to extract information about plant parts from images. However, the segmentation of plant parts is a challenging problem, due to the inherent variation in appearance and shape of natural objects. In this paper, deep-learning methods are proposed to deal with this variation. Moreover, a multi-view approach is taken that allows the integration of information from the two-dimensional (2D) images into a three-dimensional (3D) point-cloud model of the plant. Specifically, a fully convolutional network (FCN) and a Mask R-CNN (region-based convolutional neural network) were used for semantic and instance segmentation on the 2D images. The different viewpoints were then combined to segment the 3D point cloud. The performance of the 2D and multi-view approaches was evaluated on tomato seedling plants. Our results show that the integration of information in 3D outperforms the 2D approach, because errors in 2D are not persistent across the different viewpoints and can therefore be overcome in 3D.
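The 3D integration step described here (combining the per-viewpoint segmentations on the shared point cloud) can be realised as a per-point majority vote over the labels each view assigns to a 3D point. This is a minimal sketch of one plausible fusion rule, not necessarily the authors' exact method:

```python
import numpy as np

def fuse_view_labels(view_labels):
    """Fuse per-view semantic labels into one label per 3D point.

    view_labels: (n_views, n_points) integer array; entry [v, p] is the
    class that view v's 2D segmentation assigns to 3D point p, or -1 if
    the point is occluded in that view.
    """
    n_views, n_points = view_labels.shape
    fused = np.empty(n_points, dtype=int)
    for p in range(n_points):
        votes = view_labels[:, p]
        votes = votes[votes >= 0]            # ignore occluded views
        # majority vote; -1 if no view saw the point at all
        fused[p] = np.argmax(np.bincount(votes)) if votes.size else -1
    return fused
```

A vote like this is why 3D fusion can beat any single 2D view: a segmentation error in one viewpoint is outvoted by the other viewpoints in which the same point is labelled correctly.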

    Inter-row Weed Detection of Sugar Beet Fields Using Aerial Imagery
    Mylonas, N. ; Pereira Valente, J.R. ; IJsselmuiden, J.M.M. ; Kootstra, G.W. - \ 2018
    Proceedings of the IEEE IROS workshop on Agricultural Robotics : learning from Industry 4.0 and moving into the future, September 28, 2017, Vancouver, Canada
    Kounalakis, Tsampikos ; Evert, F.K. van; Ball, David Michael ; Kootstra, G.W. ; Nalpantidis, Lazaros - \ 2017
    Wageningen : Wageningen University & Research - 39
    Visie op de verpakkingslijn van de toekomst [A vision of the packaging line of the future]
    Kootstra, G.W. - \ 2017
    Fully automated, fully flexible : A look inside the revolutionary PicknPack project, which aims to develop new concepts for automated and flexible food
    Kootstra, G.W. - \ 2017
    AsiaFruit Magazine (2017).
    Creating Resilience in Pigs Through Artificial InteLligence (CuRly Pig Tail)
    Timmerman, M. ; Kluivers-Poodt, M. ; Reimert, I. ; Vermeer, H.M. ; Barth, R. ; Kootstra, G.W. ; Riel, J.W. van; Lokhorst, C. - \ 2017
    In: Proceedings of the 7th International Conference on the Assessment of Animal Welfare at Farm and Group level. - Wageningen : Wageningen Academic Publishers - ISBN 9789086863143 - p. 262 - 262.
    The pig provides a huge amount of health and welfare information through its behaviour and appearance (e.g. lying, eating, skin colour, eye colour, hair coat and tail posture). By careful observation of this body language we believe it is possible to identify (early) signals of discomfort, upcoming disease and undesired behaviour. By early detection of these signals, interventions can be carried out at an earlier stage than is currently done to restore the health and welfare of the pig herd. Good health and welfare are the foundation of high resilience in animals, which makes them less vulnerable to disturbances (e.g. infections). For a farmer it is, however, impossible to continuously monitor the body language and behaviour of every pig on his or her farm. By using a combination of non-invasive techniques to collect signals from the pigs and their housing environment (e.g. a camera and a water meter), the pigs can be observed 24/7. By combining computer vision and pig knowledge using machine- and deep-learning techniques, a non-invasive monitoring system can be designed. Deep learning is the current state-of-the-art machine-learning approach for computer vision that is especially powerful in recognising and localising image content, e.g. the location of body parts or visible abnormalities thereof. Deep learning is based on large convolutional neural networks and requires a large amount of manually annotated training (image) data. Ultimately, with this approach the robustness of pig husbandry systems is increased due to better health and welfare conditions for the animals. Additionally, our approach could even lead to a new design of pig housing systems. Furthermore, it increases the job satisfaction of the farmer. Our ambition is to develop advanced monitoring systems that make it possible to stop tail docking altogether, so the curly pig tail once again becomes a common phenomenon on pig farms. To achieve this ambition, we will explore, co-develop and test non-invasive monitoring technologies for pig husbandry.
    D7.5 Report of prototype test under laboratory (and industrial) conditions. Fresh and processed food production line
    Barth, R. ; Tuijl, B.A.J. van; Kootstra, G.W. ; Østergaard, Søren ; Perez, Jose ; Dudbridge, Mike ; Saeys, Wouter ; Wouters, Niels ; Meng, Zhaozhong ; Pekkeriet, E.J. - \ 2016
    EU - 93 p.
    PICKNPACK: Report on the performance of the QAS module : Detecting quality deficient products and shape and position assessment on irregular products
    Roy, Jeroen van; Wouters, Niels ; Kootstra, G.W. ; Zhaozong, Meng ; Wu, Zhipeng ; Van Dael, Mattias ; Verboven, Pieter ; Saeys, Wouter - \ 2016
    European Commission - 27 p.
    Wageningen brengt innovatieve verpakkingslijn [Wageningen presents an innovative packaging line]
    Kootstra, G.W. - \ 2016
    Pick'n Pack-projectteam ontwikkelt flexibele verpakkingslijn : Robotisering heeft de toekomst [Pick'n Pack project team develops a flexible packaging line : Robotization is the future]
    Kootstra, G.W. - \ 2016
    Verpakkingsmanagement (2016).
    Agro Food Robotica: uitdagingen en oplossingen [Agro Food Robotics: challenges and solutions]
    Kootstra, Gert - \ 2016
    Robotic Quality Assessment of fruit and vegetables
    Kootstra, Gert - \ 2016
    PicknPack, een flexibele automatiseringslijn voor de foodindustrie [PicknPack, a flexible automation line for the food industry]
    Kootstra, Gert - \ 2016
    Electronics Letters (Journal)
    Kootstra, Gert - \ 2016
    Electronics Letters (2016). - ISSN 0013-5194