Staff Publications

    'Staff publications' is the digital repository of Wageningen University & Research

    'Staff publications' contains references to publications authored by Wageningen University staff from 1976 onward.

    Publications authored by the staff of the Research Institutes are available from 1995 onwards.

    Full text documents are added when available. The database is updated daily and currently holds about 240,000 items, of which 72,000 are open access.

    A manual that explains all the features is available.

    Records 1 - 20 / 361

    Vrees voor fosfortekort bij koeien is niet nodig
    Dijkstra, Jan - \ 2019
    Zaaien vanggewas vraagt maatwerk
    Verhoeven, John - \ 2019
    Advanced classification of volunteer potato in a sugar beet field
    Suh, Hyun K. - \ 2018
    Wageningen University. Promotor(en): E.J. van Henten, co-promotor(en): J.W. Hofstee; J.M.M. IJsselmuiden. - Wageningen : Wageningen University - ISBN 9789463437912 - 190

    Volunteer potato is a major weed problem in sugar beet production in the Netherlands, and adequate control of volunteer potato is critical. This is underlined by a statutory obligation under which Dutch farmers must remove volunteer potato plants from their fields before the 1st of July of each growing season, to a maximum of two remaining plants per square meter.

    In 2011, the EU SmartBot project, a cross-border collaboration involving 24 partners from Germany and the Netherlands, was initiated to develop a robotic system for several applications, including agricultural use. In AgroBot, part of the SmartBot project, a small, vision-based autonomous weed control system was to be developed for effective control of volunteer potato plants in a sugar beet field. The Clearpath Husky A200 UGV (Unmanned Ground Vehicle) was to be used as the robotic platform. Due to the limited carrying capacity of the Husky, additional infrastructure such as a hood was not a viable option, and because the platform was battery operated, artificial lighting was not considered feasible either. The system therefore had to perform robustly in scenes fully exposed to ambient lighting conditions.

    Within the EU SmartBot project, the primary objective of this research was identified as:

    to develop a computer vision procedure that detects volunteer potato plants under ambient light conditions in a sugar beet field

    For a complete weed control pipeline, including weed detection and weed removal, the following requirements were set. The automatic weeding system should:

    • effectively control more than 95% of the volunteer potato;

    • ensure less than 5% of undesired control of sugar beet plants;

    • ensure a classification time of less than 1 second per field image for real-time operation in the field.

    It was indicated that, because the actual weed removal may not be perfect, classification accuracy should be considerably higher than 95%. The steps required to fulfil the above-mentioned objective form the main line of this thesis, comprising vegetation segmentation (Chapters 2 and 3) and sugar beet/volunteer potato classification (Chapters 4 and 5).

    Chapter 2 addressed the research question: “Does ground shadow detection and removal enhance the performance of vegetation segmentation under natural illumination conditions in the field?”

    In Chapter 2, an algorithm was described and evaluated for ground shadow detection and removal based on colour space conversion and a multilevel threshold. The advantage of using the proposed algorithm was assessed for vegetation segmentation with field images acquired by a High Dynamic Range (HDR) camera under natural illumination. Compared with no shadow removal, applying shadow removal enhanced the performance of vegetation segmentation under natural illumination conditions in the field by an average of 20%, 4.4% and 13.5% in precision, specificity and modified accuracy, respectively, and did not reduce segmentation performance when shadows were not present. The average processing time was 0.46 s, which is feasible for real-time application in the field.
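    The thesis does not reproduce the algorithm here; as a rough sketch of the idea (a colour space conversion followed by a multilevel threshold), the following minimal stand-in uses the HSV value channel as brightness and quantile-based bands in place of the original multilevel threshold. Function and parameter names are illustrative, not the thesis implementation.

```python
import numpy as np

def shadow_mask(rgb, n_levels=3):
    """Flag probable ground-shadow pixels in an RGB image (uint8, HxWx3)."""
    # Colour space conversion: the V channel of HSV is max(R, G, B).
    v = rgb.max(axis=2).astype(float) / 255.0
    # Multilevel threshold, crudely approximated here by brightness quantiles;
    # the darkest band is treated as ground shadow.
    edges = np.quantile(v, np.linspace(0.0, 1.0, n_levels + 1))
    return v <= edges[1]
```

    Pixels flagged this way would be removed from the scene before vegetation segmentation, which is what produced the precision and specificity gains reported above.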

    Chapter 3 addressed the research questions: “Do different combinations of colour index and threshold technique result in different segmentation performance when evaluated on field images? Given the varying conditions in the field, is it better to use one specific combination at all times, or should the combination be adapted to the field conditions at hand for the best segmentation performance?”

    In Chapter 3, the performance of 40 combinations of eight colour indices and five threshold techniques for vegetation segmentation was evaluated. A clear difference in performance, expressed in terms of MA (Modified Accuracy), was observed among the combinations under the conditions of this research. CIVE+Kapur performed best on the dataset, while VEG+Kapur performed worst. Adapting the combination to the given conditions yielded slightly higher performance than using a single combination throughout (in this case CIVE+Kapur). Consistent results were obtained when validated on an independent image dataset. For practical use, however, the slight improvement gained by adapting the combination to the field conditions does not outweigh the investment in sensor technology and software needed to accurately determine those conditions, so the expected advantage of adaptation is limited.
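    As an illustration of one such index-plus-threshold combination, the sketch below computes the CIVE colour index and thresholds it per image. For brevity it uses Otsu's method as the threshold stage rather than Kapur's entropy threshold named above; this is a simplified re-implementation, not the thesis code, and the bin count is an assumption.

```python
import numpy as np

def cive(rgb):
    """Colour Index of Vegetation Extraction; lower = more likely vegetation."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    return 0.441 * r - 0.811 * g + 0.385 * b + 18.78745

def otsu_threshold(values, bins=256):
    """Otsu's threshold: maximise between-class variance of a 1-D sample."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                       # class-0 (low side) weight
    w1 = 1.0 - w0                           # class-1 (high side) weight
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 == 0, 1, w0)
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 == 0, 1, w1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return edges[np.argmax(between) + 1]    # upper edge of best split bin

def segment_vegetation(rgb):
    idx = cive(rgb)
    return idx < otsu_threshold(idx.ravel())
```

    Swapping in a different index or threshold technique only changes `cive` or `otsu_threshold`, which is what makes a 40-combination comparison like the one in Chapter 3 straightforward to run.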

    Chapters 4 and 5 focussed on classification and addressed the following research questions: “Does an algorithm using a Bag-of-Visual-Words (BoVW) model and SIFT or SURF descriptors meet the requirements set for the classification of volunteer potato and sugar beet under natural and varying daylight conditions? If the BoVW model does not meet the requirements, does a deep learning approach, in particular transfer learning based on a Convolutional Neural Network (ConvNet, or CNN), provide better performance that meets the requirements with a limited dataset? Are the processing (or calculation) times fast enough for real-time application?”

    For the classification of sugar beet and volunteer potato under ambient varying daylight conditions, Chapter 4 proposed a classification algorithm using a Bag-of-Visual-Words (BoVW) model based on SIFT or SURF features as well as crop row information in the form of the Out-of-Row Regional Index (ORRI). The highest classification accuracy, 96.5% with a false-negative rate of 0%, obtained using SIFT and ORRI with an SVM, is considerably better than previously reported approaches for weed classification; however, the false-positive rate of 7% deviates from the requirements, since misclassification should be less than 5%. The average classification time of 0.10-0.11 s met the real-time requirements. Adding location information (ORRI) improved overall classification accuracy significantly. The proposed approach proved its potential under varying natural light conditions.
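    The feature construction can be sketched as follows, assuming precomputed local descriptors (e.g. SIFT or SURF) and a learned codebook of visual words; `feature_vector` and `orri` are illustrative names, and in the real pipeline the resulting vector would be fed to an SVM.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Quantise local descriptors to their nearest visual word and count."""
    # Squared Euclidean distance from every descriptor to every codeword.
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()  # normalised visual-word frequencies

def feature_vector(descriptors, codebook, orri):
    """BoVW histogram with the Out-of-Row Regional Index appended."""
    return np.append(bovw_histogram(descriptors, codebook), orri)
```

    Appending ORRI as one extra dimension is the simplest way to combine appearance with crop row location, mirroring the finding above that location information significantly improved accuracy.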

    Since the required classification accuracy was not obtained in Chapter 4, further research was carried out on the classification of sugar beet and volunteer potato under ambient varying daylight conditions. Chapter 5 evaluated a transfer learning procedure with three different implementations of AlexNet (Part I) and then assessed the performance of different ConvNet architectures (Part II): AlexNet, VGG-19, GoogLeNet, ResNet-50, ResNet-101 and Inception-v3. In Part I, the highest classification accuracy (98.0%) was obtained with AlexNet in Scenario 2; in Scenarios 1 and 3, the highest classification accuracies were 97.0% and 97.3%, respectively. In Part II, the highest classification accuracy of 98.7% was obtained. This result is, to the best of our knowledge, considerably better than any other approach reported in the literature for crop and weed classification. Transfer learning provided very promising performance for the classification of sugar beet and volunteer potato images under ambient varying light conditions. The deep learning approach based on ConvNets provided better performance than the approach of Chapter 4 and satisfied the requirements. All procedures were feasible for real-time field applications (classification time < 0.1 s).

    The full pipeline for weed detection consists of three steps: 1) vegetation segmentation, i.e. separating pixels in an image into plant pixels and non-plant pixels, 2) individual object identification, i.e. identification of individual plants (objects) in the set of plant pixels obtained after segmentation, and 3) classification of the plants into two classes, sugar beet (crop) and volunteer potato (weed).
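    The three-step structure above can be expressed as a simple composition; this is a structural sketch only, with each callable standing in for the corresponding component of the pipeline.

```python
def weed_detection_pipeline(image, segment, identify, classify):
    """Compose the three weed detection steps; each argument is a callable."""
    mask = segment(image)            # 1) vegetation segmentation
    plants = identify(mask)          # 2) individual object identification
    return [classify(image, p) for p in plants]  # 3) per-plant classification
```

    Framing the pipeline this way makes explicit that errors compound across the steps, which is why a full pipeline may perform worse than the classification step alone.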

    In this thesis, steps 1 and 3, i.e. image segmentation and the classification of sugar beet/volunteer potato, were successfully addressed. Step 2, the identification of individual plants in the images, was not addressed. Despite this limitation, it can be concluded that significant progress has been made in this area of study, given that the reported algorithms were developed using images captured in full daylight with significant variations in light colour and intensity; a distinct challenge that has so far been circumvented by using hoods and artificial lighting. Yet the question remains unanswered whether a full pipeline, including all three steps, would be able to meet the requirements identified at the outset of the research.

    With current hardware and a suitable software implementation, it seems that the requirement of 1 s per image for real-time operation of a weed control system can be attained. The highest classification accuracy of 98.7%, obtained in Chapter 5, supports meeting the required 95% control of volunteer potatoes, but when the ConvNet classifier is embedded in a full pipeline that also contains vegetation segmentation and individual plant identification, some degradation in performance can be expected. Given that the images in this research were obtained under varying daylight conditions, the results show the potential of the proposed approach and compare favourably with classification results in the range of 85-90% obtained in various previous studies using hoods and artificial lighting. It is therefore safe to say that this research has laid the foundation for field weed control with a small robotic platform.

    Transfer learning for the classification of sugar beet and volunteer potato under field conditions
    Suh, Hyun K. ; IJsselmuiden, Joris ; Hofstee, Jan W. ; Henten, Eldert J. van - \ 2018
    Biosystems Engineering 174 (2018). - ISSN 1537-5110 - p. 50 - 65.
    Automated weed control - Convolutional neural network - Deep learning - Transfer learning - Weed classification

    Classification of weeds amongst cash crops is a core procedure in automated weed control. Addressing volunteer potato control in sugar beet, the EU SmartBot project aimed to control more than 95% of volunteer potatoes and to ensure less than 5% undesired control of sugar beet plants. A promising way to meet these requirements is deep learning. Training an entire network from scratch, however, requires a large dataset and a substantial amount of time, and in this situation transfer learning can be a promising solution. This study first evaluates a transfer learning procedure with three different implementations of AlexNet and then assesses the performance difference amongst six network architectures: AlexNet, VGG-19, GoogLeNet, ResNet-50, ResNet-101 and Inception-v3. All nets had been pre-trained on the ImageNet dataset. These nets were used to classify sugar beet and volunteer potato images taken under ambient varying light conditions in agricultural environments. The highest classification accuracy for the different implementations of AlexNet was 98.0%, obtained with an AlexNet architecture modified to generate binary output. Comparing the different networks, the highest classification accuracy, 98.7%, was obtained with VGG-19 modified to generate binary output. Transfer learning proved to be effective and showed robust performance with plant images acquired in different periods of various years on two soil types. All scenarios and pre-trained networks were feasible for real-time applications (classification time < 0.1 s). Classification is only one step in weed detection, and a complete pipeline for weed detection may reduce the overall performance.

    Sugar beet and volunteer potato classification using Bag-of-Visual-Words model, Scale-Invariant Feature Transform, or Speeded Up Robust Feature descriptors and crop row information
    Suh, Hyun K. ; Hofstee, Jan Willem ; IJsselmuiden, Joris ; Henten, Eldert J. van - \ 2018
    Biosystems Engineering 166 (2018). - ISSN 1537-5110 - p. 210 - 226.
    Bag-of-Visual-Words - Posterior probability - SIFT - SURF - Weed classification
    One of the most important steps in vision-based weed detection systems is the classification of weeds growing amongst crops. In the EU SmartBot project it was required to effectively control more than 95% of volunteer potatoes and to ensure less than 5% damage to sugar beet. Classification features such as colour, shape and texture have been used individually or in combination in classification studies, but they have proved unable to reach the required classification accuracy under natural and varying daylight conditions. A classification algorithm was developed using a Bag-of-Visual-Words (BoVW) model based on Scale-Invariant Feature Transform (SIFT) or Speeded Up Robust Feature (SURF) features with crop row information in the form of the Out-of-Row Regional Index (ORRI). The highest classification accuracy (96.5% with zero false-negatives) was obtained using SIFT and ORRI with a Support Vector Machine (SVM), which is considerably better than previously reported research, although its 7% false-positive rate deviated from the requirements. The average classification time of 0.10–0.11 s met the real-time requirements. The SIFT descriptor showed better classification accuracy than SURF, but classification time did not vary significantly. Adding location information (ORRI) significantly improved overall classification accuracy. SVM showed better classification performance than random forest and neural network classifiers. The proposed approach proved its potential under varying natural light conditions, but implementing a practical system, including vegetation segmentation and weed removal, may reduce the overall performance, and more research is needed.
    'De optimale groenbemester is nog niet gevonden'
    Verhoeven, John - \ 2018
    Maisrassen moeten stevig staan
    Groten, Jos - \ 2018
    'Risico maiskopbrand onderschat'
    Groten, Jos - \ 2018
    Improved vegetation segmentation with ground shadow removal using an HDR camera
    Suh, Hyun K. ; Hofstee, Jan W. ; Henten, Eldert J. van - \ 2018
    Precision Agriculture 19 (2018)2. - ISSN 1385-2256 - p. 218 - 237.
    High dynamic range - Image processing - Shadow detection and removal - Vegetation segmentation - Weed control
    A vision-based weed control robot for agricultural field application requires robust vegetation segmentation. The output of vegetation segmentation is the fundamental element in the subsequent process of weed and crop discrimination as well as weed control. There are two challenging issues for robust vegetation segmentation under agricultural field conditions: (1) to overcome strongly varying natural illumination; (2) to avoid the influence of shadows under direct sunlight conditions. A way to resolve the issue of varying natural illumination is to use high dynamic range (HDR) camera technology. HDR cameras, however, do not resolve the shadow issue. In many cases, shadows tend to be classified during the segmentation as part of the foreground, i.e., vegetation regions. This study proposes an algorithm for ground shadow detection and removal, which is based on color space conversion and a multilevel threshold, and assesses the advantage of using this algorithm in vegetation segmentation under natural illumination conditions in an agricultural field. Applying shadow removal improved the performance of vegetation segmentation with an average improvement of 20, 4.4, and 13.5% in precision, specificity and modified accuracy, respectively. The average processing time for vegetation segmentation with shadow removal was 0.46 s, which is acceptable for real-time application.
    Characterization of the air flow and the liquid distribution of orchard sprayers
    Zande, J.C. van de; Schlepers, M. ; Hofstee, J.W. ; Michielsen, J.G.P. ; Wenneker, M. - \ 2017
    - p. 41 - 42.
    Nederlandse sojateelt zit in de lift
    Timmer, Ruud ; Visser, Chris de - \ 2017
    Quantification of simulated cow urine puddle areas using a thermal IR camera
    Snoek, Dennis ; Hofstee, Jan Willem ; Dueren den Hollander, Arjen W. van; Vernooij, Roel E. ; Ogink, Nico W.M. ; Groot Koerkamp, Peter W.G. - \ 2017
    Computers and Electronics in Agriculture 137 (2017). - ISSN 0168-1699 - p. 23 - 28.
    Adaptive threshold - Ammonia emission - Cow urine - Infrared camera - Puddle area

    In Europe, National Emission Ceilings (NEC) have been set to regulate the emissions of harmful gases, like ammonia (NH3). From NH3 emission models and a sensitivity analysis, it is known that one of the major variables that determines NH3 emission from dairy cow houses is the urine puddle area on the floor. However, puddle area data from cow houses is scarce. This is caused by the lack of appropriate measurement methods and the challenging measurement circumstances in the houses. In a preliminary study inside commercial dairy cow houses, an IR camera was successfully tested to distinguish a fresh urine puddle from its background to determine a puddle's area. The objective of this study was to further develop, improve and validate the IR camera method to determine the area of a warm fluid layer with a measurement uncertainty of <0.1 m2. In a laboratory set-up, 90 artificial, warm, blue puddles were created, and both an IR and a colour image of each puddle were taken within 5 s after puddle application. For the colour images, three annotators determined the ground truth puddle areas (Ap,GT). For the IR images, an adaptive IR threshold algorithm was developed, based on the mean background temperature and the standard deviation of all temperature values in an image. This IR algorithm was able to automatically determine the IR puddle area (Ap,IR) in each IR image. The agreement between the two methods was assessed. The Ap,IR underestimated the Ap,GT by 2.53%, which was compensated for by the model Ap,GT = 1.0253·Ap,IR. This regression model had a zero intercept, and the noise was only 0.0651 m2, so the measurement uncertainty was <0.1 m2. In addition, the Ap,IR was not affected by the mean background temperature.
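    The adaptive threshold described above can be sketched as follows. This is a simplified reading of the method: the exact background estimate and the factor `k` are assumptions, while the 1.0253 correction is the regression model quoted in the abstract.

```python
import numpy as np

def puddle_area(temp, pixel_area_m2, k=2.0):
    """Estimate a warm puddle's area (m2) from a thermal image (deg C)."""
    # Adaptive threshold built from the mean temperature and the standard
    # deviation of all temperature values in the image (k is an assumption).
    threshold = temp.mean() + k * temp.std()
    ap_ir = (temp > threshold).sum() * pixel_area_m2
    # Regression correction from the abstract: Ap,GT = 1.0253 * Ap,IR.
    return 1.0253 * ap_ir
```

    Because the threshold is derived per image, the estimate adapts to the ambient floor temperature, consistent with the finding that Ap,IR was not affected by the mean background temperature.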

    Citizen science based symptom scores of allergic rhinitis to validate the grass pollen hay fever forecast
    Weger, L.A. ; Bas Hofstee, H. ; Vliet, A.J.H. van; Hiemstra, P.S. ; Sont, Jacob K. - \ 2015
    European Respiratory Journal 46 (2015)suppl 59. - ISSN 0903-1936
    Introduction: On average 23% of the European population suffers from allergic rhinitis of which pollen is a major cause. Hay fever symptom forecasts can help these patients to adapt their behaviour and to take their medication in time. We developed the LUMC hay fever forecast for grass pollen allergic patients based on local weather parameters (de Weger et al. Int J Biometeorol 2013). In this study we analysed to what extent symptom scores collected by the citizen science platform www.Allergieradar.nl (de Weger et al. Allergy 2014) are correctly predicted by the LUMC hay fever forecast.
    Colour temperature based colour correction for plant discrimination
    Hofstee, J.W. ; Jager, M.G. de - \ 2014
    Shadow-resistant segmentation based on illumination invariant image transformation
    Suh, H.K. ; Hofstee, J.W. ; Henten, E.J. van - \ 2014
    Robust plant image segmentation under natural illumination conditions is still a challenging process for vision-based agricultural applications. One of the challenging aspects of natural conditions is the large variation in illumination intensity. The illumination in the field changes continually, depending on sunlight intensity, sun position and moving clouds. This change affects the RGB pixel values of the acquired images and leads to an inconsistent colour appearance of the plants. Under these conditions, plant segmentation based on RGB indices mostly produces poor thresholding results. Moreover, when shadows are present in the scene, which is not uncommon in the field, plant segmentation becomes even more challenging. Excess Green (ExG) and other RGB indices have been widely used for plant image segmentation. Although ExG-based segmentation is generally accepted as one of the most common and effective methods, it often provides poor segmentation results, especially when the image scene contains an extreme illumination difference caused by dark shadows. To build an automated mobile weed control system, within the framework of the SmartBot project with its focus on the detection and control of volunteer potatoes in sugar beet, the vision-based system should first be able to detect plants against the soil background, even in dark shadow regions. The objective of this research was to evaluate the segmentation robustness of an illumination-invariant transformation in comparison with the ExG method under natural illumination conditions. Using the illumination-invariant transformation, global and local thresholds (Otsu with reconstruction) were assessed to segment plant images. A ground shadow detection process was implemented to remove the ground shadow region and the background. The global threshold outperformed ExG, and the local threshold could effectively remove the soil background region. Even under extreme illumination differences in a scene including sharp dark shadows due to bright sunshine, the illumination-invariant transformation produced robust segmentation results.
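    A common choice for such an illumination-invariant transformation is a Finlayson-style log-chromaticity projection; the sketch below is illustrative rather than the implementation evaluated here, and the projection angle `theta` is camera dependent and assumed known.

```python
import numpy as np

def illumination_invariant(rgb, theta):
    """Project log-chromaticities onto an angle to suppress illumination."""
    eps = 1e-6  # avoid log(0) on dark pixels
    x = np.log((rgb[..., 0] + eps) / (rgb[..., 1] + eps))  # log(R/G)
    y = np.log((rgb[..., 2] + eps) / (rgb[..., 1] + eps))  # log(B/G)
    return x * np.cos(theta) + y * np.sin(theta)
```

    Because the transformation works on channel ratios, a uniform brightening or darkening of the scene cancels out, which is what makes thresholding the invariant image robust to shadows.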
    Phenotyping large tomato plants in the greenhouse using a 3D light-field camera
    Polder, G. ; Hofstee, J.W. - \ 2014
    In: Proceedings of the 2014 ASABE and CSBE/SCGAB Annual International Meeting. - ASABE - ISBN 9781632668455 - p. 153 - 159.
    Plant phenotyping is an emerging science that links genomics with functional plant characteristics. The recent availability of extremely fast high-throughput genotyping technologies has made high-throughput phenotyping a major bottleneck in plant breeding programs. As a consequence, new camera-based technologies to relieve the phenotyping bottleneck attract considerable attention. Whereas most plant phenotyping technologies are based on the approach of bringing the plants to the image recording device, our system brings the camera system to the plants, creating great flexibility in observing plants under practical growing conditions. A new camera, based on light-field technology, was used for image recording. This single-lens 3D camera is constructed by placing a micro lens array in front of the image sensor, which also allows the user to change the focus and the point of view after a picture has been taken. While the concept of such plenoptic cameras has been known since 1908, only recently have the increased computing power of low-cost hardware and the advances in micro lens array production made the application of plenoptic cameras feasible. Since this camera outputs a pixel-to-pixel registered color image and depth map, it overcomes limitations of commonly used techniques such as stereo vision and time of flight. During the summer of 2013 an experiment was carried out in a commercial tomato greenhouse in the Netherlands. In this paper, first preliminary results are presented and the performance of light-field technology for plant phenotyping is discussed.
    Aardappelopslagverwijderings-app : Volunteer
    Hofstee, J.W. ; Henten, E. van - \ 2014
    In: AgroBot - Grensverleggende innovaties voor de landbouw / van Werven, M., van Pol, J.H.G., van Haren, R., INTERREG IVA project SmartBot - p. 8 - 9.
    BoniRob - multifunctioneel open robot platform
    Rahe, F. ; Henten, E. van; Hofstee, J.W. ; Ruckalshausen, A. - \ 2014
    In: AgroBot - Grensverleggende innovaties voor de landbouw / van Werven, M., van Pol, J.H.G., van Haren, R., INTERREG IVA project SmartBot - p. 6 - 7.
    Detection of volunteer potato plants
    Nieuwenhuizen, A.T. ; Steen, S.P. van der; Hofstee, J.W. ; Henten, E.J. van - \ 2014
    precisielandbouw - akkerbouw - opslag (planten) - sensors - suikerbieten - cultuurplanten als onkruiden - proeven op proefstations - precision agriculture - arable farming - volunteer plants - sensors - sugarbeet - crop plants as weeds - station tests
    Within different experimental fields sugar beets and volunteer potato plants have been detected.
    Micro-sprayer for application of glyphosate on weed potato plants between sugar beets
    Nieuwenhuizen, A.T. ; Hofstee, J.W. ; Zande, J.C. van de; Henten, E.J. van - \ 2014
    precisielandbouw - akkerbouw - suikerbieten - gewasbescherming - cultuurplanten als onkruiden - onkruidbestrijding - opslag (planten) - aardappelen - sensors - chemische bestrijding - precision agriculture - arable farming - sugarbeet - plant protection - crop plants as weeds - weed control - volunteer plants - potatoes - sensors - chemical control
    Within different experimental fields sugar beets and volunteer potato plants have been detected.