Robot navigation in orchards with localization based on Particle filter and Kalman filter
Blok, Pieter M. ; Boheemen, Koen van; Evert, Frits K. van; IJsselmuiden, Joris ; Kim, Gook Hwan - \ 2019
Computers and Electronics in Agriculture 157 (2019). - ISSN 0168-1699 - p. 261 - 269.
Autonomous robot navigation - Kalman filter - Orchard - Particle filter - Probabilistic localization
Fruit production in orchards currently relies on high labor inputs. Concerns arising from the increasing labor cost and shortage of labor can be mitigated by the availability of an autonomous orchard robot. A core feature of every mobile orchard robot is autonomous navigation, which depends on sensor-based robot localization in the orchard environment. This research validated the applicability of two probabilistic localization algorithms that used a 2D LIDAR scanner for in-row robot navigation in orchards. The first localization algorithm was a Particle filter (PF) with a laser beam model, and the second was a Kalman filter (KF) with a line-detection algorithm. We evaluated the performance of the two algorithms when autonomously navigating a robot in a commercial Dutch apple orchard. Two experiments were executed to assess the navigation performance of the two algorithms under comparable conditions. The first experiment assessed the navigation accuracy, whereas the second experiment tested the algorithms’ robustness. In the first experiment, when the robot was driven at 0.25 m/s, the root mean square error (RMSE) of the lateral deviation was 0.055 m with the PF algorithm and 0.087 m with the KF algorithm. At 0.50 m/s, the RMSE was 0.062 m with the PF algorithm and 0.091 m with the KF algorithm. In addition, with the PF the lateral deviations were equally distributed to both sides of the optimal navigation line, whereas with the KF the robot tended to navigate to the left of the optimal line. The second experiment tested the algorithms’ robustness in coping with missing trees in six different tree row patterns. The PF had a lower RMSE of the lateral deviation in five of the six tree patterns. In three out of the six patterns, navigation with the KF led to lateral deviations that were biased to the left of the optimal line. The angular deviations of the PF and the KF were in the same range in both experiments.
From the results, we conclude that a PF with laser beam model is to be preferred over a line-based KF for the in-row navigation of an autonomous orchard robot.
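The Particle filter compared above can be sketched in heavily simplified form for a single state variable: the robot's lateral offset from the row centreline. Everything below — the motion model, the scalar measurement model, and all noise levels — is an illustrative assumption, not the paper's implementation (which weights particles with a full laser beam model over the raw 2D scan):

```python
import math
import random

def particle_filter_step(particles, control, measurement,
                         motion_std=0.05, meas_std=0.10):
    """One predict-weight-resample cycle for a 1D lateral-offset estimate.

    particles: hypothesised lateral offsets (metres) from the row centreline.
    control: commanded lateral motion since the last step.
    measurement: offset derived from the (here simulated) LIDAR scan.
    """
    # Predict: apply the motion model plus diffusion noise
    moved = [p + control + random.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the measurement given each particle
    weights = [math.exp(-0.5 * ((m - measurement) / meas_std) ** 2)
               for m in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(1)
particles = [random.uniform(-0.5, 0.5) for _ in range(500)]
for _ in range(20):  # robot holds position; sensor keeps reporting 0.12 m
    particles = particle_filter_step(particles, control=0.0, measurement=0.12)
estimate = sum(particles) / len(particles)  # posterior mean lateral offset
```

After repeated updates the particle cloud concentrates around the measured offset; the posterior mean is then used as the localisation estimate.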
Inter-row Weed Detection of Sugar Beet Fields Using Aerial Imagery
Mylonas, N. ; Pereira Valente, J.R. ; IJsselmuiden, J.M.M. ; Kootstra, G.W. - \ 2018
Evaluation of the performance of PoultryBot, an autonomous mobile robotic platform for poultry houses
Vroegindeweij, Bastiaan A. ; Blaauw, Sam K. ; IJsselmuiden, Joris M.M. ; Henten, Eldert J. van - \ 2018
Biosystems Engineering 174 (2018). - ISSN 1537-5110 - p. 295 - 315.
Autonomous navigation - Floor egg collection - Mobile monitoring - Mobile robot - Performance evaluation - Poultry farming
Assessment of animal status, assessment of housing conditions, and manual collection of floor eggs are the major daily tasks for poultry farmers. To assist the farmer in these tasks, PoultryBot, an autonomous mobile robot for use in poultry houses, has been developed. In earlier research, several components of PoultryBot were discussed in detail. Here, the performance of the robot is evaluated under practical conditions. For navigation, different paths were used to assess performance in various tasks, such as area sweeping and surveying close to walls. PoultryBot proved capable of navigating autonomously for more than 3000 m, while avoiding obstacles and dealing with the hens present. The robustness of its navigation was tested by confronting PoultryBot with obstacles in different positions with respect to its path and with different settings of the navigation parameters. Both factors clearly influenced the driving behaviour of PoultryBot. For floor egg collection, detection and collection of eggs were assessed at 5 predefined egg positions lateral to the robot's path. Over 300 eggs were tested; 46% were collected successfully, 37% were not collected successfully, and 16% were missed. The most frequently observed failures occurred when the collection device was just next to the egg. This problem is thought to be solvable by improving the control algorithm. The results demonstrate the validity of the PoultryBot concept and the possibility of autonomous floor egg collection in commercial poultry houses. Furthermore, they indicate that the application of smart autonomous vehicles in dense animal environments is feasible.
Transfer learning for the classification of sugar beet and volunteer potato under field conditions
Suh, Hyun K. ; IJsselmuiden, Joris ; Hofstee, Jan W. ; Henten, Eldert J. van - \ 2018
Biosystems Engineering 174 (2018). - ISSN 1537-5110 - p. 50 - 65.
Automated weed control - Convolutional neural network - Deep learning - Transfer learning - Weed classification
Classification of weeds amongst cash crops is a core procedure in automated weed control. Addressing volunteer potato control in sugar beet, the aim in the EU SmartBot project was to control more than 95% of volunteer potatoes while ensuring that less than 5% of sugar beet plants were subjected to undesired control. A promising way to meet these requirements is deep learning. Training an entire network from scratch, however, requires a large dataset and a substantial amount of time. In this situation, transfer learning can be a promising solution. This study first evaluates a transfer-learning procedure with three different implementations of AlexNet and then assesses the performance difference amongst six network architectures: AlexNet, VGG-19, GoogLeNet, ResNet-50, ResNet-101 and Inception-v3. All nets had been pre-trained on the ImageNet dataset. These nets were used to classify sugar beet and volunteer potato images taken under varying ambient light conditions in agricultural environments. The highest classification accuracy for the different implementations of AlexNet was 98.0%, obtained with an AlexNet architecture modified to generate binary output. Comparing the different networks, the highest classification accuracy was 98.7%, obtained with VGG-19 modified to generate binary output. Transfer learning proved to be effective and showed robust performance with plant images acquired in different periods of various years on two soil types. All scenarios and pre-trained networks were feasible for real-time applications (classification time < 0.1 s). Classification is only one step in weed detection, however, and a complete pipeline for weed detection may potentially reduce the overall performance.
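The core idea of the transfer-learning procedure — keep the pre-trained feature extractor frozen and retrain only a binary output head — can be sketched with a toy stand-in for the network. The feature map, data, and hyperparameters below are invented for illustration; they are not the paper's AlexNet/VGG setup:

```python
import math

# Frozen "pre-trained" feature extractor: a fixed nonlinear map standing in
# for the ImageNet-trained convolutional layers (its weights are NOT updated).
def features(x):
    return [math.tanh(x[0] + x[1]), math.tanh(x[0] - x[1]), 1.0]  # incl. bias

def train_head(data, lr=0.5, epochs=200):
    """Train only the binary (logistic) output head by gradient descent."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            f = features(x)
            p = 1.0 / (1.0 + math.exp(-sum(wi * fi for wi, fi in zip(w, f))))
            g = p - y  # gradient of the cross-entropy loss w.r.t. the logit
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
    return w

# Toy training set: class 1 = "volunteer potato", class 0 = "sugar beet"
data = [((0.9, 0.8), 1), ((0.8, 1.0), 1),
        ((-0.9, -0.7), 0), ((-1.0, -0.8), 0)]
w = train_head(data)

def predict(x):
    """Classify a new sample with the frozen extractor plus trained head."""
    return 1 if sum(wi * fi for wi, fi in zip(w, features(x))) > 0 else 0
```

Because only the small head is trained, far less labelled data and training time are needed than for training the whole network from scratch, which is the argument the abstract makes for transfer learning.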
Sugar beet and volunteer potato classification using Bag-of-Visual-Words model, Scale-Invariant Feature Transform, or Speeded Up Robust Feature descriptors and crop row information
Suh, Hyun K. ; Hofstee, Jan Willem ; IJsselmuiden, Joris ; Henten, Eldert J. van - \ 2018
Biosystems Engineering 166 (2018). - ISSN 1537-5110 - p. 210 - 226.
Bag-of-Visual-Words - Posterior probability - SIFT - SURF - Weed classification
One of the most important steps in vision-based weed detection systems is the classification of weeds growing amongst crops. In the EU SmartBot project it was required to effectively control more than 95% of volunteer potatoes and to ensure less than 5% damage to sugar beet. Classification features such as colour, shape and texture have been used individually or in combination in classification studies, but they have proved unable to reach the required classification accuracy under natural and varying daylight conditions. A classification algorithm was developed using a Bag-of-Visual-Words (BoVW) model based on Scale-Invariant Feature Transform (SIFT) or Speeded Up Robust Feature (SURF) features combined with crop row information in the form of the Out-of-Row Regional Index (ORRI). The highest classification accuracy (96.5%, with zero false negatives) was obtained using SIFT and ORRI with a Support Vector Machine (SVM), which is considerably better than previously reported results, although its 7% false-positive rate deviated from the requirements. The average classification time of 0.10–0.11 s met the real-time requirements. The SIFT descriptor showed better classification accuracy than SURF, but classification time did not vary significantly. Adding location information (ORRI) significantly improved overall classification accuracy. SVM showed better classification performance than random forest and neural network classifiers. The proposed approach proved its potential under varying natural light conditions, but implementing a practical system, including vegetation segmentation and weed removal, may reduce the overall performance, and more research is needed.
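At its core, the BoVW representation assigns each local descriptor (SIFT or SURF in the paper) to its nearest codebook centre and histograms the assignments; a classifier such as an SVM is then trained on these histograms, optionally extended with row-position information like the ORRI. A minimal sketch with a toy 2-D codebook (real SIFT descriptors are 128-dimensional and the codebook is learned by clustering):

```python
def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and return
    a normalised word-count histogram: the BoVW image representation."""
    hist = [0] * len(codebook)
    for d in descriptors:
        # Squared Euclidean distance to every codebook centre
        dists = [sum((di - ci) ** 2 for di, ci in zip(d, c)) for c in codebook]
        hist[dists.index(min(dists))] += 1
    n = sum(hist) or 1
    return [h / n for h in hist]

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # toy 3-word vocabulary
descriptors = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.9), (0.05, 0.95)]
hist = bovw_histogram(descriptors, codebook)
```

The resulting fixed-length histogram is what makes variable numbers of local features usable as input to a standard classifier.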
Object discrimination in poultry housing using spectral reflectivity
Vroegindeweij, Bastiaan A. ; Hell, Steven van; IJsselmuiden, Joris M.M. ; Henten, Eldert J. van - \ 2018
Biosystems Engineering 167 (2018). - ISSN 1537-5110 - p. 99 - 113.
To handle surrounding objects, autonomous poultry house robots need to discriminate between the various types of objects present in the poultry house. A simple and robust method for image pixel classification based on spectral reflectance properties is presented. The four object categories most relevant for the autonomous robot PoultryBot are eggs, hens, housing elements and litter. Spectral reflectance distributions were measured between 400 and 1000 nm, and based on these spectral responses the wavelength band with the lowest overlap between all object categories was identified. This band was found around 467 nm, with an overlap of 16% for hens vs. eggs, 12% for housing vs. litter, and less for other combinations. Subsequently, images were captured in a commercial poultry house using a standard monochrome camera and a band-pass filter centred around 470 nm. In 87 images, intensity thresholds were applied to classify each pixel into one of the four categories. For eggs, the required 80% of correctly classified pixels was almost reached, with 79.9% of the pixels classified correctly. For hens and litter, 40–50% of the pixels were classified correctly, while housing elements had lower performance (15.6%). Although the imaging setup was designed to function without artificial light, its optical properties influenced image quality and the resulting classification performance. To reduce these undesired effects on the images and to improve classification performance, artificial lighting and additional processing steps are proposed. The results indicate both the simplicity and the elegance of the method and form a suitable starting point for implementing egg detection on the robot.
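The per-pixel thresholding step described above reduces to mapping each intensity in the band-pass image onto one of the four categories via ordered intensity bands. The category names follow the paper, but the threshold values and their ordering below are hypothetical stand-ins, not the calibrated values from the study:

```python
# Hypothetical ordered (upper_bound, label) intensity bands for a ~470 nm
# band-pass image with 8-bit pixels; the real thresholds were calibrated
# from measured spectral reflectance distributions.
THRESHOLDS = [(60, "litter"), (120, "housing"), (190, "hen"), (255, "egg")]

def classify_pixel(intensity, thresholds=THRESHOLDS):
    """Classify one pixel by walking the ordered bands and returning the
    label of the first band that contains the intensity."""
    for upper, label in thresholds:
        if intensity <= upper:
            return label
    return thresholds[-1][1]  # clamp out-of-range values to the last band

# Classify a toy image row pixel by pixel
row = [12, 45, 130, 210, 250, 95]
labels = [classify_pixel(p) for p in row]
```

The appeal of the method, as the abstract notes, is exactly this simplicity: once a well-separated wavelength band is chosen, classification is a constant-time lookup per pixel.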
Data synthesis methods for semantic segmentation in agriculture : A Capsicum annuum dataset
Barth, R. ; IJsselmuiden, J. ; Hemming, J. ; Henten, E.J. van - \ 2018
Computers and Electronics in Agriculture 144 (2018). - ISSN 0168-1699 - p. 284 - 296.
3D modelling - Agriculture - Robotics - Semantic segmentation - Synthetic dataset
This paper provides synthesis methods for large-scale semantic image segmentation datasets of agricultural scenes, with the objective of bridging the gap between state-of-the-art computer vision performance and that of computer vision in the agricultural robotics domain. We propose a novel methodology to generate renders of random meshes of plants based on empirical measurements, including the automated generation of per-pixel class and depth labels for multiple plant parts. A running example is given for Capsicum annuum (sweet or bell pepper) in a high-tech greenhouse. A synthetic dataset of 10,500 images was rendered with Blender, using scenes with 42 procedurally generated plant models with randomised plant parameters. These parameters were based on 21 empirically measured plant properties at 115 positions on 15 plant stems. Fruit models were obtained by 3D scanning, and plant part textures were gathered photographically. As a reference dataset for modelling and for evaluating segmentation performance, 750 empirical images of 50 plants were collected in a greenhouse from multiple angles and distances, using the image acquisition hardware of a sweet pepper harvesting robot prototype. We hypothesised high similarity between synthetic and empirical images, which we showed by analysing and comparing both sets qualitatively and quantitatively. The sets and models are publicly released with the intention of allowing performance comparisons between agricultural computer vision methods, obtaining feedback for modelling improvements, and further validating the usability of synthetic bootstrapping and empirical fine-tuning. Finally, we provide a brief perspective on our hypothesis that related synthetic-dataset bootstrapping and empirical fine-tuning can be used for improved learning.
Optimising Realism of Synthetic Agricultural Images using Cycle Generative Adversarial Networks
Barth, R. ; IJsselmuiden, J.M.M. ; Hemming, J. ; Henten, E.J. van - \ 2017
In: Proceedings of the IEEE IROS workshop on Agricultural Robotics / Kounalakis, Tsampikos, van Evert, Frits, Ball, David Michael, Kootstra, Gert, Nalpantidis, Lazaros, Wageningen : Wageningen University & Research - p. 18 - 22.
A bottleneck of state-of-the-art machine learning methods, e.g. deep learning, for plant part image segmentation in agricultural robotics is the requirement for large manually annotated datasets. As a solution, large synthetic datasets including ground truth can be rendered that realistically reflect the empirical situation. However, a dissimilarity gap can remain between synthetic and empirical data due to incomplete manual modelling. This paper contributes to closing this gap by optimising the realism of synthetic agricultural images using unsupervised cycle generative adversarial networks, enabling unpaired image-to-image translation from the synthetic to the empirical domain and vice versa. For this purpose, the Capsicum annuum (sweet or bell pepper) dataset was used, containing 10,500 synthetic and 50 empirical annotated images; additionally, 225 unlabelled empirical images were used. We hypothesised that the similarity of the synthetic images to the empirical images increases qualitatively and quantitatively when translated to the empirical domain, and we investigated the effect of the translation on color, local texture and morphology. Results showed an increase in mean class color distribution correlation with the empirical dataset from 0.62 before to 0.90 after translation of the synthetic dataset. Qualitatively, synthetic images translate very well in local features such as color, illumination scattering and texture. However, global features like plant morphology appeared not to be translatable.
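The reported class color distribution correlation (0.62 before vs. 0.90 after translation) suggests a Pearson-style comparison of binned color histograms per class. The paper's exact computation is not given in the abstract, so the following is only a plausible sketch with invented histograms:

```python
import math

def pearson(a, b):
    """Pearson correlation between two equally-binned color histograms;
    one plausible way to quantify color-distribution similarity."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Invented 4-bin color histograms for one class in each domain
empirical = [0.10, 0.30, 0.40, 0.20]
synthetic = [0.20, 0.25, 0.35, 0.20]
r = pearson(empirical, synthetic)
```

A correlation near 1 would indicate that the translated synthetic images reproduce the empirical per-class color distribution closely.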
Monitoring and mapping with robot swarms for agricultural applications
Albani, Dario ; Ijsselmuiden, Joris ; Haken, Ramon ; Trianni, Vito - \ 2017
In: 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance, AVSS 2017. - Institute of Electrical and Electronics Engineers Inc. - ISBN 9781538629390
Robotics is expected to play a major role in the agricultural domain, and multi-robot systems and collaborative approaches are often mentioned as potential solutions to improve efficiency and system robustness. Among multi-robot approaches, swarm robotics stresses aspects like flexibility, scalability and robustness in solving complex tasks, and is considered very relevant for precision farming and large-scale agricultural applications. However, swarm robotics research is still confined to the lab, and no application in the field is currently available. In this paper, we describe a roadmap to bring swarm robotics to the field within the domain of weed control. This roadmap is implemented within the experiment SAGA, funded within the context of the ECHORD++ EU project. Together with the experiment concept, we introduce baseline results for the target scenario of monitoring and mapping weeds in a field by means of a swarm of UAVs.
Synthetic bootstrapping of convolutional neural networks for semantic plant part segmentation
Barth, R. ; IJsselmuiden, J. ; Hemming, J. ; Henten, E.J. Van - \ 2017
Computers and Electronics in Agriculture (2017). - ISSN 0168-1699
Big data - Bootstrapping - Computer vision - Semantic segmentation - Synthetic dataset
A current bottleneck of state-of-the-art machine learning methods for image segmentation in agriculture, e.g. convolutional neural networks (CNNs), is the requirement for large manually annotated datasets at the per-pixel level. In this paper, we investigated how related synthetic images can be used to bootstrap CNNs for successful learning, as compared to other learning strategies. We hypothesise that a small manually annotated empirical dataset is sufficient for fine-tuning a synthetically bootstrapped CNN. Furthermore, we investigated (i) multiple deep learning architectures, (ii) the correlation between synthetic and empirical dataset size and part segmentation performance, (iii) the effect of post-processing using conditional random fields (CRFs) and (iv) the generalisation performance on other related datasets. To this end, we performed 7 experiments using the Capsicum annuum (bell or sweet pepper) dataset, containing 50 empirical and 10,500 synthetic images with 7 pixel-level annotated part classes. The results confirmed our hypothesis: only 30 empirical images were required to obtain the highest performance on all 7 classes (mean IOU = 0.40) when a CNN was bootstrapped on related synthetic data. Furthermore, we found optimal empirical performance when a VGG-16 network was modified to include à trous spatial pyramid pooling. Adding CRF post-processing only improved performance on the synthetic data. Training binary classifiers did not improve results. We found a positive correlation between dataset size and performance; for the synthetic dataset, learning stabilises around 3000 images. Generalisation to other related datasets proved possible.
Stem detector for crops in a high-wire cultivation system
Soetens, Robin ; Molengraft, M.J. van de; Dries, S. van den; Bruyninckx, Herman ; Ijsselmuiden, J.M.M. ; Henten, E.J. van - \ 2016
Patent number: WO2016184966, granted: 2016-11-24.
A plant detection device is provided that includes a robotic arm with a gripping element consisting of first and second curved grippers with opposing concave surfaces that move between open and closed states; the arm moves and vibrates the gripping element. A proximity force sensor disposed on the gripper outputs to a computer a measurement signal of the force between the gripping element and an outgrowth from the stem of a plant under test. A force and frequency sensor, oriented orthogonally to the proximity force sensor, outputs a gripping-force measurement and a frequency-response measurement of the stem of the plant under test to the computer, and the computer moves the gripping and vibrating arm according to the sensor outputs.
Robotzwermen gaan onkruid te lijf [Robot swarms take on weeds]
IJsselmuiden, Joris - \ 2016
Probabilistic localisation in repetitive environments : Estimating a robot's position in an aviary poultry house
Vroegindeweij, Bastiaan A. ; IJsselmuiden, Joris ; Henten, Eldert J. van - \ 2016
Computers and Electronics in Agriculture 124 (2016). - ISSN 0168-1699 - p. 303 - 317.
Mobile robotics - Parameter search - Performance assessment - Poultry housing - Probabilistic localisation
One of the problems in loose housing systems for laying hens is the laying of eggs on the floor, which then need to be collected manually. In previous work, PoultryBot was presented to assist in this and other tasks. Here, probabilistic localisation with a particle filter is evaluated for use inside poultry houses. A poultry house is a challenging environment because it is dense, with narrow static objects and many moving animals. Several methods and options were implemented and tested on data obtained with PoultryBot in a commercial poultry house. Although no animals were present, the localisation problem remains challenging because of the repetitive nature of the poultry house interior, with its many narrow obstacles. Different parameter configurations were systematically evaluated, based on the accuracy and applicability of the results. Estimated paths were quantitatively evaluated based on the Euclidean distance to a ground truth determined with the help of a total station. The presented system reached an accuracy of 0.37 m for 95% of the time, with a mean error of 0.2 m, making it suitable for localising PoultryBot in its future application.
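The evaluation described above — per-timestep Euclidean distance between the estimated path and a total-station ground truth, summarised as a mean error and a 95th-percentile accuracy — can be sketched as follows. The paths are toy data, not the study's measurements:

```python
import math

def localisation_errors(estimated, ground_truth):
    """Compute per-timestep Euclidean errors between an estimated 2D path
    and a ground-truth path, returning the mean error and the error bound
    that holds for 95% of the timesteps."""
    errs = sorted(math.dist(e, g) for e, g in zip(estimated, ground_truth))
    mean = sum(errs) / len(errs)
    # 95th percentile as the smallest error covering >= 95% of samples
    p95 = errs[min(len(errs) - 1, math.ceil(0.95 * len(errs)) - 1)]
    return mean, p95

est = [(0.0, 0.1), (1.0, 1.2), (2.1, 2.0), (3.0, 3.3)]  # estimated positions
gt  = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]  # total-station truth
mean_err, p95_err = localisation_errors(est, gt)
```

Reporting both statistics, as the paper does (0.2 m mean, 0.37 m for 95% of the time), separates typical accuracy from worst-case behaviour.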
De agrofoodsector heeft robots hard nodig [The agrifood sector urgently needs robots] : White paper
Beulens, Adrie ; Henten, Eldert van; IJsselmuiden, Joris ; Jongebloed, Peter ; Kampers, Frans ; Kooistra, Lammert ; Kootstra, Gert ; Maas, Jelle ; Meijer-Michielsen, Desirée ; Nieuwenhuizen, Jeroen van den; Pekkeriet, Erik ; Zedde, Rick van der - \ 2015
Wageningen : Wageningen UR - 17
Interaction analysis through fuzzy temporal logic : Extensions for clustering and parameter learning
Ijsselmuiden, Joris ; Dornheim, Johannes - \ 2015
In: AVSS 2015 - 12th IEEE International Conference on Advanced Video and Signal Based Surveillance. - Institute of Electrical and Electronics Engineers Inc. - ISBN 9781467376327
Adaptation models - Clustering algorithms - Cognition - Measurement - Noise - Semantics - Spatiotemporal phenomena
Interaction analysis is defined as the generation of semantic descriptions from machine perception. This can be achieved through a combination of fuzzy metric temporal logic (FMTL) and situation graph trees (SGTs). We extended the FMTL/SGT framework with modules for clustering and parameter learning and we showed their advantages. The contributions of this paper are 1) the combination of FMTL/SGT reasoning with a customized clustering algorithm, 2) a method for learning FMTL rule parameters, 3) a new FMTL/SGT model that implements some powerful fuzzy spatiotemporal concepts, and 4) evaluation of this system in a crisis response control room setting.
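The FMTL machinery in the paper is far richer (metric temporal quantifiers, learned rule parameters, SGT traversal), but the fuzzy connectives and temporal operators it builds on can be sketched as follows. The predicates and truth degrees are invented for illustration:

```python
# Fuzzy logic connectives (Zadeh semantics) over truth degrees in [0, 1]
def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

def f_not(a):
    return 1.0 - a

def eventually(degrees):
    """Temporal 'eventually': highest degree at which the predicate held
    anywhere in the time window."""
    return max(degrees)

def always(degrees):
    """Temporal 'always': lowest degree across the time window."""
    return min(degrees)

# Degree to which two fuzzy predicates held over five timesteps
closeness = [0.1, 0.4, 0.9, 0.8, 0.2]  # "two people are close"
talking   = [0.0, 0.5, 0.8, 0.9, 0.1]  # "they are talking"

# "At some point, they were close AND talking" -> degree of interaction
interaction = eventually([f_and(c, t) for c, t in zip(closeness, talking)])
```

Rules of this shape, organised in situation graph trees, are what turn noisy machine-perception degrees into semantic descriptions of group behaviour.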
Object segmentation in poultry housings using spectral reflectivity
Vroegindeweij, B.A. ; Hell, Steven van; Ijsselmuiden, J.M.M. ; Henten, E.J. van - \ 2015
- 6 p.
poultry housing - object segmentation - spectral reflectivity
Fuzzy temporal logic, flexible methods for interaction analysis
IJsselmuiden, Joris - \ 2015
Journal of Ambient Intelligence and Smart Environments 7 (2015)3. - ISSN 1876-1364 - p. 391 - 392.
fuzzy metric temporal logic - group behavior - Interaction analysis - situation graph trees
On July 17, 2014 in Karlsruhe, Germany, Joris IJsselmuiden successfully defended his PhD thesis entitled "Interaction analysis in smart work environments through fuzzy temporal logic". The examination committee consisted of Rainer Stiefelhagen, Jürgen Beyerer, Michael Beigl, Dorothea Wagner, Oliver Hummel, and Peter H. Schmitt (Fig. 1). The main publications associated with this PhD thesis are [2-9].
Look mum, no hands!
IJsselmuiden, Joris ; Vroegindeweij, Bastiaan - \ 2015
What a strange sight it will be: a vehicle without a driver. A bus that travels on public roads and across the campus with only passengers - how is that possible? Is it safe? And who can use this bus, or rather, who dares?
Kijk eens, zonder handen! [Look, no hands!]
IJsselmuiden, Joris ; Vroegindeweij, Bastiaan - \ 2015
The shuttles (there will be two of them) that will perform this trick go by the name WEpod. WE stands for Wageningen and Ede, the two endpoints of one of the shuttles' routes; the other will drive loops around the campus. The initiator, the province of Gelderland, proudly proclaims it a world first. According to experts, electric and autonomous driving is the future. Experiments with self-steering or autonomous cars are currently tumbling over one another. Driving without hands is hot. But all of those cars have a steering wheel: in essence it is an assistive technology for the driver, who can take over at any moment. The WEpod is different.