Food systems for healthier diets in Ethiopia : toward a research agenda
Gebru, Mestawet ; Remans, Roseline ; Brouwer, Inge ; Baye, Kaleab ; Melesse, M.B. ; Covic, Namukolo ; Habtamu, Fekadu ; Abay, Alem Hadera ; Hailu, Tesfaye ; Hirvonen, Kalle ; Kassaye, Tarik ; Kennedy, Gina ; Lachat, Carl ; Lemma, Ferew ; McDermott, John ; Minten, Bart ; Moges, Tibebu ; Reta, Fidaku ; Tadesse, Eneye ; Taye, Tamene ; Truebswasser, Ursula ; Berg, Marrit van den - \ 2018
International Food Policy Research Institute (IFPRI) (IFPRI Discussion Paper 01720) - 51 p.
Ethiopia - food systems - dietary diversity - nutrition
While dietary energy supply has improved, diets in Ethiopia remain low in diversity and provide insufficient amounts of protein, vitamin A, and zinc. Poor dietary quality contributes to the multiple burden of malnutrition in the country, with 38% stunting among children under five years and 24% anemia and 8% overweight among adult women.
Recent Ethiopian government policies and programs call for sustainable food systems approaches aimed at achieving better nutrition for all. Such food systems approaches imply actions that include but also go beyond agriculture to consider the many processes and actors involved in food production, processing, storage, transportation, trade, transformation, retailing, and consumption.
In this paper, we identify research streams to support the operationalizing of such food systems approaches in Ethiopia. To this end, we engaged with stakeholders, reviewed the literature, and applied a food systems framework to research priorities in the Ethiopian context. We develop an initial food systems profile of Ethiopia and identify 25 priority research questions, categorized into three main areas. A first area focuses on diagnosis and foresight research, for example, to further characterize dietary gaps and transitions in the context of the variety of Ethiopian settings, and to understand and anticipate which food system dynamics contribute positively or negatively to those trends. A second area includes implementation research and focuses on building a base of evidence on the dietary impact of combined demand-, market-, and supply-side interventions/innovations that focus on nonstaples; potential trade-offs in terms of economic, social, and environmental outcomes; and interactions between food system actors. A third area focuses on institutional and policy processes and explores enabling factors and private or public anchors that can take food systems approaches for healthier diets to a regional or national scale.
The paper contextualizes the case of Ethiopia within global food systems thinking and thereby aims to stimulate in- and cross-country learning.
Agricultural extension, technology adoption and household food security : evidence from DRC
Santos Rocha, Jozimo - \ 2017
Wageningen University. Promotor(en): E.H. Bulte, co-promotor(en): M.M. van den Berg. - Wageningen : Wageningen University - ISBN 9789463434485 - 231
agricultural extension - technology - adoption - food security - households - development economics - agricultural production - knowledge transfer - congo democratic republic - landbouwvoorlichting - technologie - adoptie - voedselzekerheid - huishoudens - ontwikkelingseconomie - landbouwproductie - kennisoverdracht - democratische republiek kongo
In this thesis, I use experimental and quasi-experimental data from 25 villages and a total of 1,105 farmers from eastern DRC to investigate the relationship among agricultural training, the adoption of agricultural technologies, crop productivity, and household food insecurity and dietary diversity. I present evidence that contributes to narrow the gap in the literature on the role of input subsidies fostering small-scale farmers' uptake of productivity-enhancing technologies, how farmer field school and farmer-to-farmer trainings affect the adoption of agricultural technologies, how F2F training may reduce the costs of FFS implementation, how adoption materializes on yields of food crops, and how training through the adoption of improved agricultural technologies impacts household food insecurity and the diet diversification of target households.
As a complement to econometric evidence and in order to understand the main findings, I also discuss behavioral features and farmer driven initiatives which somehow condition these impacts. Throughout the four main chapters, I identify practical implications that are highly important for the design and implementation of new programs and policies aimed to address agricultural productivity issues and reduce household food insecurity. In Chapter 1 I develop a general introduction to the research which discusses the evolution of agricultural extension in the last few decades, and describe FFS and F2F training methodologies. Chapter 2 provides a detailed description of the project intervention, technologies promoted, research settings and the data collection process. In Chapter 3, I report the results of an experimental study that analyses the impact of one-shot input starter packs on the adoption of productivity-enhancing complementary practices, which have the potential to maximize the impact of starter pack inputs. Additionally, I assess the levels of persistence on farmers’ use of improved crop seeds which are included in the starter packs. Overall, I find no evidence of starter packs’ impact on small-scale farmers’ adoption of productivity-enhancing technologies. Similarly, the levels of persistence regarding the use of seeds following the delivery of starter packs were not significant. These results are consistent with studies that have found minimal or no persistence on the use of inputs following the provision of subsidies, including Duflo, Kremer et al. (2011). The limited impact that starter packs had on yields in the first year may logically explain that farmers refrained from using improved seeds subsequently because the inputs are not economically attractive.
Chapter 4 studies the effectiveness of knowledge transmission from farmers trained in FFS through farmer-to-farmer training (F2F), which could potentially result in lower extension costs and higher impacts. I find that FFS training has a higher impact than F2F training in the first period, but the magnitude of the treatment effect in the second period is not statistically different between the two training methods. I argue that the dissemination of technologies promoted in FFS groups can well be formalized through farmer-to-farmer deliberate training attached to the FFS approach. Given the low costs of F2F training compared to FFS, the introduction of F2F training may substantially alleviate a major constraint to the large-scale introduction of FFS as a training method, its high costs.
In Chapter 5, I study the impact of farmer’s participation in FFS and F2F training on small-scale agricultural productivity. A multi-crop yield-index and the yields of cassava were used as impact indicators. The results indicate that both FFS and F2F trainings contribute to a significant increase in farmers’ yields, especially in the second period when the magnitude of the effect substantially increased. We also learned that the effect size does not differ between the two training approaches in neither period, suggesting that F2F communications are a suitable alternative or complement to FFS training. While the chapter was unable to confirm if training materializes in higher yields through technology adoption, I argue that in the context of the sample the adoption of productivity-enhancing practices and inputs are likely the most important impact mechanism.
I also study the relationship between agricultural training, the adoption of improved technologies and household food insecurity. I find that farmers’ participation in agricultural trainings has a positive effect, through the adoption of improved technologies, on improvements in household dietary diversity (HDDS). Nonetheless, the impact on household access to food (HFIAS) is less evident. These results suggest that FFS/F2F training can well reduce household food insecurity, which is mostly achieved through the adoption of improved agricultural technologies. Yet, there are farm and household specific factors which constrain how training impacts technology adoption and how adoption affect household food insecurity and diet diversification. In Chapter 7, I synthesize the results of the four main chapters and articulate the sequence of results from training to adoption to productivity to food security.
Integrated Water Resources Management: contrasting principles, policy, and practice, Awash River Basin, Ethiopia
Mersha, A. ; Fraiture, C.M.S. de; Mehari, Alem ; Masih, I. ; Alamirew, T. - \ 2016
Water Policy 18 (2016)2. - ISSN 1366-7017 - p. 335 - 354.
Integrated Water Resources Management (IWRM) has been a dominant paradigm for water sector reform worldwide over the past two decades. Ethiopia, among early adopters, has developed a water policy, legislations, and strategy per IWRM core principles. However, considerable constraints are still in its way of realization. This paper investigates the central challenges facing IWRM implementation in the Awash Basin analyzing the discrepancy between IWRM principles, the approach followed in Ethiopia and its practice in the Awash Basin. A decade and a half since its adoption, the Ethiopian IWRM still lacks a well-organized and robust legal system for implementation. Unclear and overlapping institutional competencies as well as a low level of stakeholders’ awareness on policy contents and specific mandates of implementing institutions have prevented the Basin Authority from fully exercising its role as the prime institute for basin level water management. As a result, coordination between stakeholders, a central element of the IWRM concept, is lacking. Insufficient management instruments and planning tools for the operational function of IWRM are also among the major hurdles in the process. This calls for rethinking and action on key elements of the IWRM approach to tackle the implementation challenges.
Burden of diarrhea in the eastern mediterranean region, 1990-2013 : Findings from the global burden of disease study 2013
Khalil, Ibrahim ; Colombara, Danny V. ; Forouzanfar, Mohammad Hossein ; Troeger, Christopher ; Daoud, Farah ; Moradi-Lakeh, Maziar ; Bcheraoui, Charbel El; Rao, Puja C. ; Afshin, Ashkan ; Charara, Raghid ; Abate, Kalkidan Hassen ; Abd El Razek, Mohammed Magdy ; Abd-Allah, Foad ; Abu-Elyazeed, Remon ; Kiadaliri, Aliasghar Ahmad ; Akanda, Ali Shafqat ; Akseer, Nadia ; Alam, Khurshid ; Alasfoor, Deena ; Ali, Raghib ; AlMazroa, Mohammad A. ; Alomari, Mahmoud A. ; Salem Al-Raddadi, Rajaa Mohammad ; Alsharif, Ubai ; Alsowaidi, Shirina ; Altirkawi, Khalid A. ; Alvis-Guzman, Nelson ; Ammar, Walid ; Antonio, Carl Abelardo T. ; Asayesh, Hamid ; Asghar, Rana Jawad ; Atique, Suleman ; Awasthi, Ashish ; Bacha, Umar ; Badawi, Alaa ; Barac, Aleksandra ; Bedi, Neeraj ; Bekele, Tolesa ; Bensenor, Isabela M. ; Betsu, Balem Demtsu ; Bhutta, Zulfiqar ; Abdulhak, Aref A. Bin; Butt, Zahid A. ; Danawi, Hadi ; Dubey, Manisha ; Endries, Aman Yesuf ; Faghmous, Imad M.D.A. ; Farid, Talha ; Farvid, Maryam S. ; Farzadfar, Farshad ; Fereshtehnejad, Seyed Mohammad ; Fischer, Florian ; Anderson Fitchett, Joseph Robert ; Gibney, Katherine B. ; Mohamed Ginawi, Ibrahim Abdelmageem ; Gishu, Melkamu Dedefo ; Gugnani, Harish Chander ; Gupta, Rahul ; Hailu, Gessessew Bugssa ; Hamadeh, Randah Ribhi ; Hamidi, Samer ; Harb, Hilda L. ; Hedayati, Mohammad T. ; Hsairi, Mohamed ; Husseini, Abdullatif ; Jahanmehr, Nader ; Javanbakht, Mehdi ; Beyene, Tariku ; Jonas, Jost B. ; Kasaeian, Amir ; Khader, Yousef Saleh ; Khan, Abdur Rahman ; Khan, Ejaz Ahmad ; Khan, Gulfaraz ; Khoja, Tawfik Ahmed Muthafer ; Kinfu, Yohannes ; Kissoon, Niranjan ; Koyanagi, Ai ; Lal, Aparna ; Abdul Latif, Asma Abdul ; Lunevicius, Raimundas ; Abd El Razek, Hassan Magdy ; Majeed, Azeem ; Malekzadeh, Reza ; Mehari, Alem ; Mekonnen, Alemayehu B. ; Melaku, Yohannes Adama ; Memish, Ziad A. ; Mendoza, Walter ; Misganaw, Awoke ; Ibrahim Mohamed, Layla Abdalla ; Nachega, Jean B. 
; Nguyen, Quyen Le ; Nisar, Muhammad Imran ; Peprah, Emmanuel Kwame ; Platts-Mills, James A. ; Pourmalek, Farshad ; Qorbani, Mostafa ; Rafay, Anwar ; Rahimi-Movaghar, Vafa ; Ur Rahman, Sajjad ; Rai, Rajesh Kumar ; Rana, Saleem M. ; Ranabhat, Chhabi L. ; Rao, Sowmya R. ; Refaat, Amany H. ; Riddle, Mark ; Roshandel, Gholamreza ; Ruhago, George Mugambage ; Saleh, Muhammad Muhammad ; Sanabria, Juan R. ; Sawhney, Monika ; Sepanlou, Sadaf G. ; Setegn, Tesfaye ; Sliwa, Karen ; Sreeramareddy, Chandrashekhar T. ; Sykes, Bryan L. ; Tavakkoli, Mohammad ; Tedla, Bemnet Amare ; Terkawi, Abdullah S. ; Ukwaja, Kingsley ; Uthman, Olalekan A. ; Westerman, Ronny ; Wubshet, Mamo ; Yenesew, Muluken A. ; Yonemoto, Naohiro ; Younis, Mustafa Z. ; Zaidi, Zoubida ; Sayed Zaki, Maysaa El; Rabeeah, Abdullah A. Al; Wang, Haidong ; Naghavi, Mohsen ; Vos, Theo ; Lopez, Alan D. ; Murray, Christopher J.L. ; Mokdad, Ali H. - \ 2016
American Journal of Tropical Medicine and Hygiene 95 (2016)6. - ISSN 0002-9637 - p. 1319 - 1329.
Diarrheal diseases (DD) are leading causes of disease burden, death, and disability, especially in children in low-income settings. DD can also impact a child's potential livelihood through stunted physical growth, cognitive impairment, and other sequelae. As part of the Global Burden of Disease Study, we estimated DD burden, and the burden attributable to specific risk factors and particular etiologies, in the Eastern Mediterranean Region (EMR) between 1990 and 2013. For both sexes and all ages, we calculated disability-adjusted life years (DALYs), which are the sum of years of life lost and years lived with disability. We estimate that over 125,000 deaths (3.6% of total deaths) were due to DD in the EMR in 2013, with a greater burden of DD in low-and middle-income countries. Diarrhea deaths per 100,000 children under 5 years of age ranged from one (95% uncertainty interval [UI] = 0-1) in Bahrain and Oman to 471 (95% UI = 245-763) in Somalia. The pattern for diarrhea DALYs among those under 5 years of age closely followed that for diarrheal deaths. DALYs per 100,000 ranged from 739 (95% UI = 520-989) in Syria to 40,869 (95% UI = 21,540-65,823) in Somalia. Our results highlighted a highly inequitable burden of DD in EMR, mainly driven by the lack of access to proper resources such as water and sanitation. Our findings will guide preventive and treatment interventions which are based on evidence and which follow the ultimate goal of reducing the DD burden.
Modelling stable atmospheric boundary layers over snow
Sterk, H.A.M. - \ 2015
Wageningen University. Promotor(en): Bert Holtslag, co-promotor(en): Gert-Jan Steeneveld. - Wageningen : Wageningen University - ISBN 9789462572263 - 189
sneeuw - atmosferische grenslaag - modelleren - modellen - turbulentie - weersvoorspelling - snow - atmospheric boundary-layer - modeling - models - turbulence - weather forecasting
Modelling Stable Atmospheric Boundary Layers over Snow
Wageningen, 29th of April, 2015
The emphasis of this thesis is on the understanding and forecasting of the Stable Boundary Layer (SBL) over snow-covered surfaces. SBLs typically form at night and in polar regions (especially in winter), when radiative cooling at the surface causes a cooler surface than the overlying atmosphere and a stable stratification develops. This means that potential temperature increases with height and buoyancy effects suppress turbulence. Turbulence is then dominated by mechanical origin. If sufficient wind shear can be maintained, turbulence remains active, otherwise it will cease.
A proper representation of SBLs in numerical weather prediction models is critical, since many parties rely on these forecasts. For example, weather prediction is needed for wind energy resources, agricultural purposes, air-quality studies, and aviation and road traffic. Knowledge on SBLs is also essential for climate modelling. In the Arctic regions, climate change is most pronounced due to stronger changes in near-surface temperature compared to other latitudes. Though this `Arctic amplification' is not yet fully understood, possible responsible processes are the ice-albedo feedback, alterations in cloud cover and water vapour, different atmospheric and oceanic circulations, and the weak vertical mixing in the lower atmosphere. However, many interactions exist between these processes. With positive feedbacks, changes are even further enhanced. This could have worldwide consequences, i.e. due to affected atmospheric circulations and sea level rise with Greenland's melting ice-sheets.
Scientists try to explain the observed climate changes, as well as provide outlooks for future changes in climate and weather. However, the understanding is hampered by the fact that many model output variables (e.g. regarding the 2 m temperature) vary substantially between models on the one hand, and from observations on the other hand. Modelling the SBL remains difficult, because the physical processes at hand are represented in a simplified way, and the understanding of the processes may be incomplete. Furthermore, since processes can play a role at a very small scale, the resolutions in models may be too poor to represent the SBL correctly. Additionally, there are many different archetypes of the SBL. Turbulence can be continuous, practically absent, or intermittent, and vary in strength which affects the efficiency of the exchange of quantities horizontally and vertically.
Processes that are considered critical for the SBL evolution are e.g. turbulent mixing, radiative effects, the coupling between the atmosphere and the underlying surface, the presence of clouds or fog, subsidence, advection, gravity waves, and drainage and katabatic flows. In this thesis, the focus is on the first three processes, as these are most dominant for the evolution of the SBL (e.g. Bosveld et al., 2014b).
In Chapter 3 an idealized clear-sky case over sea-ice was studied based on the GABLS1 benchmark study (e.g. Cuxart et al., 2006), but extended by including radiative effects and thermal coupling with the surface. Hence the following research questions were posed:
Question 1: What is the variety in model outcome regarding potential temperature and wind speed profiles that can be simulated with one model by using different parametrization schemes?
Question 2: Which of the three governing processes is most critical in determining the SBL state in various wind regimes?
Question 3: Can we identify compensation mechanisms between schemes, and thus identify where possible compensating model errors may be concealed?
From the analysis with different parametrization schemes performed with the WRF single-column model (SCM, Question 1), it followed that quite different types of SBLs were found. Some schemes forecasted a somewhat better mixed potential temperature profile where stratification increased with height, while another scheme produced profiles with the strongest stratification close to the surface and stratification decreased with height. After only 9 h of simulation time, a difference in temperature of almost 2 K was found near the surface. Regarding the wind speed profile, some variation was found in the simulated low-level jet speed and height. Mainly the difference in atmospheric boundary-layer (BL) schemes which parametrize the turbulent mixing are responsible for these model output variations. A variation in long wave parametrization schemes hardly affected the model results.
Question 2 addresses the problem whether other processes than turbulent mixing may be responsible for a similar spread in model results. A sensitivity analysis was performed where for one set of reference parametrization schemes the intensity of the processes was adjusted. The relative sensitivity of the three processes for different wind regimes was analysed using `process diagrams'. In a process diagram, two physically related variables are plotted against each other, which in this case represent either a time average or a difference over time of the variable. A line connects the reference state with the state for which the process intensity is modified. By comparing the length and orientation of the lines, the relative significance of the individual processes for the different wind speeds can be studied. Overlapping line directions identify possible compensating errors.
Geostrophic wind speeds of 3, 8 and 20 m s-1 were selected representing low, medium and high wind speeds, capturing the range of wind speeds frequently occurring in the Arctic north of 75oN according to the ERA-Interim reanalysis dataset. Overall, a shift in relative importance was detected for the various wind regimes. With high geostrophic wind speeds, the model output is most sensitive to turbulent mixing. On the contrary, with low geostrophic wind speeds the model is most sensitive to the radiation and especially the snow-surface coupling. The impact of turbulent mixing is then minor, unless when mixing in both boundary layer and surface layer is adjusted. This stresses that proper linking between these two layers is essential.
Also with one set of parametrization schemes different SBL types were simulated. Potential temperature profiles were better mixed (increasing stratification with height) for high geostrophic wind speeds, and this tended to develop to profiles with the strongest stratification near the surface (decreasing stratification with height) for low geostrophic wind speeds. However, a variety in types was also found when keeping the same wind regime, but by varying the mixing strength. With enhanced mixing, the profile became better mixed, also when the reference profile showed the strongest stratification near the surface. With decreased mixing, profiles with a stronger stratification were found, again shaped with the strongest stratification near the surface. Thus a different mixing formulation has a strong impact on the vertical profiles, even when it may not necessarily strongly affect the surface variables. Therefore, it is recommended that when a model is evaluated and optimized, the vertical structure is also regarded in this process, since near-surface variables may be well represented, strong deviations aloft are still possible.
Furthermore, the process diagrams showed overlap in sensitivity to some processes. Therefore errors within the parametrizations of these processes could compensate each other and thus remain hidden (Question 3), making the model formulation possibly physically less realistic. This study did not reveal an unambiguous indication for the compensating processes regarding the various sets of variables, though overlap for single variables is seen.
This study also revealed a non-linear behaviour regarding the 2 m temperature, which is also found in observations (e.g. Lüpkes et al., 2008) and in a model study by McNider et al. (2012). Here the 2 m temperature decreased with enhanced mixing strength and increased with a lower mixing intensity. This counter-intuitive behaviour is explained by that mixing only occurs in a shallow layer close to the surface. Cold air that is mixed up by the enhanced mixing, is insufficiently compensated by the downward mixed relatively warm air. This behaviour was found mostly for low wind speeds or with decreased mixing at the medium wind regime, when the potential temperature profile showed the strongest stratification near the surface.
The study proceeds with a model evaluation against observations in low wind speed regimes. Three stably stratified cloud-free study cases with near-surface wind speeds below 5 m s-1 were selected with each a different surface: Cabauw in the Netherlands with snow over grass, Sodankylä in northern Finland with snow in a needle-leaf forest, and Halley in Antarctica with snow on an ice shelf.
Chapter 4 presents the evaluation of the WRF-3D and SCM for these cases. In this study, the WRF-3D model was used to determine the forcings, as often not all the required observations at high resolution in time and space are available. Hence the following questions were formulated:
Question 4: What is the performance of WRF in stable conditions with low wind speeds for three contrasting snow-covered sites?
Question 5: How should we prescribe the single-column model forcings, using WRF-3D?
The standard WRF-3D simulation had an incorrect representation of the snow-cover and vegetation fraction, which deteriorates the conductive heat flux, the surface temperatures and the SBL evolution. Indeed, Chapter 3 highlighted the critical role of the land-surface coupling representation. Adjusting the settings with site specific information, improved model simulations compared with the observations.
In general, the performance of WRF-3D was quite good for the selected cases, especially regarding the wind speed simulations. The temperature forecast proved to be more challenging. For Cabauw and Sodankylä, 2 m temperatures were strongly overestimated, though a better simulation was seen at higher tower levels. For Halley a better representation of the 2 m temperature was found, though aloft potential temperatures were underestimated. Hence, the three cases had an underestimated modelled temperature gradient in common.
This study also investigated how the forcing fields for the SCM should be prescribed. Model results for the three study cases all showed a significant deviation from the observed wind field without lateral forcings and time-invariant geostrophic wind speed. Including only a time-varying geostrophic wind speed did not improve the results. Prescribing additional momentum advection did have a positive impact on the modelled wind speed. The results regarding temperature, specific humidity and their stratification improved when temperature and humidity advection was also taken into account. Forcing the SCM field towards a prescribed 3D atmospheric state is not recommended, since unrealistic profiles were found below the threshold forcing height.
Having established the optimal model set-up, the SCM can be used as a tool to further study the small-scale processes for the three study cases, addressing the following questions:
Question 6: How do the model results with various process intensities compare with observations?
Question 7: Are any differences in relative process impacts found for the three contrasting sites?
Question 8: Does the model sensitivity vary between two different BL schemes?
The sensitivity analysis was performed with the WRF-SCM and repeated for two BL schemes. In general, the temperature and humidity stratifications intensified by decreasing the process strengths and hence were in better agreement with observations than the reference cases. The wind field was most sensitive to turbulent mixing, with a weaker low-level jet at a higher altitude for enhanced mixing and the opposite for less mixing, while the impact of the other processes was small. Contrary to the temperature profiles, a better agreement with wind observations was found with amplified mixing, except for Halley where results improved with reduced mixing.
Regarding the surface energy budget, the conductive heat flux was greatly overestimated at Cabauw due to an overestimated snow conductivity, while better agreements were found for the other sites. A revision of the definition for snow conductivity in the model is recommended, because rather large values were assumed for fresh snow, and indeed results improved when the coupling strength was reduced for Cabauw and Sodankylä. For Halley almost the same snow conductivity was modelled as was used to determine the observed conductive heat flux, however, then the temperature gradient through the first soil/snow layer was underestimated leaving the flux too small.
The net radiation was strongly too negative for the Cabauw and Halley case-studies. This is likely due to an underestimation of the incoming long wave radiation as part of a deficiency in the long wave radiation scheme. For all sites the sensible heat flux was overestimated, and decreased mixing improved the results. However, the eddy covariance measurements may have been made outside the constant flux layer, which hampers the model evaluation.
Though Question 6 aims to obtain understanding in which processes are most responsible for simulating model results that are in closer agreement with observations, measuring in these cold and dry circumstances is especially challenging. Furthermore, the measurements are mostly point measurements while the model grid represents a larger area, such that the measurements may be influenced by local features which are not captured in the model. These issues hinders a clear comparison of the model results with observations, and the observation uncertainties may be greater than what was represented in the process diagrams.
When comparing the process sensitivity for the different sites (Question 7), we found some distinct variations in relative process significance. The radiation impact was relatively large at Cabauw and Sodankylä where the specific humidity was higher such that a larger impact on the incoming long wave radiation can be obtained. The snow-surface coupling is more important at Halley. This is related to the higher snow cover at Halley compared to the other sites. Additionally, the conductivity of the underlying medium at Halley is set equal to that of snow. These two factors ensure that the impact of an altered snow conductivity is greater.
From the comparison of the sensitivity analyses for the two BL schemes (MYJ and YSU, Question 8), it followed that the overall direction of the sensitivity orientation is similar. However, stronger BL temperature stratifications were found with YSU, though between the surface and the first model level stronger stratifications were simulated with MYJ. This is related to the relatively high ratio of mixing in the boundary layer versus the surface layer with MYJ. Therefore the mixing in the BL is relatively more efficient and the surface layer cannot keep up the mixing to keep a smooth profile at the surface-layer / boundary-layer interface. This indicates the importance of a consistent transition between the BL and surface layer, as also pointed out by Svensson and Holtslag (2009). Furthermore, the non-linearity concerning the 2 m temperature behaviour discussed earlier is most profound with YSU, and not as obvious with MYJ due to a stronger implemented minimum diffusivity.
The results point towards the direction of focus for future research. This could be achieved by e.g. re-evaluating the snow representation, as well as investigating the long-standing problem of the underestimated long wave radiation. Additionally, the mixing seems to be too high in some of the simulations. As such, care should be taken in choosing the BL scheme and its constraints on the mixing, as these may hamper the development of the observed behaviour on non-linear near-surface temperature evolution for example.
Assessment of uncertainties in simulated European precipitation
Haren, R. van - \ 2015
Wageningen University. Promotor(en): Wilco Hazeleger, co-promotor(en): G.J. van Oldenborgh. - Wageningen : Wageningen University - ISBN 9789462572324 - 132
neerslag - simulatie - hydrologie - klimaatverandering - modellen - europa - precipitation - simulation - hydrology - climatic change - models - europe
The research presented in this thesis is aimed to understanding the changes and the simulation of precipitation in Europe. A correct representation of simulated (trends in) European precipitation is important to have confidence in projections of future changes therein. These projections are relevant for different hydrological applications. Among others, simulated changes of summer drying are often accompanied by an enhanced increase in air temperatures [Zampieri et al., 2009]. This can be expected to have large impacts on society and ecosystems, affecting, for example, water resources, agriculture and fire risk [Rowell, 2009]. Projections of changes in extreme precipitation are critical for estimates of future discharge extremes of large river basins, and changes in frequency of major flooding events [e.g. Kew et al., 2010].
Continuous light on tomato : from gene to yield
Velez Ramirez, A.I. - \ 2014
Wageningen University. Promotor(en): Harro Bouwmeester, co-promotor(en): Wim van Ieperen; Dick Vreugdenhil. - Wageningen : Wageningen University - ISBN 9789462570788 - 214
solanum lycopersicum - tomaten - licht - gewasproductie - gewasopbrengst - genen - tolerantie - lichtregiem - beschadigingen - plantenfysiologie - solanum lycopersicum - tomatoes - light - crop production - crop yield - genes - tolerance - light regime - injuries - plant physiology
Light essentially sustains all life on the Earth's surface. Plants transform light energy into chemical energy through photosynthesis. Hence, it can be anticipated that extending the daily photoperiod using artificial light results in increased plant productivity. Although this premise holds for many plant species, a limit exists. For instance, the seminal work of Arthur et al. (1930) showed that tomato plants develop leaf injuries when exposed to continuous light (CL). Many studies have investigated the physiological mechanism inducing such CL-induced injury. Although important and valuable discoveries were made over the decades, by the time the present project started a detailed and proven physiological explanation of this disorder was still missing. Here, I present the results of a 5-year effort to better understand the physiological basis of CL-induced injury in tomato and to develop the tools (genetic and conceptual) to cultivate tomatoes under CL.
After an exhaustive literature search, it was found that Daskaloff and Ognjanova (1965) reported that wild tomato species are tolerant to CL. Unfortunately, this important finding was ignored by numerous studies done after its publication. Here, we used the CL-tolerance found in wild tomatoes as a fundamental resource. Hence, the specific objectives of this thesis were to (i) better understand the physiological basis of the CL-induced injuries in tomato, (ii) identify the gene(s) responsible for CL-tolerance in wild tomato species, (iii) breed a CL-tolerant tomato line and (iv) use it to cultivate a greenhouse tomato crop under CL.
Chapter 1 describes how innovation efforts encountered the unsolved scientific enigma of the injuries that tomato plants develop when exposed to CL. The term CL-induced injury is defined, and a detailed description of the symptoms observed in this disorder is given. Additionally, an overview of the most important studies, influencing the hypotheses postulated and/or tested in this dissertation, is presented. Finally, a description and motivation of the main questions that this dissertation sought to answer are presented, alongside a short description of the strategy chosen to answer them.
Chapter 2 reviews the literature on CL-induced injury published over the last 80 years in the light of modern knowledge of plant physiology. By doing so, new hypotheses aiming to explain this disorder are postulated, in addition to the ones collected from the literature. Additionally, we highlight that CL is an essential tool for understanding the plant circadian clock, but that using CL in research has its challenges. For instance, most circadian-clock-oriented experiments are performed under CL; consequently, interactions between the circadian clock and the light signalling pathway are overlooked. This chapter has been published.
Chapter 3 explores the benefits and challenges of cultivating CL-tolerant tomato under CL. Considering that current commercial tomato varieties need six hours of darkness per day for optimal growth, photosynthesis does not take place during a quarter of the day. Hence, if tomatoes could be grown under CL, a substantial increase in production is anticipated. A simulation study is presented which shows that, if an ideal continuous-light-tolerant tomato genotype is used and no crop adaptations to CL are assumed, greenhouse tomato production could be 26% higher when supplementing light to 24 h day-1 in comparison with a photoperiod (including supplementary lighting) of only 18 h day-1. In addition, the expected changes in greenhouse energy budgets and the alterations in crop physiological responses that might arise from cultivating tomatoes under continuous light are discussed. This chapter has been published.
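As a back-of-the-envelope check (illustrative only; the 26% figure comes from the simulation study summarized above), the extra light supplied by a 24 h photoperiod relative to 18 h can be compared with the simulated yield gain:

```python
# Back-of-the-envelope check of the photoperiod numbers quoted above
# (illustrative only; the 26% yield figure comes from the cited simulation study).

photoperiod_ref = 18   # h/day, current practice incl. supplementary lighting
photoperiod_cl = 24    # h/day, continuous light

extra_light = (photoperiod_cl - photoperiod_ref) / photoperiod_ref
print(f"extra daily light: {extra_light:.0%}")   # 33% more light hours

simulated_yield_gain = 0.26   # from the simulation study in the text
print(f"yield gain per unit extra light: {simulated_yield_gain / extra_light:.2f}")
```

The response is sub-linear: a third more light yields about a quarter more production, consistent with light-use efficiency declining at high daily light integrals.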
Chapter 4 maps the locus conferring CL-tolerance in wild tomatoes to chromosome seven, and shows that its introgression into modern tomato cultivars enhances yield by 20% when grown under CL. In addition, genetic evidence, RNAseq data, silencing experiments and sequence analysis all point to the type III Light-Harvesting Chlorophyll a/b Binding protein 13 (CAB-13) gene as a major factor responsible for the tolerance. In Arabidopsis thaliana this protein is thought to have a regulatory role in balancing light harvesting by photosystems I and II. The likely mechanisms linking CAB-13 with CL-tolerance are discussed. This chapter has been published.
Chapter 5 investigates from which part of the plant CL-tolerance originates and whether this trait acts systemically. By exposing grafted plants bearing both tolerant and sensitive shoots to CL, the trait was functionally located to the shoot rather than the roots. Additionally, an increase in continuous-light tolerance was observed in sensitive plants when a continuous-light-tolerant shoot was grafted onto them. Our results show that, in order to increase yield in greenhouse tomato production by using CL, the trait should be bred into scion rather than rootstock lines.
Chapter 6 discusses the factors that differ between injurious and non-injurious light regimes. Each of these factors may potentially be responsible for triggering the injury in CL-grown tomato and was experimentally tested here. In short, these factors include (i) differences in the light spectral distribution between sunlight and artificial light, (ii) continuous signalling to the photoreceptors, (iii) constant supply of light for photosynthesis, (iv) constant photo-oxidative pressure, and (v) circadian asynchrony, a mismatch between the internal circadian clock frequency and the external light/dark cycles. The evidence presented here suggests that the continuous-light-induced injury does not result from the unnatural spectral distribution of artificial light or the continuity of the light per se. Instead, circadian asynchrony seems to be the factor inducing the injury. As the discovered diurnal fluctuations in photoinhibition sensitivity of tomato seedlings are not under circadian control, it seems that circadian asynchrony does not directly induce injury via photoinhibition, as has been proposed.
Chapter 7 investigates a possible role for phytochromes (PHY) in CL-induced injury in tomato. Mutant and transgenic tomato plants lacking or over-expressing phytochromes were exposed to CL, with and without far-red light enrichment, to test the role of individual phytochromes in the induction and/or prevention of injury. PHYA over-expression confers complete tolerance to CL regardless of the light spectrum. Under CL with low far-red content, PHYB1 and PHYB2 diminished and enhanced the injury, respectively, yet the effects were small. These results confirm that phytochrome signaling networks are involved in the injury induction under CL. The link between CAB-13 and PHYA is discussed.
Chapter 8 investigates the role of carbohydrate accumulation in the induction of CL-induced injury in tomato, using untargeted metabolomics and transcriptomics data. These data reveal a clear effect of CL on sugar metabolism and photosynthesis. A strong negative correlation of sucrose and starch with the maximum quantum efficiency of photosystem II (Fv/Fm) was found across several abnormal light/dark cycles, supporting the hypothesis that carbohydrates play an important role in CL-induced injury. I suggest that CL-induced injury in tomato is caused by a photosynthetic down-regulation showing characteristics of both cytokinin-regulated senescence and light-modulated retrograde signaling. Molecular mechanisms linking carbohydrate accumulation with photosynthetic down-regulation are discussed.
Chapter 9 provides a synthesis of the most important findings and proposes a generic model of CL-induced injury in tomato. I propose that CL-induced injury in tomato arises from retrograde signals that counteract signals derived from the cellular developmental program that promote chloroplast development, such that chloroplast development cannot be completed, resulting in the chlorotic phenotype. Finally, perspectives on what future directions to take to further elucidate the physiological basis of this trait and successfully implement it in greenhouses are presented.
Matrix modulation of the toxicity of alkenylbenzenes, studied by an integrated approach using in vitro, in vivo, and physiologically based biokinetic models
Al-Husainy, W.A.A.M. - \ 2013
Wageningen University. Promotor(en): Ivonne Rietjens; Peter van Bladeren, co-promotor(en): Ans Punt. - Wageningen : Wageningen UR - ISBN 9789461738066 - 199
methyleugenol - toxiciteit - keukenkruiden - flavonoïden - methyl eugenol - toxicity - culinary herbs - flavonoids
Alkenylbenzenes such as estragole and methyleugenol are common components of spices and herbs such as tarragon, basil, fennel, mace, allspice, star anise and anise, and of their essential oils (Smith et al., 2002). There is an interest in the safety evaluation of alkenylbenzenes because these compounds can induce hepatic tumours in rodents when dosed orally at high dose levels (Miller et al., 1983; NTP, 2000). Based on the rodent studies with estragole, methyleugenol and structurally related alkenylbenzenes like safrole, the hepatocarcinogenicity of alkenylbenzenes is ascribed to their bioactivation by cytochrome P450 enzymes, leading to the formation of the proximate carcinogen, the 1′-hydroxy metabolite, which is further bioactivated to the ultimate carcinogen, the 1′-sulfooxy metabolite (Miller et al., 1983; Phillips et al., 1984; Randerath et al., 1984; Smith et al., 2010). The 1′-sulfooxy metabolite is unstable and binds, via a presumed reactive carbocation intermediate, covalently to different endogenous nucleophiles including DNA (Phillips et al., 1981; Boberg et al., 1983; Miller et al., 1983; Phillips et al., 1984; Randerath et al., 1984; Fennell et al., 1985; Wiseman et al., 1987; Smith et al., 2002).
Because of their genotoxicity and carcinogenicity, the addition of estragole and methyleugenol as pure substances to foodstuffs has been prohibited within the European Union since September 2008 (European Commission, 2008). In 2008, the Joint FAO/WHO Expert Committee on Food Additives (JECFA) re-evaluated the safety of alkenylbenzenes and indicated that although evidence of carcinogenicity to rodents given high doses of alkenylbenzenes exists, further research is needed to assess the potential risk to human health at relevant dietary exposure levels (JECFA, 2008).
A significant difficulty in evaluating the toxicological data for alkenylbenzenes is that human exposure to these substances results from exposure to a complex mixture of food, spice, and spice oil constituents which may influence the biochemical fate and toxicological risk of the alkenylbenzenes. In this regard, it was shown that a methanolic extract of basil inhibited the formation of estragole DNA adducts in human HepG2 cells exposed to the proximate carcinogen 1′-hydroxyestragole (Jeurissen et al., 2008). This inhibition occurred at the level of the sulfotransferase (SULT)-mediated bioactivation of 1′-hydroxyestragole into 1′-sulfooxyestragole (Jeurissen et al., 2008).
The objective of this PhD research was to study the inhibitory action of components in alkenylbenzene-containing herbs and spices on SULT-mediated alkenylbenzene DNA adduct formation and the consequences of this combination effect for risk assessment using estragole and methyleugenol as the model alkenylbenzenes. To achieve this objective, an integrated approach of in vitro, in vivo and physiologically based biokinetic (PBBK) models was applied to investigate how the SULT inhibition influences the bioactivation and thus potentially also the toxicity and risk assessment of estragole and methyleugenol.
Chapter 1 of the thesis presents an introduction to the bioactivation, detoxification, genotoxicity and carcinogenicity of the alkenylbenzenes estragole and methyleugenol, as well as a short introduction to PBBK modeling and the state-of-the-art knowledge on risk assessment strategies and regulatory status for alkenylbenzenes.
Chapter 2 of the thesis identifies nevadensin as a basil constituent able to inhibit SULT-mediated DNA adduct formation in rat hepatocytes exposed to the proximate carcinogen 1′-hydroxyestragole and nevadensin. The type of inhibition by nevadensin was shown to be non-competitive, with an inhibition constant (Ki) of 4 nM. Furthermore, nevadensin up to 20 μM did not inhibit 1′-hydroxyestragole detoxification by glucuronidation and oxidation. The inhibition of SULT by nevadensin was incorporated into the PBBK models describing bioactivation and detoxification of estragole in male rat and human. The models thus obtained predict that co-administration of estragole at a level inducing hepatic tumours in vivo (50 mg/kg bw) with nevadensin at a molar ratio to estragole representing the molar ratio of their occurrence in basil results in more than 83% inhibition of the formation of the carcinogenic metabolite, 1′-sulfooxyestragole, in the liver of male rat and human, even at 1% uptake of nevadensin.
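The non-competitive inhibition reported here follows the standard rate law v = Vmax·[S] / ((Km + [S])·(1 + [I]/Ki)). A minimal sketch, taking only the Ki of 4 nM from the text (Vmax and Km are placeholder values, not from the thesis):

```python
# Sketch of non-competitive inhibition, as reported for nevadensin on SULT.
# Only Ki = 4 nM is from the text; Vmax and Km are placeholder values.

def sult_rate(s, i, v_max=1.0, km=10.0, ki=0.004):
    """Non-competitive inhibition: v = Vmax*S / ((Km + S) * (1 + I/Ki)).
    Concentrations in uM (Ki = 4 nM = 0.004 uM); rate in arbitrary units."""
    return v_max * s / ((km + s) * (1.0 + i / ki))

v_free = sult_rate(s=5.0, i=0.0)     # no inhibitor
v_inh = sult_rate(s=5.0, i=0.004)    # inhibitor present at [I] = Ki
print(v_inh / v_free)  # 0.5: at [I] = Ki the rate is halved at any [S]
```

The hallmark of non-competitive inhibition, visible in the formula, is that the apparent Vmax drops while Km is unchanged, so the fractional inhibition is independent of substrate concentration.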
To extend the work to other alkenylbenzene-containing herbs and spices than basil, chapter 3 presents data showing that methanolic extracts from different alkenylbenzene-containing herbs and spices, such as nutmeg, mace, anise and others, are able to inhibit SULT enzyme activity. Flavonoids including nevadensin, quercetin, kaempferol, myricetin, luteolin and apigenin were the major constituents responsible for this inhibition of SULT activity, with Ki values in the nano- to sub-micromolar range. Also, the various flavonoids, individually or in mixtures, were able to inhibit estragole DNA adduct formation in human HepG2 cells exposed to the proximate carcinogen 1′-hydroxyestragole, and to shift metabolism in favour of detoxification (e.g. glucuronidation) at the cost of bioactivation (e.g. sulfonation).
In a next step, the kinetics for SULT inhibition were incorporated in PBBK models for estragole in rat and human to predict the effect of co-exposure to estragole and (mixtures of) the different flavonoids on the bioactivation in vivo. The PBBK-model-based predictions indicate that the reduction of estragole bioactivation in rat and human by co-administration of the flavonoids depends on whether the intracellular liver concentrations of the flavonoids can reach their Ki values. We concluded that this is most easily achieved for nevadensin, which has a Ki value in the nanomolar range and, due to its methylation, is more metabolically stable and bioavailable than the other flavonoids.
Chapter 4 of the thesis investigates whether the previous observation that nevadensin is able to inhibit SULT-mediated estragole DNA adduct formation in primary rat hepatocytes could be validated in vivo. Moreover, the previously developed PBBK models to study this inhibition in rat and in human liver were refined by including a sub-model describing nevadensin kinetics. Nevadensin resulted in a significant reduction in the levels of estragole DNA adducts formed in the liver of Sprague–Dawley rats dosed orally with estragole and nevadensin simultaneously, at a ratio reflecting their presence in basil. Moreover, the refined PBBK model predicted the formation of estragole DNA adducts in rat liver within a 2-fold difference of the in vivo data, and suggests more potent inhibition in human than in rat liver due to less efficient metabolism of nevadensin in human liver and intestine.
Also, an updated risk assessment for estragole was presented taking this matrix effect into account. It revealed that the BMDL10 and the resulting MOE for estragole increase substantially when derived from rodent bioassays in which the animals are exposed to estragole in the presence of nevadensin instead of to pure estragole.
To extend the work to other alkenylbenzenes than estragole, chapter 5 of the thesis investigates the potential of nevadensin to inhibit the SULT-mediated bioactivation and subsequent DNA adduct formation of methyleugenol, using human HepG2 cells as an in vitro model. Nevadensin was able to inhibit SULT-mediated DNA adduct formation in HepG2 cells exposed to the proximate carcinogen 1′-hydroxymethyleugenol in the presence of nevadensin. To investigate possible in vivo implications of SULT inhibition by nevadensin on methyleugenol bioactivation, the rat PBBK model developed in our previous work to describe the dose-dependent bioactivation and detoxification of methyleugenol in male rat was combined with the recently developed PBBK model describing the dose-dependent kinetics of nevadensin in male rat. Similar to what was presented for estragole in chapter 4, chapter 5 presents an updated risk assessment for methyleugenol taking the matrix effect into account. This revealed that the BMDL10 and the resulting MOE for methyleugenol increase substantially when they would be derived from rodent bioassays in which the animals would be exposed to methyleugenol in the presence of nevadensin instead of to pure methyleugenol.
In a next step, we aimed to move towards endpoints closer to the initiation of carcinogenesis than DNA adduct formation, namely the formation of hepatocellular altered foci (HAF). Chapter 6 presents data showing that the potent in vivo inhibitory activity of nevadensin on SULT enzyme activity and on alkenylbenzene DNA adduct formation is accompanied by a potent in vivo reduction in early markers of carcinogenesis such as HAF. This also suggests that a reduced incidence of hepatocarcinogenicity is to be expected in the liver of rodents when alkenylbenzenes are dosed simultaneously with nevadensin.
Chapter 7 presents a discussion of the in vitro and in vivo activity of dietary SULT inhibitors and their potential in reducing the cancer risk associated with alkenylbenzene consumption. This chapter also presents some future perspectives based on the major issues raised by our research.
Altogether, the results of the present thesis indicate that the likelihood of bioactivation and subsequent adverse effects may be lower when alkenylbenzenes are consumed in a matrix containing SULT inhibitors such as nevadensin, compared to experiments using pure alkenylbenzenes as single compounds. Also, the consequences of the in vivo matrix effect were shown to be significant when estragole or methyleugenol was tested in rodent bioassays in the presence of nevadensin at ratios detected in basil, thereby likely increasing the BMDL10 and resulting MOE values substantially in a subsequent risk assessment. However, the results also indicate that matrix effects may be smaller at the daily human dietary exposure levels of estragole or methyleugenol and nevadensin resulting from basil consumption. Also, matrix effects seem to be limited in the presence of other SULT-inhibiting dietary flavonoids, even at high exposure levels of these flavonoids coming from supplements. This indicates that assessing the importance of a matrix effect for the risk assessment of individual compounds requires analysis of dose-dependent effects on the interactions detected, an objective that can be achieved by using PBBK modeling.
Overall, the present study provides an example of an approach that can be used to characterise dose-, species-, and inter-individual differences as well as matrix effects in the risk assessment of food-borne toxicants (e.g. alkenylbenzenes). In this approach, the most important toxicokinetic interactions are addressed using an integrated strategy of in vitro, in vivo and PBBK modeling approaches.
Generation of in vitro data to model dose dependent in vivo DNA binding of genotoxic carcinogens and its consequences: the case of estragole
Paini, A. - \ 2012
Wageningen University. Promotor(en): Ivonne Rietjens; Peter van Bladeren, co-promotor(en): G. Scholz. - S.l. : s.n. - ISBN 9789461732224 - 168
carcinogenen - genotoxiciteit - dna-bindende motieven - in vitro - carcinogens - genotoxicity - dna binding motifs - in vitro
Our food contains several compounds which, when tested in isolated form at high doses in animal experiments, have been shown to be genotoxic and carcinogenic. At the present state-of-the-art there is no scientific consensus on how to perform the risk assessment of these compounds when present at low levels in a complex food matrix. In order to refine the evaluation of the risks associated with these food-borne genotoxic carcinogens, information on their mode of action (MOA) at low versus high doses, on species differences in toxicokinetics and toxicodynamics, including dose- and species-dependent occurrence of DNA damage and repair, and on effects on expression of relevant enzymes, is required. For modern toxicology it is important to better understand the MOA of genotoxic carcinogens to which humans are exposed daily via the diet at low doses. Genotoxic compounds can be direct-acting, when the compound is reactive itself and binds to the molecular target, or may need to be bioactivated before interacting with the molecular target. Bioactivation of a compound usually proceeds via phase I and/or phase II enzymatic pathways. When a genotoxic compound binds covalently to DNA, it can form adducts with the four bases of the double helix. For all of these compounds the biological effect is a sum of both kinetic (absorption, distribution, metabolism and excretion) and dynamic (ultimate reaction with the molecular target) mechanisms. The model genotoxic carcinogenic compound studied in the present thesis is estragole. Estragole is an alkenylbenzene, found in herbs and spices, to which humans are exposed at low doses via the diet. Once absorbed, estragole can undergo detoxification and bioactivation through phase I and II enzymatic pathways, resulting in the compound being either excreted or converted to a reactive carbocation which binds covalently to DNA.
Estragole is known to produce tumours in rodents exposed to high dose levels (Miller et al., 1983) and has been characterised as genotoxic and carcinogenic.
The aim of the present thesis was to develop new strategies for low-dose cancer risk assessment of estragole by extending the PBBK models previously defined for estragole to so-called physiologically based biodynamic (PBBD) models for DNA adduct formation, taking the approach one step closer to the ultimate endpoint of tumour formation. Such models will facilitate risk assessment because they allow extrapolation from high to low dose levels, between species (including humans), and between individuals. Furthermore, building PBBD models that predict in vivo DNA adduct formation from in vitro parameters alone contributes to the 3Rs (Replacement, Reduction and Refinement) of animal testing.
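The reason high-to-low-dose extrapolation needs such models can be illustrated with a generic two-pathway sketch (an illustration only, not the thesis's actual model or parameter values): when a low-Km detoxification pathway competes with a high-Km bioactivation pathway, the bioactivated fraction rises with dose:

```python
# Generic sketch of dose-dependent bioactivation (placeholder Vmax/Km values,
# not the parameters of the estragole PBBK/PBBD models described in the text).

def mm(v_max, km, c):
    """Michaelis-Menten rate at substrate concentration c."""
    return v_max * c / (km + c)

def fraction_bioactivated(c, vb=1.0, kb=50.0, vd=10.0, kd=5.0):
    """Fraction of total flux going to bioactivation when a low-Km
    detoxification pathway competes with a high-Km bioactivation pathway."""
    b, d = mm(vb, kb, c), mm(vd, kd, c)
    return b / (b + d)

# At low dose the low-Km detoxification dominates; at high dose it saturates
# and a larger fraction is bioactivated, so linear extrapolation of adduct
# formation from high-dose bioassays can misestimate low-dose risk.
print(fraction_bioactivated(0.1))    # small fraction bioactivated
print(fraction_bioactivated(500.0))  # larger fraction bioactivated
```

This non-linearity is exactly what a PBBD model captures and what a straight-line extrapolation from high-dose rodent data misses.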
Brushes and proteins
Bosker, W.T.E. - \ 2011
Wageningen University. Promotor(en): Martien Cohen Stuart, co-promotor(en): Willem Norde. - [S.l.] : S.n. - ISBN 9789085859178 - 142
biofilms - eiwitten - adsorptie - aangroeiwerende middelen - fabricage - biomaterialen - biofilms - proteins - adsorption - antifouling agents - manufacture - biomaterials
Protein adsorption at solid surfaces can be prevented by applying a polymer brush at the surface. A polymer brush consists of polymer chains end-grafted to the surface at such a grafting density that the polymer chains stretch out into the solution. This is schematically shown in figure 1.
The main parameters determining the protein resistance of a brush are the grafting density (σ), the chain length (N) and the solvent quality. The thickness of the brush is a function of these parameters: H ~ N·σ^(1/3).
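The scaling relation above can be illustrated numerically (prefactor set to one, arbitrary units; real brushes carry a monomer-size prefactor):

```python
# Numerical illustration of the brush-height scaling H ~ N * sigma**(1/3)
# given in the text (prefactor set to one, arbitrary units).

def brush_height(n, sigma):
    """Brush thickness for chain length n and grafting density sigma."""
    return n * sigma ** (1.0 / 3.0)

h0 = brush_height(100, 0.1)
print(brush_height(200, 0.1) / h0)  # 2.0  -> H is linear in chain length
print(brush_height(100, 0.2) / h0)  # ~1.26 -> 2**(1/3), weak density effect
```

The cube-root dependence means chain length is a far stronger lever on brush thickness than grafting density.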
This research is related to biofouling: 'the undesirable accumulation of proteins and cells at a surface', which starts by adsorption of proteins at the surface. Prevention of biofouling is of vital interest in medicine, where bacterial adhesion may cause severe infections on biomaterials used for implants. Treatment with antibiotics has hardly any effect. The only promising remedy against infections in this case is the prevention of a bacterial film. Because protein adsorption is the first step in this process, the research in this thesis is focused on prevention of protein adsorption by polymer brushes.
Numerous studies over the past decades revealed that neutral polymer brushes, especially those of poly(ethylene oxide) (PEO), can minimize protein adsorption. Mindful of the parameters determining the adsorbed amount mentioned above, the following three mechanisms can be identified, displayed in figure 2. Primary adsorption occurs when the diameter of the protein is (much) smaller than the distance between the polymer chains. In the case of secondary adsorption, the protein is (much) bigger than the distance between the polymer chains. Ternary adsorption results from an attraction between the proteins and the polymer chains in the brush and was first discovered by Currie et al. in 1999. For a considerable time researchers assumed repulsion between the proteins and the polymer chains, thereby neglecting possible ternary adsorption. However, there is increasing evidence that this attraction occurs, especially with PEO brushes. This is highlighted in this research by adsorption studies on bimodal PEO brushes, consisting of a dense PEO brush of short chains combined with a PEO brush of long chains at varying grafting density.
Figure 2. Primary, secondary and ternary adsorption.
The main objective of this research was to investigate whether polysaccharide brushes, in particular dextran brushes, could be prepared at a solid surface, and to study their protein repellency. It was suggested that brushes from these natural polymers would be better suited for preparing nonfouling surfaces. Dextran brushes were prepared using Langmuir-Blodgett (LB) deposition and PS-dextran diblock copolymers, illustrated in figure 1. With the LB method it is possible to control both σ and N. The synthesis of the PS-dextran diblock copolymers is described in the thesis, as well as their interfacial behavior. Quasi-2D aggregation occurred at the air-water interface during preparation (compression of the PS-dextran monolayer, see figure 1), resulting in inhomogeneous dextran layers at low grafting density. At higher grafting densities these aggregates were pushed together to form a homogeneous dextran brush, as illustrated by AFM images. This transition from inhomogeneous to homogeneous layers results in non-continuous adsorption behavior at dextran brushes, in contrast to PEO brushes, as demonstrated in figure 3.
In the case of dextran brushes the adsorption of BSA is constant up to a specific σ, followed by a drastic decrease, while PEO brushes show a gradual reduction. Figure 3 also demonstrates that, at high σ, dextran brushes are as efficient as PEO brushes in preventing protein adsorption. This is the main conclusion of this research. It is expected that at even higher σ dextran brushes will completely suppress protein adsorption.
Collagen-inspired self-assembling materials
Skrzeszewska, P.J. - \ 2011
Wageningen University. Promotor(en): Martien Cohen Stuart, co-promotor(en): Jasper van der Gucht; Frits de Wolf. - [S.l.] : S.n. - ISBN 9789085858690 - 151
zelf-assemblage - polypeptiden - biodegradatie - pichia pastoris - genetische modificatie - genetisch gemanipuleerde micro-organismen - aminozuursequenties - biomedische techniek - self assembly - polypeptides - biodegradation - pichia pastoris - genetic engineering - genetically engineered microorganisms - amino acid sequences - biomedical engineering
The rapid increase in quality of life, together with the progress of medical science, calls for the development of new, tuneable and controllable materials. For the same reason, materials used for biomedical applications have to be increasingly biocompatible, biodegradable and biofunctional. Most of the available systems, however, lack one property or the other. For example, conventional animal-derived gelatin, often used in biomedicine, is susceptible to a risk of contamination with prions or viruses and may provoke allergic reactions, particularly against the non-helix-forming domains of collagen. Furthermore, gelatin is composed of a variety of molecules and structures with different thermal stabilities and molecular sizes. This, in combination with the impossibility of changing the molecular structure at will, limits the chances of elucidating the relation between structure and function. On the other hand, synthetic materials, which have a rather well-controlled size distribution, often lack biocompatibility, biofunctionality or biodegradability. In addition, as their synthesis often requires toxic solvents, their application in the human body is restricted. All these drawbacks of the presently used materials have led scientists to a new approach to designing materials, viz. genetic engineering. Rapid progress in recombinant techniques has led to new ways of producing molecules with well-defined composition and structure and with full control over the length and sequence of the biopolymer and its constituent blocks. These methods thus combine the advantages of natural and synthetic polymers. Using molecular biology tools, unique molecules can be created by merging, in a desired manner, naturally occurring self-assembling motifs such as elastin, silk or collagen [2-4], or entirely artificial fragments.
As we show in this thesis, the precise control over the molecular design of these biotechnologically produced block polypeptides is extremely valuable as it also leads to control over their physicochemical properties.
In this thesis we present a new class of monodisperse, biodegradable and biocompatible network-forming block polymers that are produced by a genetically modified strain of the yeast Pichia pastoris (Chapter 2). Trimer-forming end blocks, abbreviated as T, consisting of nine Pro-Gly-Pro amino acid triplets, symmetrically flank a random coil-like middle block composed of four or eight repeats of the highly hydrophilic R or P sequences (Figure 8.1). R and P are identical with respect to length (99 amino acids) and composition but have different amino acid sequences. The P block has a glycine in every third position (as in collagen) but does not form any supramolecular structures and maintains a random coil-like conformation at any temperature. The R block is a shuffled version of the P block. Four recombinant gelatins are reported in this thesis, denoted as TR4T, TR8T, TP4T and TP8T (Figure 8.1). All of these were successfully produced with high yields (1-3 g/l of fermentation broth) by the Pichia pastoris GS115 strain transformed with a pPIC9 vector carrying the gene of interest in its expression cassette.
Figure 8.1 Schematic representation of collagen-inspired telechelic polypeptides: TR4T, TR8T, TP4T and TP8T.
In Chapter 3 we describe the linear rheological properties of hydrogels formed by the TR4T polypeptide. At a temperature of 50 °C, the solution does not show any viscoelastic response. However, upon cooling, the collagen-like trimer-forming domains (T) start to assemble into triple-helical nodes and a well-defined network, with a node multiplicity of three, is formed. At the beginning of the gelation process viscous properties predominate, but as the network formation progresses, the elastic properties prevail. A plateau storage modulus is reached within a few hours. At this point the triple helices are in equilibrium with the free T blocks. An equilibrium or near-equilibrium state is reached, contrary to natural gelatin, because the collagen-like (T) assembling domains are relatively short and well-defined. The T blocks are solely responsible for the network formation: we have shown that a solution of the middle blocks only (i.e. R4) does not show any elastic response at any time or temperature. In addition, differential scanning calorimetry (DSC) (Chapter 5) proved that the collagen-like side blocks are near-quantitatively responsible for trimerization, as the observed melting enthalpies are in good agreement with values obtained by Frank et al. for free (Pro-Gly-Pro)10 peptides. The equilibrium fraction of T blocks involved in triple helices shifts with temperature: by lowering the temperature, the fraction of triple helices increases, while the fraction of free ends decreases. There are two ways to form a triple helix: either by three T blocks from three different chains, or by three T blocks from two different chains, so that two side blocks come from the same polypeptide. As a consequence, the network is composed of dangling ends, elastically active bridges and inactive loops (Figure 8.2).
Because of the precisely known junction multiplicity of three, we could develop an analytical model that links the internal structure of the gel (dangling ends, loops, and bridges) to its physicochemical properties. This model uses a limited set of input parameters that can all be measured independently, and it describes the experimental data quantitatively without further adjustable parameters. Using this model, we could show that the observed strong dependence of the storage modulus, the relaxation time and the viscosity on concentration and temperature is related to changes in the number of loops, active bridges, and dangling ends in the gel matrix.
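To make the link between network topology and elasticity concrete, the following minimal sketch estimates a plateau modulus from the fraction of end blocks in helices and the fraction of helices that are loops. It assumes ideal rubber elasticity (one kT per elastically active junction) rather than the full analytical model of Chapter 3, and all parameter values are hypothetical.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_AV = 6.022e23      # Avogadro's number, 1/mol

def plateau_modulus(c_molar, temp_k, p_helix, f_loop):
    """Rough plateau modulus estimate (Pa) for a telechelic triple-helix gel.

    c_molar : polymer concentration (mol/L)
    p_helix : fraction of T end blocks bound in triple helices
    f_loop  : fraction of helical junctions that are intramolecular loops
    """
    n_chains = c_molar * 1e3 * N_AV          # chains per m^3
    # two end blocks per chain, three end blocks per helical junction
    n_helices = n_chains * 2 * p_helix / 3.0
    n_active = n_helices * (1.0 - f_loop)    # only bridges are elastic
    return n_active * K_B * temp_k

# Same concentration and helix content, different loop fractions:
g_few_loops = plateau_modulus(1.2e-3, 293, 0.9, 0.1)   # ~1.6 kPa
g_many_loops = plateau_modulus(1.2e-3, 293, 0.9, 0.4)  # markedly softer
```

With these assumed numbers the estimate lands in the 0.03-5 kPa range reported for the gels, and it reproduces the qualitative trend of the model: more loops at fixed helix content means a lower storage modulus.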
Figure 8.2 Network formation by collagen-inspired telechelic biopolymers.
In Chapter 4 we show that the number of intermolecular junctions and intramolecular loops depends not only on protein concentration and temperature but also on the length and the stiffness of the middle block. We synthesised new triblock copolymers with middle blocks of different lengths and amino acid sequences, named TP4T, TR8T and TP8T (Figure 8.1). For all new proteins, there is a strong dependence of the storage modulus, the relaxation time and the viscosity on concentration and temperature (as for TR4T). However, at comparable molar concentrations the longer polypeptides, i.e. TR8T and TP8T, show a significantly higher storage modulus and relaxation time than their counterparts TR4T and TP4T. This is because a longer middle block leads to a larger radius of gyration (Rg), which decreases the probability that two end blocks from the same molecule associate with each other and form a loop. The consequence of fewer loops in the system is a higher storage modulus and a higher overall relaxation time.
Besides the effect of polymer length, we also observed that the R series, i.e. TR4T and TR8T, show a higher storage modulus than their P counterparts, i.e. TP4T and TP8T, at the same concentration and temperature. This can be explained by differences in coil flexibility. Although the P and R blocks have exactly the same amino acid composition, their amino acid sequences differ. Fitzkee et al. have shown that even a polypeptide chain that assumes a random coil conformation still has locally folded conformations that contribute to the overall flexibility of the chain. This apparently leads to a smaller radius of gyration for the P middle block than for the R middle block and thus to a higher probability of loop formation.
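The argument about coil size and loop formation can be illustrated with ideal Gaussian-chain statistics (an idealization for illustration only, not the thesis's model): the probability that the two ends of a random coil meet scales with the mean-square end-to-end distance as P(0) ∝ ⟨R²⟩^(-3/2), and ⟨R²⟩ grows linearly with chain length N.

```python
def relative_loop_probability(n_short, n_long):
    """Ratio of end-to-end contact probabilities for two middle-block
    lengths, assuming ideal Gaussian statistics: P(0) ~ N**-1.5."""
    return (n_long / n_short) ** 1.5

# e.g. doubling the middle block from 99 to 198 residues (R4 vs R8)
ratio = relative_loop_probability(99, 198)   # = 2**1.5, about 2.8
```

Under this idealization, doubling the middle block lowers the loop-closure probability roughly 2.8-fold, which is the right direction (and order of magnitude) for the observed shift from loops to bridges in the longer polypeptides.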
Even though the melting behaviour obtained with DSC is the same for all four polypeptides (as the end blocks stay the same), the temperature at which the storage modulus approaches zero and the gel completely loses its elastic properties varies with the length of the middle block. Shorter molecules, i.e. TP4T and TR4T, melt at lower temperatures. A solution of 1.2 mM TP4T melts at 298 K, while TP8T at a comparable molar concentration melts at a temperature that is 15 degrees higher. Furthermore, the R versions show slightly higher melting temperatures than the P versions. These differences in melting behaviour are related to the gel structure and the relative probabilities of forming intramolecular and intermolecular assemblies. We could account for these findings with the help of the analytical model presented in Chapter 3. The only parameter that had to be varied in the model was the coil size of the polymer, since the enthalpy and the melting temperatures of the triple helices did not change with the length of the middle block. The theoretical calculations clearly show that the molecules with smaller Rg form up to 30% more loops than their bigger counterparts. Loops act as gel stoppers: they do not contribute to the network elasticity and significantly lower the melting temperatures detected with rheology.
The network junctions in our gels are formed solely by triple helices. The mechanism of junction formation by the T blocks can be well described by a two-step kinetic model (Chapter 5). Prior to triple helix propagation, a trimeric nucleus has to be formed. For dilute systems, nucleation is the limiting step, giving an apparent reaction order of three. These results indicate that only triple helices are stable. For more concentrated solutions, when nucleation is relatively fast, propagation of triple helices becomes rate-limiting and the apparent reaction order is close to unity. The propagation of triple helices is probably limited by cis-trans isomerization of the peptide bonds in which proline residues are involved.
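The crossover in apparent reaction order can be illustrated with an assumed rate law (a toy expression, not the fitted kinetic model of Chapter 5) that is third order in concentration when nucleation limits helix formation and first order when propagation does:

```python
import math

def helix_rate(c, k=1.0, c_cross=1.0):
    """Toy crossover rate law: ~k*c**3 for c << c_cross (nucleation-limited),
    ~k*c_cross**2*c for c >> c_cross (propagation-limited)."""
    return k * c**3 / (1.0 + (c / c_cross)**2)

def apparent_order(c, dc=1e-6):
    """Apparent reaction order d ln(rate) / d ln(c), evaluated numerically."""
    r1, r2 = helix_rate(c * (1 - dc)), helix_rate(c * (1 + dc))
    return (math.log(r2) - math.log(r1)) / (math.log(1 + dc) - math.log(1 - dc))

order_dilute = apparent_order(1e-3)   # close to 3: nucleation-limited
order_conc = apparent_order(1e3)      # close to 1: propagation-limited
```

The logarithmic slope moves smoothly from three to one as the concentration crosses the (here arbitrary) crossover scale, mirroring the shift from nucleation-limited to propagation-limited kinetics described above.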
Above the overlap concentration (C*), the measured enthalpy for stable gels (~15 hours) indicates that almost 100% of the T blocks are involved in triple helices. The values we obtained are in good agreement with those obtained by Frank et al. for single (Pro-Gly-Pro)10 peptides. Conversely, at concentrations below C*, the enthalpy per mole of protein decreases, suggesting that the fraction of free ends or mismatched helices becomes more pronounced. The apparent melting temperature increases slightly with increasing concentration. This can be explained on the basis of the reaction stoichiometry under equilibrium conditions [8, 9]. Except for the highest measured concentration (2.4 mM), the apparent melting temperature depended on the scan rate, indicating that it was not possible to maintain equilibrium during the heating step. At a concentration of 2.4 mM there is no scan rate dependence, since the melting occurs at a higher temperature, where the dissociation kinetics is faster [4, 10].
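The stoichiometric argument for the concentration dependence can be sketched with a van 't Hoff relation. For a trimolecular association 3T ⇌ helix, the equilibrium constant at a fixed helix fraction scales as c⁻², which gives 1/Tm = 1/Tm_ref − (2R/ΔH)·ln(c/c_ref), with ΔH the dissociation enthalpy per mole of helix. The parameter values below are assumptions for illustration, not the measured DSC values.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def melting_temp(c, c_ref=1.2e-3, tm_ref=308.0, d_h=200e3):
    """Apparent melting temperature (K) vs concentration (mol/L) for a
    trimolecular equilibrium, with assumed reference values."""
    return 1.0 / (1.0 / tm_ref - (2.0 * R / d_h) * math.log(c / c_ref))

tm_low = melting_temp(0.6e-3)    # halving c lowers Tm by a few kelvin
tm_high = melting_temp(2.4e-3)   # doubling c raises Tm by a few kelvin
```

The logarithmic form explains why the shift per doubling of concentration is small (a few kelvin for a large ΔH), consistent with the "slight" increase reported above.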
The kinetics of triple helix formation determines the rate of gel formation. Gelation starts when the first triple helical node is formed. At that time viscous properties (loss modulus) predominate, but as the network formation evolves the elastic response (storage modulus) becomes more pronounced. The storage modulus (G′) reaches a plateau value within a few hours. Changes in the network structure and mechanical properties of the gel over time can be predicted from the kinetics of triple helix formation, using the model presented in Chapter 3. By comparing the kinetics obtained with rheology and with DSC we could see that for our system the helix content is not simply proportional to the network progress, and that the relation between the elastic properties (G′) and the helix fraction depends on the protein concentration. The reason for this concentration dependence is the formation of loops, which is more likely at low concentrations.
The investigated hydrogels undergo time-dependent macroscopic fracturing when a constant shear rate or shear stress is applied (start-up and creep experiments, respectively) (Chapter 6). Observations with particle image velocimetry (PIV) showed that at the beginning of a start-up (or creep) experiment the sample flows homogeneously. After some time, the gel fractures and separates into two regions: the inner region moves at the same velocity as the moving bob, while the outer region does not move at all. From the rate dependence of the fracture strength we can conclude that gel fracture is due to stress-activated rupture of the triple helical nodes in the network. When the deformation is removed, the gel can heal (Chapter 6). The capacity for self-healing is due to the transient character of the network nodes, which have a finite relaxation time. Such behaviour, impossible for most permanent gels, is highly desired in many applications, as hydrogels are often subjected to deformations that easily go beyond the linear regime. As we show in Figure 8.3, TR4T gels cut into small pieces (grey and transparent) can heal within 2 hours. As measured with rheology, the broken gel can recover up to 100% of its initial elastic properties, even after several fracturing cycles. Interestingly, the kinetics of healing differs from the kinetics of fresh gel formation (Chapter 5). The latter is characterized by a lag phase before elastic properties start to appear. This lag phase occurs because at low degrees of crosslinking there is not yet a percolated network, so that the storage modulus is undetectable. By contrast, the recovery of the gel after rupturing is much faster and does not show a lag phase. The elastic modulus, depending on the rupturing history, returns to its initial value within 1-5 hours.
These findings indicate that outside the fracture zone, the network nodes have not dissociated significantly, so that healing only requires the reformation of junctions that connect the undamaged pieces of the network (gel clusters).
Figure 8.3 Self-healing of TR4T hydrogels. (A) Pieces of broken gel. (B) Two gel pieces healed after 2 hours.
In Chapter 7 we demonstrated shape-memory effects in hydrogels formed by permanently crosslinked TR4T molecules. The programmed shape of these hydrogels was achieved by chemical crosslinking of lysine residues present in the random coil. The chemical network could be stretched up to 200% and "pinned" in a temporary shape by lowering the temperature and allowing the collagen-like end blocks to assemble into physical nodes. The deformed shape of the hydrogel can be maintained at room temperature for several days, or relaxed within a few minutes upon heating to 50 °C or higher. The presented hydrogels returned to their programmed shape even after several thermo-mechanical cycles, indicating that they remember the programmed shape. We studied the shape-recovery process in more detail by describing our hydrogels with a mechanical model composed of two springs and a dashpot. With the help of this model we showed that above the melting temperature of the triple helices the recovery is exponential, and that the decay time is roughly ten times longer than the relaxation time of the physical network.
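The exponential recovery predicted by the two-spring/one-dashpot picture can be sketched as follows. In this picture, above the helix melting temperature the physical network relaxes and the chemically crosslinked spring pulls the gel back, so the residual strain decays exponentially with a recovery time about ten times the network relaxation time. The parameter values here are assumptions, not fitted data.

```python
import math

def residual_strain(t, strain_0=2.0, tau_network=60.0, factor=10.0):
    """Residual strain at time t (s) during shape recovery, assuming
    exponential decay with tau_recovery = factor * tau_network."""
    tau_recovery = factor * tau_network
    return strain_0 * math.exp(-t / tau_recovery)

initial = residual_strain(0.0)                    # the pinned 200% strain
half_gone = residual_strain(600.0 * math.log(2))  # one recovery half-life
```

With a 60 s network relaxation time, the recovery time constant would be 600 s, so appreciable strain persists for many minutes at high temperature, while below the melting temperature (where the physical nodes are long-lived) the temporary shape holds for days.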
8.2 Biomedical applications - perspectives and considerations
The class of collagen-inspired self-assembling materials that we present in this thesis provides attractive model systems for the systematic study of physical networks, but these materials also have great potential for biomedical applications. In this section we discuss the possibilities for these self-assembling hydrogels in biomedicine.
Drug delivery systems
One of the major goals of modern medicine is to ensure that the required amount of an active substance is available at the desired time at the desired location in the body. Consequently, much effort is put into designing delivery systems with precisely adapted release profiles, sensitive to external stimuli such as temperature or pH. A frequently used group of materials in this field are hydrogels, both chemically and physically crosslinked. In the case of covalently crosslinked networks, the drug is mostly released by diffusion out of the gel particle after it swells; the rate of drug release is governed by the resistance of the network to volume increase. Although permanent networks are widely used as drug carriers, they have some disadvantages, such as incomplete release of active substances and poor biodegradability in the body. The problem can be partially solved by introducing enzymatic cleavage or hydrolysis sites into the main chain, but even then the hydrogel erosion cannot be precisely controlled and complete material degradation cannot be guaranteed.
These obstacles can be overcome by using physical hydrogels that are held together by weak interactions. These can dissociate in a controlled manner and completely release the active component. In contrast to chemical gels, erosion of physical gels occurs spontaneously. The erosion rate is determined by the lifetime of the junctions, but it also depends on the relative amounts of intramolecular loops and intermolecular junctions, as demonstrated by Shen et al. These authors showed that, by using triblock polymers with dissimilar coiled-coil side domains rather than identical ones, loop formation could be suppressed, leading to a lower erosion rate.
The potential of our gels for drug delivery applications was tested by Teles et al. It was shown that an entrapped protein (BSA) can be completely released from TR4T and TR8T gels, both at 37 °C and at 20 °C. The release at 37 °C from 20% gels was completed within 48 hours, while at 20 °C it took about 5 times longer. At body temperature the release was mostly driven by dissociation of the trimeric junctions and dissolution of the separate polymer chains (gel erosion). At 20 °C the junction lifetime was long, so that erosion was slower and swelling and diffusion played a more important role. The observations of Teles are in agreement with studies of several groups that demonstrated the importance of hydrogel erosion for controlled release [12, 14].
The erosion rate of physical hydrogels is governed by the junction relaxation time. The mean relaxation time of transient networks can be manipulated either by varying the gel architecture (Chapters 3 and 4) or by changing the relaxation time of a single triple helix. The gel architecture (i.e. the number of loops and bridges) can be altered by changing the protein concentration, as we show in Chapter 3, or by manipulating the design of the middle block, as we demonstrate in Chapter 4. The number of loops becomes lower as the spacer length and stiffness increase (Chapter 4). The lifetime of a single node can be changed by enzymatic hydroxylation of proline to hydroxyproline, which leads to more hydrogen bonds among adjacent T blocks, or by changing the length of the collagen-like T domains. Preliminary results showed that the average relaxation time of the network is roughly a hundred times higher for molecules with collagen-like domains composed of sixteen Pro-Gly-Pro repeats instead of nine (unpublished data).
For these biotechnologically produced collagen-inspired polymers, the length or the composition of the blocks can be changed simply by changing the DNA template. This, in combination with the model elaborated in Chapters 3 and 4, which links the internal gel architecture to the physicochemical gel behaviour, gives ample possibilities to design materials with custom release profiles of active components.
Scaffolds for tissue engineering
Materials for tissue engineering scaffolds have to mimic the in vivo extracellular matrix environment. They provide physical support, but also have to guarantee proper adhesion of cells and controlled release of growth factors. The mechanical properties of the matrix play a very important role in scaffold design [16-18]. As shown by Engler et al., the elasticity of the matrix directs stem cell development to different lineages: soft networks (0.1-1 kPa), which mimic brain tissue, promote neuron development; stiffer scaffolds (8-17 kPa) are myogenic; and gels with an elastic modulus of 24-40 kPa promote growth of bone cells. The stiffness of the matrix affects focal adhesions and the organization of the cytoskeleton, and thus contractility, motility and spreading [16, 18]. Another significant factor in tissue growth is the degradation rate of the scaffold. The degradation should be synchronized with cellular repair in such a way that tissue replaces the material within the desired time interval. The scaffold disintegration also controls the release of growth factors. For naturally derived materials such as alginate, the degradation rate can be influenced by partial oxidation of the polymer chain or via a bimodal molecular weight distribution. For synthetic polymers, different degradation profiles can be realized by incorporating groups with different susceptibility to hydrolysis into the polymer backbone.
Presently, the most widely used scaffolds for tissue engineering are natural polymers such as collagen, gelatin, and polysaccharides, or synthetic, biodegradable polymers such as poly(L-lactic acid) (PLLA), poly(glycolic acid) (PGA), and poly(ethylene glycol) (PEG) [21-23]. Although these materials show promising properties, their use is limited as they suffer from batch-to-batch variations, polydispersity, viral contamination, allergic reactions or toxic byproducts after degradation. Moreover, their mechanical properties are poorly controlled and it is difficult to relate the molecular structure to the resulting properties. Furthermore, in the case of synthetic polymers, there is no intrinsic mechanism to interact with cells and to promote cell adhesion, proliferation or migration. This problem can be partially solved by functionalizing synthetic materials with bioactive molecules, such as collagen or short peptides (for example arginine-glycine-aspartic acid (RGD) or tyrosine-isoleucine-glycine-serine-arginine (YIGSR)). It remains difficult, however, to precisely control the spatial distribution of these biofunctional domains.
A very promising alternative to the currently used scaffolds are hydrogels formed by self-assembling protein polymers [2, 26-28], including the collagen-inspired polypeptides presented in this thesis. Our block polymers form physical gels with precisely controlled elastic properties. As discussed in Chapters 3 and 4, the gel structure and the resulting mechanical properties strongly depend on concentration, temperature and the molecular design of the polymer. Within the investigated range of conditions our gels have an elastic modulus between 0.03 and 5 kPa; thus they seem most appropriate for neuron cell growth. Moreover, it is also possible to incorporate specific short adhesive peptide sequences (such as RGD) in the middle block to improve cell attachment and propagation.
The presently investigated proteins, with T domains composed of nine Pro-Gly-Pro repeats, still need some enhancement in terms of stability. As shown by Teles et al., the currently available molecules erode within 2 days. For tissue engineering applications, this is too fast. We therefore propose some strategies to stabilize our hydrogels. A first possibility is to introduce amino acids that can form chemical bonds, such as cysteines, which can form disulfide bridges under oxidizing conditions, or lysines, which can be functionalized with acrylate and then photo-crosslinked with UV radiation [30-32]. However, one has to be aware that this additional procedure may have negative side effects such as toxic byproducts, incomplete polymer degradation in the body, or loss of responsiveness to external stimuli. Alternatively, the erosion can be moderately slowed down by increasing the relaxation time of the network (as discussed in the section on drug delivery systems).
Wound dressing materials
Under normal circumstances wound healing is a very long process. Wound dressing materials are used to speed it up, so that bacterial infections or wound dehydration can be avoided [33-37]. These materials should fulfil several general requirements, such as biocompatibility, ease of application and removal, proper adherence (to avoid fluid pockets in which bacteria could proliferate), easy gas exchange between tissue and environment, and controlled release of active components such as antimicrobial agents or wound repair agents (for example Epidermal Growth Factor (EGF)).
All of the above-mentioned requirements can be fulfilled by the collagen-inspired hydrogels presented in this thesis. The advantage of our materials is that they can follow the contour of the wound and fill it entirely, thus forming an efficient barrier against microbes while remaining permeable to water vapour and oxygen. Furthermore, they can entrap active components and release them in a controlled way during the healing process, as discussed above. Depending on the circumstances, the release profile can be synchronised with the wound healing process. An additional advantage of our genetically engineered molecules is that adhesion domains can be introduced along the middle block, ensuring better integration of the gel with the damaged tissue.
8.3 Final conclusions and outlook
In this final chapter we have discussed the potential of our collagen-inspired materials in biomedical applications. They are biocompatible and biodegradable, while offering numerous possibilities to change the molecular design in order to achieve the desired mechanical or biological properties. Furthermore, the well-defined nature of the triple helical junctions allows us to predict the mechanical properties of the gel from the molecular design of the polypeptides. This feature makes our system unique and offers great flexibility in designing custom biomedical materials.
Biomedical needs, however, are very variable and often require an individual approach. That is why in our group we have created a family of genetically engineered block copolypeptides. Besides collagen we use other motifs present in nature, such as silk or elastin. We can combine these motifs in various ways in order to create unique stimuli-responsive (often multi-responsive) molecules that can meet individual application needs.
The silk-like domains consist of (Gly-Ala-Gly-Ala-Gly-Ala-Gly-Xxx)n repeats. Position Xxx is occupied by charged amino acids such as histidine, lysine or glutamic acid. When the charge is screened, the molecules assemble, forming first β-sheet-like secondary structures and then long fibres. As shown by Martens et al., block polymers comprising silk-like domains with glutamic acid or histidine in the Xxx position form fibre-like gels at a pH of 2 or 12, respectively. They also assemble when mixed with oppositely charged (coordination) polymers [3, 38]. The assembly conditions can probably be tuned even more precisely by adjusting the isoelectric point of the assembling domain. This would allow the production of hydrogels that form after being injected into the body and disassemble (releasing the drug) when exposed to acidic or alkaline conditions. The nanofibre gels are also stable enough to serve as scaffolds for tissue engineering [39-41].
Another motif that has been used is elastin. It consists of (Val-Pro-Gly-Xxx-Gly)n repeats and self-assembles above a lower critical solution temperature (LCST). The transition temperature can be tuned by introducing more or less polar amino acid residues in position Xxx. By combining elastin-like or collagen-like blocks with silk-like blocks, thermo- and pH-responsive networks can be obtained. This may allow us to switch from fibre-like gels to associative networks.
Block polypeptides produced using recombinant techniques offer, besides biocompatibility and biodegradability, many possibilities to adjust the molecular design at will, in order to realize the desired mechanical or biological properties. Three-dimensional structures with different thermal stabilities can be programmed by precisely combining various amino acid sequences. The obtained materials can respond to external stimuli such as pH, ionic strength or temperature. They can also carry peptide fragments that enhance cell adhesion and proliferation or induce crystallization.
The new approach in materials science that we present in this thesis opens up a new world of polymers, in which the main constraint is imagination.
[1] European Commission, Updated opinion on the safety with regard to TSE risks of gelatine derived from ruminant bones or hides (2003).
[2] E.R. Wright, V.P. Conticello, Self-assembly of block copolymers derived from elastin-mimetic polypeptide sequences. Advanced Drug Delivery Reviews 54(8) (2002) 1057-1073.
[3] A.A. Martens, J. van der Gucht, G. Eggink, F.A. de Wolf, M.A. Cohen Stuart, Dilute gels with exceptional rigidity from self-assembling silk-collagen-like block copolymers. Soft Matter 5(21) (2009) 4191-4197.
[4] P.J. Skrzeszewska, F.A. de Wolf, M.W.T. Werten, A.P.H.A. Moers, M.A. Cohen Stuart, J. van der Gucht, Physical gels of telechelic triblock copolymers with precisely defined junction multiplicity. Soft Matter 5(10) (2009) 2057-2062.
[5] M.W.T. Werten, H. Teles, A. Moers, E.J.H. Wolbert, J. Sprakel, G. Eggink, F.A. de Wolf, Precision gels from collagen-inspired triblock copolymers. Biomacromolecules 10(5) (2009) 1106-1113.
[6] S. Frank, R.A. Kammerer, D. Mechling, T. Schulthess, R. Landwehr, J. Bann, Y. Guo, A. Lustig, H.P. Bachinger, J. Engel, Stabilization of short collagen-like triple helices by protein engineering. Journal of Molecular Biology 308(5) (2001) 1081-1089.
[7] N.C. Fitzkee, G.D. Rose, Reassessing random-coil statistics in unfolded proteins. Proceedings of the National Academy of Sciences of the United States of America 101(34) (2004) 12497-12502.
[8] J. Engel, H.T. Chen, D.J. Prockop, H. Klump, Triple helix reversible coil conversion of collagen-like polypeptides in aqueous and non-aqueous solvents - comparison of thermodynamic parameters and binding of water to (L-Pro-L-Pro-Gly)n and (L-Pro-L-Hyp-Gly)n. Biopolymers 16(3) (1977) 601-622.
[9] A.V. Persikov, Y.J. Xu, B. Brodsky, Equilibrium thermal transitions of collagen model peptides. Protein Science 13(4) (2004) 893-902.
[10] S. Boudko, S. Frank, R.A. Kammerer, J. Stetefeld, T. Schulthess, R. Landwehr, A. Lustig, H.P. Bachinger, J. Engel, Nucleation and propagation of the collagen triple helix in single-chain and trimerized peptides: transition from third to first order kinetics. Journal of Molecular Biology 317(3) (2002) 459-470.
[11] P. Gupta, K. Vermani, S. Garg, Hydrogels: from controlled release to pH-responsive drug delivery. Drug Discovery Today 7(10) (2002) 569-579.
[12] W. Shen, K.C. Zhang, J.A. Kornfield, D.A. Tirrell, Tuning the erosion rate of artificial protein hydrogels through control of network topology. Nature Materials 5(2) (2006) 153-158.
[13] H. Teles, T. Vermonden, G. Eggink, W.E. Hennink, F.A. de Wolf, Hydrogels of collagen-inspired telechelic triblock copolymers for sustained release of proteins. Journal of Controlled Release 147(2) (2010) 298-303.
[14] K.S. Anseth, A.T. Metters, S.J. Bryant, P.J. Martens, J.H. Elisseeff, C.N. Bowman, In situ forming degradable networks and their application in tissue engineering and drug delivery. Journal of Controlled Release 78(1-3) (2002) 199-209.
[15] R.E. Rhoads, S. Udenfriend, P. Bornstein, In vitro enzymatic hydroxylation of prolyl residues in the alpha1-CB2 fragment of rat collagen. Journal of Biological Chemistry 246(13) (1971) 4135.
[16] D.E. Discher, P. Janmey, Y.L. Wang, Tissue cells feel and respond to the stiffness of their substrate. Science 310(5751) (2005) 1139-1143.
[17] A.J. Engler, S. Sen, H.L. Sweeney, D.E. Discher, Matrix elasticity directs stem cell lineage specification. Cell 126(4) (2006) 677-689.
[18] R.J. Pelham, Y.L. Wang, Cell locomotion and focal adhesions are regulated by substrate flexibility. Proceedings of the National Academy of Sciences of the United States of America 94(25) (1997) 13661-13665.
[19] G. Chan, D.J. Mooney, New materials for tissue engineering: towards greater control over the biological response. Trends in Biotechnology 26(7) (2008) 382-392.
[20] Y.C. Wang, L.B. Wong, H. Mao, Creation of a long-lifespan ciliated epithelial tissue structure using a 3D collagen scaffold. Biomaterials 31(5) (2010) 848-853.
[21] M. Martina, D.W. Hutmacher, Biodegradable polymers applied in tissue engineering research: a review. Polymer International 56(2) (2007) 145-157.
[22] B.S. Kim, D.J. Mooney, Engineering smooth muscle tissue with a predefined structure. Journal of Biomedical Materials Research 41(2) (1998) 322-332.
[23] B.S. Kim, D.J. Mooney, Development of biocompatible synthetic extracellular matrices for tissue engineering. Trends in Biotechnology 16(5) (1998) 224-230.
[24] A.J. Engler, M.A. Griffin, S. Sen, C.G. Bonnemann, H.L. Sweeney, D.E. Discher, Myotubes differentiate optimally on substrates with tissue-like stiffness: pathological implications for soft or stiff microenvironments. Journal of Cell Biology 166(6) (2004) 877-887.
[25] L.Y. Koo, D.J. Irvine, A.M. Mayes, D.A. Lauffenburger, L.G. Griffith, Co-regulation of cell adhesion by nanoscale RGD organization and mechanical stimulus. Journal of Cell Science 115(7) (2002) 1423-1433.
[26] R.E. Sallach, W.X. Cui, F. Balderrama, A.W. Martinez, J. Wen, C.A. Haller, J.V. Taylor, E.R. Wright, R.C. Long, E.L. Chaikof, Long-term biostability of self-assembling protein polymers in the absence of covalent crosslinking. Biomaterials 31(4) (2010) 779-791.
[27] W. Shen, J.A. Kornfield, D.A. Tirrell, Structure and mechanical properties of artificial protein hydrogels assembled through aggregation of leucine zipper peptide domains. Soft Matter 3(1) (2007) 99-107.
[28] J.S. Guo, K.K.G. Leung, H.X. Su, Q.J. Yuan, L. Wang, T.H. Chu, W.M. Zhang, J.K.S. Pu, G.K.P. Ng, W.M. Wong, X. Dai, W.T. Wu, Self-assembling peptide nanofiber scaffold promotes the reconstruction of acutely injured brain. Nanomedicine: Nanotechnology, Biology and Medicine 5(3) (2009) 345-351.
[29] W. Shen, R.G.H. Lammertink, J.K. Sakata, J.A. Kornfield, D.A. Tirrell, Assembly of an artificial protein hydrogel through leucine zipper aggregation and disulfide bond formation. Macromolecules 38(9) (2005) 3909-3916.
[30] S.A. Maskarinec, D.A. Tirrell, Protein engineering approaches to biomaterials design. Current Opinion in Biotechnology 16(4) (2005) 422-426.
[31] N. Sanabria-DeLong, A.J. Crosby, G.N. Tew, Photo-cross-linked PLA-PEO-PLA hydrogels from self-assembled physical networks: mechanical properties and influence of assumed constitutive relationships. Biomacromolecules 9(10) (2008) 2784-2791.
[32] J.A. Benton, C.A. DeForest, V. Vivekanandan, K.S. Anseth, Photocrosslinking of gelatin macromers to synthesize porous hydrogels that promote valvular interstitial cell function. Tissue Engineering Part A 15(11) (2009) 3221-3230.
[33] K.J. Quinn, J.M. Courtney, J.H. Evans, J.D.S. Gaylor, W.H. Reid, Principles of burn dressing. Biomaterials 6(6) (1985) 369-377.
[34] S.B. Lee, Y.H. Kim, M.S. Chong, S.H. Hong, Y.M. Lee, Study of gelatin-containing artificial skin V: fabrication of gelatin scaffolds using a salt-leaching method. Biomaterials 26(14) (2005) 1961-1968.
[35] S.R. Hong, S.J. Lee, J.W. Shim, Y.S. Choi, Y.M. Lee, K.W. Song, M.H. Park, Y.S. Nam, S.I. Lee, Study on gelatin-containing artificial skin IV: a comparative study on the effect of antibiotic and EGF on cell proliferation during epidermal healing. Biomaterials 22(20) (2001) 2777-2783.
[36] B. Balakrishnan, M. Mohanty, P.R. Umashankar, A. Jayakrishnan, Evaluation of an in situ forming hydrogel wound dressing based on oxidized alginate and gelatin. Biomaterials 26(32) (2005) 6335-6342.
[37] A. Schneider, J.A. Garlick, C. Egles, Self-assembling peptide nanofiber scaffolds accelerate wound healing. PLoS ONE 3(1) (2008).
[38] A.A. Martens, G. Portale, M.W.T. Werten, R.J. de Vries, G. Eggink, M.A. Cohen Stuart, F.A. de Wolf, Triblock protein copolymers forming supramolecular nanotapes and pH-responsive gels. Macromolecules 42(4) (2009) 1002-1009.
[39] S.G. Zhang, F. Gelain, X.J. Zhao, Designer self-assembling peptide nanofiber scaffolds for 3D tissue cell cultures. Seminars in Cancer Biology 15(5) (2005) 413-420.
[40] F. Zhang, G.S. Shi, L.F. Ren, F.Q. Hu, S.L. Li, Z.J. Xie, Designer self-assembling peptide scaffold stimulates pre-osteoblast attachment, spreading and proliferation. Journal of Materials Science: Materials in Medicine 20(7) (2009) 1475-1481.
[41] F. Gelain, D. Bottai, A. Vescovi, S.G. Zhang, Designer self-assembling peptide nanofiber scaffolds for adult mouse neural stem cell 3-dimensional cultures. PLoS ONE 1(2) (2006).
Embrapa innovates against "Panama disease" (interview with Miguel Ángel Dita Rodriguez)
Zanatta, M. ; Dita Rodriguez, M.A. - \ 2010
Brazil : Valor Econômico
Researchers at the virtual laboratory of the Brazilian Agricultural Research Corporation (Embrapa) in the Netherlands have created a novel method for diagnosing a new strain of the so-called "Panama disease", a disease that has devastated banana plantations around the world. In partnership with a team from Wageningen University, the Brazilian researchers developed a way to shorten the detection of the fungus dubbed "Tropical 4", which is more aggressive than any previously known organism. The new technology, developed by Embrapa in less than a year, reduces the time needed to diagnose the disease from four months to as little as six hours, which can speed up decisions on isolating infected areas or eradicating orchards. In addition, it will be possible to run tests that detect the fungus's DNA in plant tissues, avoiding the complex process of isolating the fungus in the laboratory. The greater speed should also reduce costs and prevent losses for growers and processing industries. The methodology created by Embrapa, which will have a positive global impact, was published this month in the international scientific journal "Plant Pathology". "This race, which is present in Asia and Oceania, should soon reach Central America and could repeat the history of the 1960s," warns the Cuban-Brazilian researcher Miguel Ángel Dita Rodriguez, who helped develop the new method at Labex Europa. ........ The tests to validate the new method were all conducted in Embrapa's laboratories in Wageningen. Because Brazil is free of the new fungus, genetic manipulation of the disease's pathogen is prohibited on national territory. In the last century, banana plantations were wiped out by other variants of "Panama disease" in various parts of the globe. The disease, which destroys orchards, was discovered in Australia in 1874. The devastating fungus owes its name to the scale of the damage inflicted on Panama's plantations in the first half of the 20th century.
Embrapa inova contra 'mal do Panamá' [Embrapa innovates against 'Panama disease']
Dita Rodriguez, M.A. - \ 2010
Researchers at the virtual laboratory of the Brazilian Agricultural Research Corporation (Embrapa) in the Netherlands have created a novel method for diagnosing a new strain of the so-called "Panama disease", which has been devastating banana plantations around the world. In partnership with a team from Wageningen University, the Brazilian researchers developed a way to shorten detection of the fungus dubbed "Tropical 4", more aggressive than any organism known to date. The new technology, developed by Embrapa in less than a year, cuts the diagnosis time for the disease from four months to as little as six hours, which can speed up decisions on isolating infected areas or eradicating orchards. In addition, it will be possible to run tests that detect the fungus's DNA in plant tissue, dispensing with the complex process of isolating the fungus in the laboratory. The greater speed should also reduce costs and avoid losses for growers and processing industries. The methodology created by Embrapa, which will have a positive global impact, was published this month in the international scientific journal "Plant Pathology". "This race, which is present in Asia and Oceania, is likely to reach Central America soon and could repeat the history of the 1960s," warns the Cuban-Brazilian researcher Miguel Ángel Dita Rodriguez, who helped develop the new method at Labex Europa.
Ferramentas participativas no trabalho com cultivos, variedades e sementes. Um guia para profissionais que trabalham com abordagens participativas no manejo da agrobiodiversidade, no melhoramento de cultivos e no desenvolvimento do setor de sementes [Participatory tools in working with crops, varieties and seeds. A guide for professionals working with participatory approaches in agrobiodiversity management, crop improvement and seed sector development]
Boef, W.S. de; Thijssen, M.H. - \ 2007
Wageningen : Wageningen UR Centre for Development Innovation - ISBN 9789070785178 - 87 p.
Summary: In our training programmes on local agrobiodiversity management, participatory crop improvement and support to local seed supply, participatory tools receive broad attention. Tools are covered in theory, practised in classroom settings, and also applied in field studies. The objectives of practising participatory tools in training on local agrobiodiversity management, as related to the objectives of this guide, are many. The key objective of this guidebook, however, is to provide professionals working in genetic resources management, crop improvement and seed sector development with a diverse set of tools for participatory learning and action, adapted to their specific contexts. Beyond this main objective, we seek to strengthen the creativity and flexibility of these professionals in working with groups oriented toward participatory learning and action, in diagnosis, in the planning and implementation of research, and in the monitoring and evaluation of agrobiodiversity, plant breeding and seed projects. We used the book compiled by Frans Geilfus, which covers 80 tools for participatory development, as an important basis for this tool guide. A selection of tools from Geilfus and other sources was adapted into a series of participatory instruments that can support agrobiodiversity management, crop improvement and seed sector development. The structure derives largely from that book. The examples and the selection of tools were inspired by real experiences during courses on participatory crop improvement, seed sector development and local agrobiodiversity management, as organised by Wageningen International over the past 10 years.
Some other tools were drawn from other sources. The tools have been tested in local projects, for example in Brazil, Colombia, Peru, Ecuador, Ghana, Nigeria, Ethiopia, Nepal, India and Iran. The guide is designed to be easy to use as a reference in the field. The sequence of the tools is similar to that often used in participatory analyses: it starts with general tools, then presents more detailed tools on specific topics, and ends with more analytical tools that can be applied with communities but can also help the facilitating team analyse (after the diagnosis) the information gathered. However, which tools to apply, of what type (map, matrix or other), with whom and in what sequence depends strongly on the context and the objective of the exercise. Please do not treat this as a recipe book, but as a toolkit you can draw on. We see the guide as an inspiration, encouraging you to adapt, combine and thereby design your own tools.
Photochemical generation of highly destabilized vinyl cations: the effects of alpha- and beta-trifluoromethyl versus alpha- and beta-methyl substituents
Alem, K. van; Belder, G. ; Lodder, G. ; Zuilhof, H. - \ 2005
Journal of Organic Chemistry 70 (2005)1. - ISSN 0022-3263 - p. 179 - 190.
alkenyl(aryl)iodonium triflate fragmentations - hydrogen atom transfer - transition-states - iodonium salts - gas-phase - carbocations - solvolysis - halides - ion - rearrangements
The photochemical reactions in methanol of the vinylic halides 1-4, halostyrenes with a methyl or a trifluoromethyl substituent at the α- or β-position, have been investigated quantitatively. Next to E/Z isomerization, the reactions are formation of vinyl radicals, leading to reductive dehalogenation products, and formation of vinyl cations, leading to elimination, nucleophilic substitution, and rearrangement products. The vinyl cations are part of tight ion pairs with halide as the counterion. The elimination products are the result of β-proton loss from the primarily generated α-CH3 and α-CF3 vinyl cations, or from the α-CH3 vinyl cation formed from the β-CH3 vinyl cation via a 1,2-phenyl shift. The β-CF3 vinyl cation reacts with methanol, yielding nucleophilic substitution products; no migration of the phenyl ring producing the α-CF3 vinyl cation occurs. The α-CF3 vinyl cation, which is the most destabilized vinyl cation generated thus far, gives a 1,2-fluorine shift in competition with proton loss. The experimentally derived order of stabilization of the vinyl cations photogenerated in this study, α-CF3 < β-CF3 < β-CH3 < α-CH3, is corroborated by quantum chemical calculations, provided the effect of solvent is taken into account.
Fluorobenzo[a]pyrenes as probes of the mechanism of cytochrome P450-catalyzed oxygen transfer in aromatic oxygenations
Mulder, P.P.J. ; Devanesan, P. ; Alem, K. van; Lodder, G. ; Rogan, E.G. ; Cavalieri, E.L. - \ 2003
Free Radical Biology and Medicine 34 (2003)6. - ISSN 0891-5849 - p. 734 - 745.
one-electron oxidation - rat-liver microsomes - radical cations - horseradish-peroxidase - semiempirical methods - mouse skin - benzo[a]pyrene - dna - identification - metabolism
Fluoro substitution of benzo[a]pyrene (BP) has been very useful in determining the mechanism of cytochrome P450-catalyzed oxygen transfer in the formation of 6-hydroxyBP (6-OHBP) and its resulting BP 1,6-, 3,6-, and 6,12-diones. We report here the metabolism of 1-FBP and 3-FBP, and PM3 calculations of charge densities and bond orders in the neutral molecules and radical cations of BP, 1-FBP, 3-FBP, and 6-FBP, to determine the mechanism of oxygen transfer for the formation of BP metabolites. 1-FBP and 3-FBP were metabolized by rat liver microsomes. The products were analyzed by HPLC and identified by NMR. Formation of BP 1,6-dione and BP 3,6-dione from 1-FBP and 3-FBP, respectively, can only occur by removal of the fluoride ion from C-1 and C-3, respectively, via one-electron oxidation of the substrate. The combined metabolic and theoretical studies reveal the mechanism of oxygen transfer in the P450-catalyzed formation of BP metabolites. Initial abstraction of a π electron from BP by the [Fe4+=O]+• of cytochrome P450 affords BP+•. This is followed by oxygen transfer to the most electropositive carbon atoms, C-6, C-1, and C-3, with formation of 6-OHBP (and its quinones), 1-OHBP, and 3-OHBP, respectively, or to the most electropositive 4,5-, 7,8-, and 9,10-double bonds, with formation of BP 4,5-, 7,8-, or 9,10-oxide.
Effects of growth conditions on external quality of cut chrysanthemum; analysis and simulation
Carvalho, S.M.P. - \ 2003
Wageningen University. Promotor(en): Olaf van Kooten, co-promotor(en): Ep Heuvelink. - [S.l.] : S.n. - ISBN 9789058088215 - 171 p.
chrysanthemum - asteraceae - cut flowers - growth - crop quality - plant density - crop production - simulation models - greenhouse horticulture
For many years, the emphasis in floricultural research lay on quantity rather than quality. Nowadays, since prices are often determined on the basis of visual quality aspects, the so-called external quality, chrysanthemum growers aim to provide high and constant product quality throughout the year. The external quality of cut chrysanthemum is usually evaluated in terms of stem and leaf morphology and flower characteristics. The priority among the external quality attributes depends on the particular market for the product.
Chrysanthemum cultivation is one of the most controlled and intensive crop production systems in horticulture. This quantitative short-day plant can only be cultivated year-round in greenhouses by controlling several growth conditions. However, many combinations of these conditions are possible, depending on the growth strategy employed. Producing high-quality chrysanthemum year-round is a constant challenge for the grower, as seasonal variations in daily light integral produce large seasonal fluctuations in yield and quality. Therefore, in order to choose the optimal strategy for a given planting week, it is necessary to know how the growth conditions influence plant quality. Thus, the factors involved in chrysanthemum external quality need to be carefully analysed and effectively combined to achieve flowers with maximum ornamental value year-round, while maintaining a high yield and an acceptably low energy input. Considering the complexity of cut chrysanthemum production, with its many options for control and its range of product quality attributes, management of such a system can be expected to benefit greatly from the use of simulation models. Explanatory models, for instance, are a valuable tool for integrating knowledge and supporting decisions. The development of such models for product quality is still a weak point in crop modelling research, since priority has been given to simulating productivity. To develop an explanatory model for the external quality of cut chrysanthemum, detailed knowledge about its growth and morphological development is needed.
The main aim of the present study was to quantify and understand the effects of the aboveground growth conditions on the external quality of cut chrysanthemum at harvest. Special attention has been paid to the integration of this knowledge and its incorporation into an explanatory model to predict the main external quality aspects of cut chrysanthemum. The focus was on the effects of the climate conditions (temperature, light intensity and CO₂ concentration) and cultivation practices (duration of the long-day period and plant density) on plant height (stem length), number of flowers and flower size.
Chapter 2 presents an overview of the growth conditions involved in the different chrysanthemum external quality aspects, and identifies the gaps in literature. A synthesis of the available models that have been built to predict some external quality attributes of chrysanthemum is also given.
The DIF concept states that internode length is dependent upon the DIFference between day (DT) and night (NT) temperature, and is independent of the mean 24 h temperature. This controversial proposition was investigated by means of an experiment described in Chapter 3.1. Chrysanthemum 'Reagan Improved' was grown in growth chambers at all 16 combinations of four DT and four NT (16, 20, 24 and 28 °C) with a 12 h daylength. The length of internode 10, the number of internodes and the stem length were measured periodically. The experiment ended when internode 10 had reached its final length in all temperature combinations employed (27 days). A significant positive linear relationship between DIF and the length of the fully developed internodes was observed over the range of temperatures studied (16-28 °C). It was also found that internode lengths recorded in early stages of development do not bear a close relationship to the final internode lengths, which explained contradictions in literature. In addition to being dependent on the developmental stage of the internodes, the effectiveness of DIF was related to the range of temperatures. It was shown that the DIF concept is valid only within a temperature range where the effects of DT and NT are equal in magnitude and opposite in sign (18-24 °C). Therefore, it was concluded that the response of internode length to temperature is strongly related to DIF, but this response is simply the result of independent and opposite effects of DT and NT. Internode appearance rate, as well as stem length formed during the experiment, showed an optimum response to DT.
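The linear DIF response described above can be sketched as a one-line model. Note that the intercept and slope below are hypothetical illustration values, not parameters fitted in this study, and the study found the linear relationship valid only where DT and NT effects cancel (18-24 °C):

```python
def internode_length_mm(day_temp_c, night_temp_c,
                        base_mm=30.0, slope_mm_per_degc=1.5):
    """Final internode length as a linear function of DIF = DT - NT.

    base_mm and slope_mm_per_degc are hypothetical illustration values;
    the thesis reports linearity only in the 18-24 degrees C range.
    """
    dif = day_temp_c - night_temp_c
    return base_mm + slope_mm_per_degc * dif

# Positive DIF (warm days, cool nights) predicts longer internodes:
print(internode_length_mm(24, 18))  # DIF = +6 -> 39.0
print(internode_length_mm(18, 24))  # DIF = -6 -> 21.0
```

With these illustrative coefficients, symmetric temperature regimes (DIF = 0) all predict the same length, which is exactly the claim the experiment tested.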
Chapter 3.2 describes an in-depth study on the sensitivity of several flower characteristics to temperature, with the aim of obtaining a better understanding of the underlying physiological processes of flower initiation and development. An attempt has been made to analyse the effects of temperature, applied at different phases of the cultivation period, on each of the studied flower characteristics. Plants were grown in glasshouse compartments at two constant temperatures (17 and 21 °C), and in growth chambers at 32 temperature combinations (from 15 to 24 °C). In the growth chamber experiment the temperature treatments were based upon a division of the cultivation period into three consecutive phases: from planting until the end of the long-day (LD) period (phase I; 18 and 24 °C), from the start of the short-day (SD) period until the visible terminal flower bud (phase II; 15, 18, 21 and 24 °C), and finally from the visible terminal flower bud until harvest (phase III; 15, 18, 21 and 24 °C). Of the characteristics investigated, only the flower position within the plant was independent of temperature. The number of flowers and flower buds per plant (NoF), individual flower size and colour (pink) were strongly affected by temperature. It is shown that the temperature effect was largely dependent on the cultivation phase and on the flower characteristic itself. In general, flower characteristics were less influenced by temperature applied during the LD period, compared to the SD period. A higher temperature increased NoF, mainly by increasing the number of flower buds. NoF was affected positively by temperature mainly during phase III, whereas individual flower size increased with temperature during phase II but decreased with temperature during phase III. Lower temperatures during phase III significantly enhanced flower colour intensity.
It was concluded that it is not possible to ascribe to each phase of the cultivation a common optimum temperature for all the flower quality aspects. Hence, to define the most suitable temperature in each cultivation phase it is necessary to decide which quality attribute is to be maximised.
The effects of assimilate availability on NoF, individual flower size and plant height are described in Chapter 4.1. Seven greenhouse experiments were conducted in different seasons using the cultivar 'Reagan Improved' (spray type). One extra experiment was carried out to extend this study to two other cultivars ('Goldy' and 'Lupo': 'santini' type), focusing on their response to plant density. Assimilate availability, measured as total plant dry mass (TDM, g plant⁻¹), increased with higher light intensity, higher CO₂ concentration, lower plant density or longer duration of the LD period. In contrast, variation in the growth conditions had hardly any effect on flower mass ratio (FMR), and only an increased duration of the LD period had a negative linear effect on the partitioning towards the flowers. The season also had an effect on chrysanthemum FMR: when planted in September (lowest light levels during the SD period), FMR was reduced compared to the other seasons. It is concluded that, within a wide range of growth conditions, chrysanthemum invests the additional assimilates diverted to the generative organs in increasing NoF rather than in increasing flower size. Individual flower size was only affected by assimilate availability when average daily incident photosynthetically active radiation (PAR) during the SD period was lower than 7.5 mol m⁻² d⁻¹, resulting in lighter and smaller flowers. When incident PAR during the SD period was higher than this threshold value, a constant flower size was observed for the fully open flowers (0.21 ± 0.10 g plant⁻¹ and 25 ± 2 cm² plant⁻¹). Excluding the positive linear effect of the duration of the LD period, assimilate availability had no relevant influence on plant height (< 10% increase). Irrespective of the growth conditions and season, a positive linear relationship between NoF and TDM was observed (NoF = 1.938·TDM − 2.34; R² = 0.90).
The parameters of this relationship are cultivar-specific. The generic nature of these results is discussed in this chapter. The functional relationships developed for predicting NoF and flower size were incorporated as 'modules' in a photosynthesis-driven growth model for cut chrysanthemum (Chapter 5.2).
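The reported NoF-TDM relationship can be applied directly. Only the coefficients come from the text (for 'Reagan Improved'); the example dry-mass value is arbitrary:

```python
def predict_nof(tdm_g_per_plant):
    """Predict number of flowers per plant from total plant dry mass
    (TDM, g per plant) using NoF = 1.938*TDM - 2.34 (R^2 = 0.90), the
    relationship reported for 'Reagan Improved'. The parameters are
    cultivar-specific, so this function only illustrates the formula."""
    return 1.938 * tdm_g_per_plant - 2.34

# A plant of 20 g dry mass is predicted to carry about 36 flowers:
print(round(predict_nof(20.0), 2))  # 36.42
```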
The influence of assimilate availability on flower size can also be tested by manipulating the sink-source ratio. This allows estimation of the potential flower size, which is defined as the size reached under conditions of non-limiting assimilate availability. In Chapter 4.2 the sink-source ratio was manipulated by flower bud removal (leaving one, two or four flowers, plus a control), by the presence or absence of axillary shoots, and by varying the light intensity. To investigate whether flower size depends on flower position within the stem, the apical terminal flower, the apical lateral flowers (from first-order axillary shoots) and the first flower located on a second-order axillary shoot were compared. The results indicated that in treatments where a limit on the number of flowers was imposed, individual flower dry mass and area increased significantly under conditions of lower competition for assimilates (for example, by decreasing the sink-source ratio through leaving fewer flowers per plant, removing axillary shoots or using supplementary assimilation light). The effect of flower position on flower size, in both the disbudded and control plants, was found to be important only when comparing flowers located on first-order axillary shoots with flowers on second-order axillary shoots, the latter being 40% smaller than the former. Monoflower plants without side shoots represented the potential flower size, and their flower was up to 2.4 times heavier and 76% larger in area than the control flower in 'Reagan Improved'. The 'santini' cultivars also produced their maximum flower size on the monoflower plants, but the increase in size relative to the control plants was cultivar-specific. Higher leaf starch content and lower specific leaf area (thicker leaves) were observed in the monoflower treatments, reflecting an abundance of assimilates.
Plant dry mass was only reduced at the lowest sink strength treatment (monoflower plants without axillary side shoots), whereas FMR showed a saturation response to the number of flowers per plant with a maximum value of 0.22.
The data obtained in the previous chapters were further explored to model and validate some external quality attributes. In Chapter 5.1 a process-based model was developed to describe internode elongation in time as a function of temperature. This model was calibrated with the data from Chapter 3.1, and it was built on three plausible physiological processes occurring in chrysanthemum elongation: (1) the accumulation of elongation requirements during the day, (2) elongation during the night using the accumulated elongation requirements, and (3) the limitation of the internode length due to low turgor pressure unable to counteract cell wall elasticity. Simulated and measured internode length showed good agreement (R² = 0.91). The presented model may be extended to include variable light conditions and other plant species that show elongation control by DIF.
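The three processes named for the Chapter 5.1 model suggest a simple day/night loop. The sketch below only illustrates that structure; all rates and the turgor-limited maximum are invented for the example, and this is not the calibrated model itself:

```python
def simulate_internode(days, day_gain=1.2, night_fraction=0.8, max_length=35.0):
    """Toy day/night elongation loop (all parameter values hypothetical):
    (1) accumulate elongation 'requirements' during the day,
    (2) elongate at night using part of the accumulated pool,
    (3) cap the final length where turgor can no longer drive extension."""
    length, pool = 0.0, 0.0
    for _ in range(days):
        pool += day_gain                           # (1) daytime accumulation
        growth = night_fraction * pool             # (2) night-time elongation
        pool -= growth
        length = min(length + growth, max_length)  # (3) turgor-limited maximum
    return length

# Length increases over time and saturates at the imposed maximum:
print(simulate_internode(5))
print(simulate_internode(60))  # 35.0
```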
In Chapter 5.2 a case study is presented on the interactive effects of the duration of the LD period (2, 9 and 16 days) and plant density (48, 64 and 80 plants m⁻²) on several external quality aspects. An existing photosynthesis-driven crop growth model for cut chrysanthemum (Lee et al., 2002) was validated and used to simulate total dry mass for the nine treatments. The possibility of a trade-off between the cultivation measures was analysed, while aiming to maintain a certain quality at harvest. It was concluded that a similar total plant fresh mass could be obtained using several combinations of plant density and number of LDs without affecting either NoF or individual flower size. This trade-off is, however, very dependent on the planting date of the crop, which emphasises the need for a crop simulation model as a decision support tool. Furthermore, special attention should be paid to plant height when choosing a combination of the cultivation measures studied, since height is strongly and positively influenced by the duration of the LD period. The modules developed in Chapter 4.1 for number of flowers and flower size were validated, and the measured values were accurately predicted.
The main achievements and limitations of this study are discussed in Chapter 6, and suggestions for future research are presented.
CSCLearning? : participation, learning, activities and knowledge construction in computer-supported collaborative learning in higher education
Veldhuis-Diermanse, A.E. - \ 2002
Wageningen University. Promotor(en): M. Mulder; P.R.J. Simons. - S.l. : S.n. - ISBN 9789058086181 - 228 p.
learning - participation - knowledge - construction - computers - teaching materials - information technology - communication - higher education
Background of the research
Recent developments in Information and Communication Technology (ICT) offer many opportunities to reorganise education according to constructivist principles. In contrast to more traditional education, education organised by constructivist principles is not teacher-centred but student-centred. Students can influence their education and are not merely consumers, as in traditional education. Students work in collaboration to solve tasks, and importance is attached to their own ideas; reproducing facts is becoming less important. Students are expected to be active and independent. They have to search for information by themselves and are expected to process this information critically. The accent is not on testing the reproduction of facts; much importance is attached to creating one's own ideas and theories. Chapter 2 of this PhD dissertation outlines the theoretical framework, which was based on constructivism and which determined the design and conduct of the research.
The assumption is that supporting education with ICT can increase the quality of learning. This PhD dissertation studies one specific ICT application, namely Computer-Supported Collaborative Learning (CSCL). In CSCL, students learn collaboratively by using a CSCL-system. A CSCL-system can be considered a discussion forum in which students can contribute messages and read each other's messages. A computer network connects the students; therefore, students can read all messages and react to all messages contributed to the discussion forum. Both synchronous and asynchronous systems are available. In synchronous systems, students can work from different places in real time. In asynchronous systems, work is independent of time and place. In the research described in this PhD dissertation, only asynchronous systems are used: students could work in the system at any moment.
The central idea of CSCL is that it supports shared knowledge building by the learners (Scardamalia & Bereiter, 1994). The principles of shared knowledge building and CSCL are consistent with a constructivist view of learning. From a constructivist point of view, learning is a dynamic process of knowledge construction. In this PhD dissertation, collaborative learning is described as a learning situation in which participating learners exchange ideas, experiences and information to negotiate about knowledge, in order to construct personal knowledge that serves as a basis for common understanding and a collective solution to a problem. Research shows that collaborative learning can be useful for reaching intellectual goals such as critical thinking or debating. People learn by interaction (Erkens, 1997; Gokhale, 1998; Kanselaar & Van der Linden, 1984; Lethinen, Hakkarainen, Lipponen, Rahikainen & Muukkonen, 2001; Newman, Johnson, Webb & Cochrane, 1999). Interaction between people is characteristic of collaboration, and people learn through interaction with each other (Biggs & Collis, 1982). Discussion is important because we will only 'give words to our thoughts' when we use these words to communicate with others, and this in turn may be related to our ability to clarify and remember ideas (Johnston, 1997); understanding is achieved through interaction (Veerman, 2000). Besides, CSCL seems to be an effective tool because students have to write down their ideas. Writing can be seen as the most important tool of thinking, and it has a crucial significance in the explication and articulation of one's conceptions (Bereiter & Scardamalia, 1987; Rijlaarsdam & Couzijn, 2000; Tynjälä, 1999).
The literature shows a reasonable number of published experiments indicating positive learning effects when CSCL-systems have been used in education (De Laat & De Jong, 2001; Koschmann, Feltovich, Myers & Barrows, 1997; Lethinen et al., 2001; Lipponen, 1999; Salovaara, 1999; Tynjäla, 1999). Despite developments in research and educational practice, much is still unclear about students' learning processes in CSCL. It is unknown how students use a CSCL-system, which learning activities they use and how CSCL supports students' learning. The aim of this research is to gain insight into students' learning processes in CSCL, focused on both the amount and the quality of knowledge construction. The underlying assumption is that understanding students' learning processes will be helpful in using CSCL effectively in education. Inspired by this research problem, the following main research questions were addressed (chapter 1):
1) How can students' learning processes in an asynchronous CSCL-system be characterised in terms of participation and interaction?
2) How can students' learning processes in an asynchronous CSCL-system be characterised in terms of cognitive, affective and metacognitive learning activities?
3) Do students construct knowledge and what is the quality of that knowledge constructed by students in an asynchronous CSCL-system?
4) What are the effects of moderating a CSCL-discussion on students' learning?
To find an answer to our research questions, first a review study (chapter 3) was carried out to find out whether a method was available to analyse students' activities in a CSCL-system in terms of participation, interaction, types of learning activities, and the amount and quality of knowledge construction. The reviewed methods supplied many ideas we could use to develop a new method and helped us clarify our view on analysing CSCL data. However, studying a number of methods did not result in finding a workable, ready-made method to answer our research questions. Therefore, a new method was developed on the basis of the theoretical framework outlined in chapter 2, the ideas supplied by the reviewed methods described in chapter 3, and our experiences with CSCL in pilot projects. Chapter 4 describes the method used to analyse students' learning processes in this PhD dissertation. The method consists of three steps (see Figure I):
Figure I: Three steps of the method used to analyse students' contributions in a CSCL-system.
The method consists of three steps: (1) Analysing students' participation and interaction, (2) analysing cognitive, affective, and metacognitive learning activities, and (3) assessing the amount and quality of knowledge constructed by students and expressed in written contributions. Students' participation was operationalised as the number of written notes (new notes or build-on notes) and number of different read notes. To indicate interaction, density was calculated twice, based on read notes as well as on linked notes. Density describes the general level of linkage among the students in a discourse. In other words, density refers to the extent of interaction between students.
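Density as used here is a standard social-network measure: the fraction of possible directed ties between students that actually occur. The summary does not give the thesis's exact formula, so the following is a conventional sketch with invented example data:

```python
def density(n_students, links):
    """Share of the possible directed student-to-student ties that are present.

    links is a collection of (from_student, to_student) pairs, e.g. derived
    from who read or linked to whose notes; self-ties are excluded, as in
    standard social-network density (ties / (n * (n - 1))).
    """
    ties = {(a, b) for a, b in links if a != b}
    possible = n_students * (n_students - 1)
    return len(ties) / possible

# 4 students, 6 of the 12 possible directed ties realised:
reads = {(1, 2), (1, 3), (2, 1), (3, 4), (4, 1), (4, 2)}
print(density(4, reads))  # 0.5
```

Computing it once over read-note pairs and once over linked-note pairs gives the two density values the method describes.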
The second step of the method concerns the use of learning activities. The classification of learning activities of Vermunt (1992) was used as a frame to create a coding scheme divided into (1) cognitive, (2) affective, and (3) metacognitive learning activities. Next, these main categories were divided into several subcategories. The main category 'cognitive learning activities' consists of three subcategories: (a) debating, (b) using external information and experiences, and (c) linking or repeating internal information. Debating refers to the process of negotiation, critical thinking, asking questions and discussing subjects with other participants in the database. Using external information and experiences was included in the scheme because in an asynchronous CSCL-system students have time to search for information to support their ideas with explanations and to elaborate their questions. Information can be used to evaluate contributions thoroughly. Types of information contributed to the CSCL-system are, for example, articles found on the Internet, notes made in a lecture, a summary of a book chapter, results of running a specific tool, or a summary of another discussion. The third subcategory is linking or repeating internal information. Internal information concerns information found in the discussion view students are working in. Referring to and linking notes were considered important because they increase coherence in the database; it was assumed that more coherence between notes means more interaction between students. 'Affective learning activities' refers to students' feelings expressed in their notes while working in the learning environment. An affective category was included in the coding scheme to provide information about the kinds of feelings and was expected to be useful in interpreting the nature of the interactions between students.
In this coding scheme, affective learning activities are not related to the content of the subject matter; they are non-task-related. The category 'metacognitive learning activities' consists of three subcategories: (a) planning, (b) keeping clarity, and (c) monitoring. Planning refers to practical issues, such as making appointments, subdividing parts of the task, or appointing a group member as chairperson, and to content-related issues, such as choosing a definition after discussing a concept or deciding to run a specific tool. Characteristic of these content-related decisions is their effect on the process of the task performance. The subcategory keeping clarity refers to messages written to keep the structure and the content of the notes clear. The last subcategory of metacognitive learning activities is monitoring: while conducting the task, students keep track of the learning process. Within each of these subcategories, a number of codes are distinguished.
Because the third main research question could not be answered by means of the first coding scheme, a third step had to be added to the process of analysis. Knowledge construction was first operationalised as adding, elaborating, and evaluating ideas, summarising or evaluating external information, and linking different facts and ideas. In line with this definition, six codes from the first coding scheme were selected to indicate the amount of knowledge constructed. To measure quality, a second coding scheme was developed on the basis of the Structure of the Observed Learning Outcome (SOLO) taxonomy of Biggs and Collis (1982). This scheme consists of four levels of quality, increasing from level D to level A. Both coding schemes were validated by calculating Cohen's kappa to determine their inter-rater reliability. The first coding scheme (step 2) was applied to units of meaning; in other words, several types of learning activities could be coded within one message. The second coding scheme (step 3) was applied to complete messages: a contribution was assessed in its entirety. The coding schemes were developed to understand students' learning processes, and standards were formulated to judge those processes and to compare the results of different studies.
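Inter-rater reliability of the kind reported here is commonly computed with Cohen's kappa. A minimal sketch follows; the category labels and ratings are invented, not data from the study.

```python
# Cohen's kappa: agreement between two raters corrected for chance agreement.
# The codes and ratings below are hypothetical examples.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Kappa = (observed agreement - expected agreement) / (1 - expected)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    expected = sum(counts1[c] * counts2[c] for c in counts1) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters code the same six units of meaning.
rater1 = ["debating", "external", "debating", "planning", "debating", "planning"]
rater2 = ["debating", "external", "debating", "planning", "external", "planning"]
print(round(cohens_kappa(rater1, rater2), 2))  # -> 0.75
```

A kappa of 1.0 would mean perfect agreement; values well above chance (0) are usually taken to indicate a reliable coding scheme.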
From 1998 until 2001, CSCL was implemented in six university courses. Three studies were conducted at Wageningen University, two at the University of Nijmegen, and in one study we used data collected in a course organised at the University of Toronto. All data were analysed by means of the method developed. Besides sharing a university context, the six studies were comparable in several other respects. All studies took place as part of a real course in which students had to work collaboratively on complex tasks using a CSCL-system, and all were scheduled in the final phase of the educational programmes. Another similarity was the CSCL-system used, namely Knowledge Forum. In none of the studies were students bound by rules concerning the use of Knowledge Forum: they were expected to log in regularly, but were not obliged to read all notes or to write a certain number of notes. However, there were also differences between the studies.
In some studies the course was compulsory; in others it was optional. The period of CSCL use varied substantially (2 to 17 weeks), as did the number of hours students were expected to spend in the CSCL-system weekly (2 to 20 hours). The courses also differed in how learning was assessed: sometimes participation in Knowledge Forum was graded, and sometimes only a final test determined the course grade. The discussions analysed in studies 1, 2, 3, and 4 (chapter 5) differ from those analysed in studies 5 and 6 (chapter 6) in that they were not moderated; students in studies 1-4 were self-regulated. In studies 5 and 6, a teacher actively wrote contributions aimed at stimulating collaboration between students or triggering critical thinking. Beforehand, the teachers were instructed on how to moderate half of the students in their courses: guidelines were discussed and notes were available to try out the guidelines.
Each study in chapter 5 first answers the main research questions. Besides these overall research questions, three more specific questions were formulated, related to specific characteristics of the educational tasks and settings in studies 1, 2, and 3: the sub-question in study 1 concerns group size, the sub-question in study 2 concerns having a specific discussion role, and the sub-question in study 3 concerns students' learning style. Chapter 6 answers the fourth main research question. As in studies 1-4, students' learning processes were analysed first. Additionally, the moderators' activities were analysed to survey how the moderation was carried out: types of actions, percentage of notes read, number of notes written, the percentage of students to whom the moderator directed notes, response time, and the relation between the number of student and moderator notes contributed per week.
Table A shows the main results of the six studies: mean participation per student, density of interaction, mean number of learning activities used per student, and average knowledge construction per group.
Table A: Mean participation per student, density of interaction, mean number of learning activities used per student, and average knowledge construction per group in the six studies
* In this study, interaction was not calculated because a teacher intervened in one half of the group and, moreover, interaction was not part of the research question in this study.
It is striking that the results of the different studies vary enormously. The only constant across studies is that students read many more notes (passive participation) than they wrote (active participation). Concerning the use of cognitive, affective, and metacognitive learning activities, we see large differences. Except in study 6, students used affective learning activities least in every study. Students in studies 2, 3, 4, and 6 used more cognitive than metacognitive learning activities; in studies 1 and 5, the reverse held. In four of the six studies, students constructed little knowledge; in the remaining studies, a reasonable amount of knowledge was constructed. The quality of the knowledge constructed varied from low to high, but in most of the studies it was assessed as reasonable.
Based on the results, it was not possible to derive a single pattern of how students learn in a CSCL-system. As in more traditional settings, students learned in their own way. In study 3, students were asked to fill in part of the Inventory of Learning Styles (Vermunt, 1992) to explore a possible relationship between students' learning style and their learning processes in a CSCL-system. No correlations were found between students' learning style and their participation. Between students' learning style and their learning activities, a few significant correlations were found: to keep the discussion clear and to monitor the task, it appears helpful to have students with an application-directed or meaning-directed learning style in the group. However, because most students did not have a clear-cut learning style, a Pearson correlation test was also executed at the level of the separate scales, where some interesting correlations were found. It appears fruitful to stimulate a positive attitude towards collaborative learning. Another correlation was found between scores on the scale 'deep cognitive processing' and the amount of knowledge constructed: students who scored high on this scale constructed more knowledge than students who scored low on it. Additionally, students who lack regulating strategies are likely to have problems working with CSCL.
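The scale-level analysis mentioned above rests on the Pearson product-moment correlation. As a hedged illustration (the scores below are invented, not data from study 3):

```python
# Pearson correlation between two equal-length lists of scores.
# The example scores are hypothetical.

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical scores: 'deep cognitive processing' scale vs. number of
# knowledge-construction codes per student.
deep_processing = [1, 2, 3, 4, 5]
knowledge_codes = [2, 1, 4, 3, 5]
print(pearson_r(deep_processing, knowledge_codes))  # -> 0.8
```

A positive r, as sketched here, matches the reported finding that higher deep-processing scores went together with more knowledge construction.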
Among others, Webb and Sullivan Palinscar (1996) and Dillenbourg (1999) wrote about the complexity of educational contexts. They argued that, because of the multiple interactions between factors such as group size and task characteristics, it is very difficult to set up initial conditions that guarantee the effectiveness of collaborative learning. This research also confirmed how complex it is to set up a successful course. As mentioned above, we examined one factor in depth, namely moderating discussions. The results give reason to assume that students who are moderated critically construct, on average, more and qualitatively better knowledge than self-regulated students. Critical moderation was operationalised by, among other things, asking questions, checking answers, and contributing statements. Critical moderation prompted students to deep learning, which means that students interact critically with the learning content, relate contributions within the discussion or to information found in other sources, use organising principles to integrate ideas, and examine the logic of the arguments used (MacFarlane Report, 1992). Although an effect was found on the use of cognitive learning activities and on knowledge construction, the results indicate that the quality of the teacher interventions determines their success. It is difficult to instruct teachers to moderate asynchronous discussions at short notice: although guidelines can be given and trained, teachers must become familiar with CSCL and with moderating discussions. Factors such as using the right tone, moderating in line with one's personal way of teaching, real involvement in the course, and pleasant contact with students are important for moderation to succeed.
Another factor that was manipulated is solving a problem from a certain perspective. In study 2, students conducted two tasks. In contrast to the first task, students working on the second task each played a specific role (for example economist, tourist, or farmer), so that they worked in a multidisciplinary team. Having different information and contradictory interests stimulated both active and passive participation, which resulted in more knowledge construction.
Chapter 7 summarises our most important findings and discusses the results from both theoretical and methodological perspectives. Among other things, comments are made on the definition of knowledge construction used, the standards used to assess the amount and quality of knowledge construction, and the number of students participating in the studied courses.
In our opinion, the method developed proved useful for analysing students' learning processes in a CSCL-system: the analyses increased our insight into students' learning processes and helped us to survey the activities students use when taught by CSCL. Beforehand, we assumed that CSCL could be useful to support students' learning processes, especially in higher education. After carrying out the research, we still believe CSCL offers opportunities to support the process of knowledge construction. However, we think it is time to consider how to increase the amount and quality of the knowledge students construct in a CSCL-system, because students do not make optimal use of its opportunities. It is true that students constructed knowledge, but the amount was often small and the quality left something to be desired. Thus, we can conclude that CSCL can indeed lead to learning, but we should not expect miracles from it. Besides teacher interventions and working in multidisciplinary teams, the following aspects appear to be important for the successful use of CSCL in education:
As with other educational tools, the use of CSCL must be considered very thoroughly. Although this remark seems obvious, it is an important finding. Nowadays, people often assume too easily that the use of ICT stimulates students to learn and that this automatically leads to positive results; our research shows that this assumption is far from reality. When considering the use of CSCL in education, important questions include: What is the aim of the course? Which task is needed to reach that aim, and is that task appropriate for a CSCL-system? Is it desirable that students learn collaboratively in this course? To what extent does the task have to be pre-structured? Are the subject matter and the task suitable for negotiating about knowledge? Do students have experience with CSCL and, if not, can we train them at short notice? Is a user-friendly CSCL-system available? Are enough computers available? Do we prefer students working at a distance or in one room? How much time do we expect students to work on the task? Do we assess students' participation in the CSCL-system and/or the content of their contributions? Do we impose rules on students? Finally, it is important to consider whether moderation is desirable and, if so, how to moderate discussions: what is the aim of moderating a discussion? Do you want to stimulate students to participate, or do you want to increase their critical thinking and knowledge construction?
Besides conclusions and practical implications for using CSCL effectively, chapter 7 gives suggestions for future research. One suggested line of research is to systematically analyse the relation between the conditions under which a CSCL-system is used and the depth of learning. Such conditions include, for example, additional versus integral use of the CSCL-system, different types of CSCL-systems, and different types of tasks. It would also be interesting to analyse the extent to which acquired knowledge and skills transfer to similar task situations, and to analyse participation, interaction, use of learning activities, and knowledge construction during the course as well. For these purposes, the instrument developed and used in this PhD dissertation can be reused in further research, elaborated or not. When repeating the research, attention must be paid to the standard used, to check whether students construct little knowledge in other settings too. If so, the core of further research must concern the question of how to increase knowledge construction in CSCL. It would also be wise to involve students' experiences more intensively.
Delocalization does not always stabilize : a quantum chemical analysis of α-substituent effects on 54 alkyl and vinyl cations
Alem, K. van; Lodder, G. ; Zuilhof, H. - \ 2002
The Journal of Physical Chemistry Part A: Molecules, Spectroscopy, Kinetics, Environment, & General Theory 106 (2002)44. - ISSN 1089-5639 - p. 10681 - 10690.
The effects of α-substituents on alkyl and vinyl cations are studied using high-level ab initio calculations. The geometries, stabilities, and electronic properties of 27 alkyl cations and 27 vinyl cations with α-substituents are computed at the B3LYP/6-311 G(d,p), MP2/6-311 G(d,p), and CBS-Q levels. The substituents studied vary from strongly destabilizing (e.g., -CN and -CF3) to strongly stabilizing (e.g., -OSi(CH3)3 and -NH2). The calculations show that for vinyl cations the stabilization provided by the α-substituents is on average 4 kcal/mol larger than for the alkyl ones, a result of the intrinsically lower stability of vinyl cations (on average 17 kcal/mol). However, strong inductively donating or withdrawing α-substituents show different behavior. Because of the high amount of s character in the carbon-α-substituent bond in the vinyl cations (sp hybridized), more pronounced effects are found than in the corresponding alkyl cations, leading to lower stabilization for inductively withdrawing α-substituents and higher stabilization for inductively donating ones. Thus, distinct effects of α-substituents on the stabilization of the cations are observed. However, no correlation is found between NBO-computed charge increases or bond-order increases, at either the carbocationic center or the α-substituent, and the stability provided by an α-substituent. This demonstrates the conceptual difference between stabilizing and electron-donating effects. Only for the C-H hyperconjugative effect in the vinyl systems is a correlation with the computed reaction enthalpies observed. Finally, the effect of leaving-group variation is studied: changing the leaving group from H to Cl yields geminal effects ranging from 7 kcal/mol destabilization to 9 kcal/mol stabilization of the neutral precursor.
Significance of combined nutritional and morphological precaecal parameters for feed evaluations in non-ruminants
Leeuwen, P. van - \ 2002
Wageningen University. Promotor(en): M.W.A. Verstegen; J.M.V.M. Mouwen. - S.l. : S.n. - ISBN 9789058086426 - 151 p.
niet-herkauwers - spijsvertering - spijsverteringsstelsel - voederwaardering - voedingsfysiologie - morfologie - voer - samenstelling - darmslijmvlies - nonruminants - digestion - digestive system - feed evaluation - nutrition physiology - morphology - feeds - composition - intestinal mucosa
In this thesis the hypothesis is tested that the nutritional evaluation of dietary formulations in non-ruminants requires both functional-nutritional and functional-morphological parameters. The functional-nutritional parameters provide data on the outcome of the digestive process. Additionally, the functional-morphological parameters provide information about the effects of feed components on the small intestinal mucosa.
Part I (chapters 2 - 4) considers the apparent digestibility as a functional-nutritional parameter for feed evaluation in pigs and roosters, whereas Part II (chapters 5 - 8) presents studies with functional-morphological parameters of the small intestinal mucosa of chickens, calves and piglets in relation to feed composition and additives.
FUNCTIONAL-NUTRITIONAL PARAMETERS (PART I)
The protein and amino acids that disappear in the large intestine of pigs are not available for body maintenance and production (Zebrowska et al., 1978). Degradation of protein in the large intestine is mainly fermentative, resulting in non-amino-acid N end products that are not available to the animal. This implies that precaecal digestion, rather than whole-tract digestion, provides a more accurate parameter for estimating protein availability (Dierick et al., 1987). The in vivo determination of precaecal protein digestion relies on quantifying the ratio of the amount of protein that disappears proximal to the caecum to the amount ingested. In digestibility experiments, the diets and the digesta collected immediately after the ileum are analysed for their protein content. However, digesta contain not only undigested dietary protein but also protein of endogenous origin; therefore, this ratio is determined as the apparent digestibility. Apparent digestibility is a quantitative parameter providing information on the digestive process, measured as nutrient disappearance at a defined site.
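The apparent digestibility ratio described here reduces to a single expression. A minimal sketch, with invented example amounts:

```python
# Apparent digestibility coefficient: the fraction of an ingested nutrient
# that disappears proximal to the collection site. Example values are
# hypothetical, not data from the thesis.

def apparent_digestibility(ingested_g, recovered_g):
    """(ingested - recovered at the collection site) / ingested."""
    return (ingested_g - recovered_g) / ingested_g

# Hypothetical example: 200 g crude protein ingested, 40 g recovered in
# ileal digesta (which includes protein of endogenous origin; hence the
# coefficient is only 'apparent').
print(apparent_digestibility(200.0, 40.0))  # -> 0.8
```

Because the recovered fraction includes endogenous protein, the true digestibility would be somewhat higher than this apparent coefficient.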
Quantitative studies of the digestive processes in the small intestine require reproducible collection of digesta from the small intestine. Current procedures can be divided into techniques in which digesta are collected after sacrificing the animals and techniques based on a surgical intervention. Collection of digesta after euthanasia is often used in experiments with broilers (Ravindran et al., 1999); this method, however, requires a large number of animals and is therefore not commonly used in pigs. Different surgical techniques for precaecal digesta collection are described in the literature. It is generally concluded that flexible (silicone) rubber is preferable to rigid materials. Regarding surgical techniques for intestinal studies in pigs, there is a consensus that simple T-shaped cannulae in the ileum and the ileo-rectal anastomosis (IRA) may not provide representative samples of digesta and/or may interfere with the animal's physiology (Köhler, 1992), whereas collection of digesta from re-entrant cannulae is considered to be hampered by technical difficulties (van Leeuwen et al., 1987).
In Part I of the thesis, surgical techniques and procedures for digesta collection in pigs and roosters are described, and results of digestibility determinations are given.
Chapter 2 describes a surgical procedure called the Post Valve T-Caecum (PVTC) cannulation, which is considered an alternative to the existing digesta collection methods. The prerequisites of this technique are that it minimally hinders the animal's physiology, that digesta samples are representative, and that the surgical technique is acceptable in terms of animal welfare. The PVTC technique relies on partial caecectomy followed by placement of a wide, flexible silicone T-cannula in the caecum. A considerable advantage of this technique is that the region of the intestine to be studied is not itself surgically treated. Gargallo and Zimmerman (1981) studied the possible effects of caecectomy on digestion in pigs; they observed small effects on the overall digestibility of cellulose and nitrogen and concluded that the absence of the caecum in pigs did not significantly alter digestive function. Darragh and Hodgkinson (2000) commented that the PVTC cannulation procedure appears to be the preferred method for the collection of ileal digesta.
Chapter 3 describes digesta collection procedures and their implications when using PVTC-cannulated pigs. Collection of digesta after PVTC cannulation necessitates the use of an inert marker in the diets to quantify the amounts of nutrients present in ileal digesta for the determination of diet digestibility. Two experiments were conducted to evaluate chromic oxide (Cr2O3) and HCl-insoluble ash as digestibility markers by determining the apparent digestibility of dry matter (DM) and crude protein (CP). In addition, the effects of age (i.e., three different body weight (BW) classes) on apparent ileal DM and CP digestibilities were studied. In experiment 1, barrows were fitted with PVTC cannulae to determine the apparent ileal DM and CP digestibility of a wheat gluten/wheat bran ration and a soybean meal ration. Immediately after the morning feeding, ileal digesta were collected hourly for a period of 12 hours, after which the nitrogen (N) and marker contents of these samples were determined. The postprandial Cr/N ratio was more constant than the HCl-insoluble ash/N ratio; chromic oxide is therefore considered more suitable as a marker than HCl-insoluble ash when the apparent digestibility of protein is the parameter of interest. In experiment 2, apparent ileal DM and CP digestibilities were determined for 18 rations using twelve barrows fitted with PVTC cannulae (BW 40-100 kg). The protein sources for these rations were derived from feedstuffs of different origin. The apparent precaecal digestibility depended significantly (P < 0.05) on the marker in four rations for DM and in three rations for CP; digestibility coefficients were not systematically higher or lower for either marker. Besides these methodological aspects, a slight increase in apparent ileal CP digestibility was observed with increasing body weight.
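With an inert marker such as chromic oxide, the digestibility coefficient follows from the marker and nutrient concentrations in diet and digesta. A sketch using the standard index-marker formula; the concentrations below are invented:

```python
# Apparent digestibility from an indigestible marker (e.g. Cr2O3):
# 1 - (marker in diet / marker in digesta) * (nutrient in digesta / nutrient in diet).
# All concentrations must share one unit, e.g. g/kg dry matter.
# The example concentrations are hypothetical.

def marker_digestibility(nutrient_diet, marker_diet, nutrient_digesta, marker_digesta):
    """Index-marker estimate of the apparent digestibility coefficient."""
    return 1.0 - (marker_diet / marker_digesta) * (nutrient_digesta / nutrient_diet)

# Hypothetical concentrations (g/kg DM): CP and Cr2O3 in diet and ileal digesta.
print(marker_digestibility(180.0, 2.5, 90.0, 10.0))  # -> 0.875
```

Because the indigestible marker concentrates as nutrients disappear, no total collection of digesta is needed, which is exactly why spot samples from a cannula suffice.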
Chapter 4 examines the precaecal digestion of protein and amino acids (AA) in roosters. As in pigs, undigested AA that reach the caeca are deaminated by the microflora, and the end products have no nutritional value (McNab, 1989). Moreover, Parsons (1986) observed a closer relationship between amino acid availability measured in chick growth assays and digestibility determined in caecectomised rather than intact birds. This means that, in poultry, digestion in the distal region of the intestines, more specifically the caeca, is mainly fermentative, and that the AA synthesized in, or disappearing from, the caeca are not available for protein synthesis by the animal. Therefore, a procedure for ileostomy in adult roosters using flexible silicone cannulae is described. Apparent ileal digestibility coefficients for dry matter (aDC DM), crude protein (aDC CP), and amino acids (aDC AA) were determined for diets formulated with maize/wheat gluten meal, wheat gluten meal, faba beans, lupins, soybean meal, and casein as the main protein sources. These determinations were performed in ileostomised roosters fitted with silicone cannulae. In addition, the aDC data determined with roosters (present study) were correlated with previously published aDC data for the same diets determined with pigs (van Leeuwen et al., 1996a, 1996b).
The ileal aDC CP and aDC AA in roosters differed significantly (P < 0.05) between diets. Across diets, significant linear relationships were found between the digestibility data determined with roosters and those determined with pigs; these relationships explained 85% of the variation in ileal aDC CP between the six diets evaluated. For the individual amino acids, 62-90% of the variation in ileal aDC AA between roosters and pigs could be explained, with the exception of the aDC of arginine. The standard errors of prediction of the models for aDC AA in roosters based on aDC AA in pigs were < 0.04 percentage units. Although more work is needed to validate these correlations, it is likely that this approach can be used to predict aDC values for roosters from values determined in pigs. The results showed similar levels of digestibility coefficients for protein and amino acids in both species. This means that, despite the anatomical differences between pigs and poultry (Moran Jr., 1982), the differences in apparent precaecal digestibility of CP and AA were limited: the two species, with their differences in intestinal structures and in the amounts and activity of endogenous components, were both capable of digesting protein to a similar extent, suggesting a similar precaecal digestive capacity.
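The kind of pig-to-rooster prediction model summarised above is an ordinary least-squares line. A sketch with invented digestibility coefficients (not the thesis data):

```python
# Ordinary least-squares fit of rooster aDC values on pig aDC values.
# The six diet values below are hypothetical.

def linear_fit(x, y):
    """Least-squares fit y = a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b, sxy ** 2 / (sxx * syy)

# Hypothetical ileal aDC CP values for six diets in pigs and roosters.
pig = [0.70, 0.75, 0.80, 0.82, 0.85, 0.90]
rooster = [0.69, 0.71, 0.78, 0.77, 0.83, 0.85]
a, b, r2 = linear_fit(pig, rooster)
print(round(a, 2), round(b, 2), round(r2, 2))
```

A predicted rooster value is then a + b * pig_value, and r_squared corresponds to the proportion of variance explained (85% for aDC CP in the text).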
Regarding methodological aspects, the study showed that the aDC CP and aDC AA for soybean meal determined in the present experiment with the cannulated roosters were comparable to literature data obtained with adult caecectomised roosters. Secondly, roosters fitted with cannulae after ileostomy can be used for periods of up to a year after surgery.
FUNCTIONAL-MORPHOLOGICAL PARAMETERS (PART II)
The qualitative functional-morphological parameters of the small intestinal mucosa are examined in chapters 5-8.
Chapter 5 considers the morphology of the mucosal surface of the small intestine of broilers and its relationship with age, diet formulation, small intestinal microflora, and growth performance. The villi of the small intestine were examined with a dissecting microscope, and the surface was described using a morphological scoring scale. As illustrated by pictures, zigzag-oriented ridges, which seem to be characteristic of poultry, were observed in the broilers.
The results showed that in clinically healthy broilers, the shape and orientation of the small intestinal villi were related to the age of the animal and the intestinal location. Effects of dietary composition and microflora were also demonstrated. Fermentable pectin as a dietary component decreased the zigzag villus orientation and reduced performance. Addition of glutamine to a soybean diet limited the decrease in zigzag villus orientation caused by pectin and had a beneficial effect on performance. An oral challenge with a non-virulent Salmonella typhimurium increased the effects of dietary pectin on small intestinal morphology and performance.
Chapter 6 contains a study of the functional-morphological effects of virginiamycin (VM), used as a feed additive in piglets. The objective was to determine the effects of VM on morphological parameters of the small intestinal mucosa, animal growth, and feed conversion ratio (feed intake/weight gain) in piglets. The study comprised three trials: two experiments on the morphological effects of VM on the small intestinal mucosa and a third, performance experiment. Each experiment comprised a control group fed a diet without VM and a VM group fed a diet containing 40 mg/kg VM. In the first experiment, the piglets were kept individually and an oral dose of K88-positive enterotoxigenic Escherichia (E.) coli (ETEC) was given as a sub-clinical challenge. The housing conditions in experiments 2 and 3 were according to practical standards. The results showed that VM decreased the feed conversion ratio and increased villus heights in conventionally kept piglets. Crypt depths were decreased in the individually kept piglets seven days after the ETEC challenge. Corpet (1999) and Anderson et al. (2000) reviewed the mode of action of antibiotics as feed additives and suggested that antibiotics suppress bacterial activity and the decomposition of bile salts, resulting in a more slender villus structure. Increased villus heights indicate an increased mucosal surface and absorption capacity, which is in agreement with the improved precaecal nutrient digestibility of diets with VM observed by Decuypere et al. (1991). The difference in morphological response to VM illustrates the variation in morphological characteristics between clinically healthy piglets.
Chapter 7 investigates the effect of adding a combination of two bioactive proteins, the lactoperoxidase system (LP-s) and lactoferrin (LF), to a milk replacer diet. This study examined the severity of diarrhoea, the morphology of the small intestinal mucosa, and the microbiology of digesta and faeces in young weaned calves.
Following weaning, the incidence of diarrhoea and mortality in calves is usually higher than in unweaned calves (Reynolds et al., 1981). In conventional calf production, antibiotics are added to the milk replacer to reduce gastrointestinal disorders caused by pathogenic bacteria in the gut. Recent legislation restricts the addition of antibiotics to diets for calves (EC, 1998) because of possible repercussions on human health (Van den Boogaard and Stobberingh, 1996).
LP and LF are both specific protein constituents of colostrum. Because of their thermal instability, these naturally occurring proteins are probably at least partly inactivated during the processing of milk, and the remaining levels are not constant. Moreover, in dairy milk replacers a significant part of the protein is of vegetable origin and therefore lacks LP and LF.
The experiment with calves covered the first two weeks post-weaning. One group received a control diet and a second group a diet with LP-s/LF. The faecal consistency of the LP-s/LF group, as assessed by faecal consistency scores, was significantly better than that of the control group. The numbers of E. coli in the faeces of the LP-s/LF group were significantly lower, and the villi in the distal jejunum were longer and more finger-shaped, than in the control group. These findings show that the effects of LP-s/LF are located mainly in the distal region of the gastrointestinal tract. Reiter and Perraudin (1991) also showed positive effects of LP-s on live-weight change in field trials. Still et al. (1989) studied the effects of a combination of LP-s and LF on the severity of diarrhoea in calves for a period of 0 to 6 days after an experimental E. coli infection and concluded that LP-s/LF had preventive and curative effects after the E. coli challenge. The results of the present experiment agree with their observations.
Chapter 8 considers the functional-morphological implications of condensed tannins in faba beans (Vicia faba L.). The nutritional value of faba beans is limited by the presence of these tannins (Marquardt et al., 1977). Jansman et al. (1993) studied the effects of tannins on the apparent faecal digestibility of a control diet, a diet containing hulls of white-flowering, low-tannin faba beans, and a diet with hulls of coloured-flowering, high-tannin faba beans. They concluded that the whole-tract crude protein digestibility of the high-tannin diet was significantly (P < 0.05) lower than that of the control and low-tannin diets. This effect was partly explained by an increase in the endogenous fraction in the faeces and by an increase in undigested tannin-feed complexes. In the present study, samples of the proximal, mid, and distal jejunum were additionally investigated histologically and biochemically. The histological differences between the diets were not significant; however, differences in aminopeptidase activity were observed in the proximal small intestine. The aminopeptidase activity of the high-tannin group was significantly (P < 0.05) depressed compared with the control and low-tannin groups. Furthermore, within the three groups a correlation was calculated between aminopeptidase activity, as a functional parameter of the brush border, and the apparent faecal digestibility of CP, as a quantitative nutritional characteristic. No significant correlations were found between apparent CP digestibility and aminopeptidase activity in the animals fed the control or low-tannin diets, but when the high-tannin diet was fed, the correlation was significantly positive (P < 0.002; R = 0.91). This correlation indicates that a decreased aminopeptidase activity of the small intestinal mucosa explains, at least in part, the effects of tannins on CP digestibility.