Irrigation and water harvesting for croplands

The difference between irrigation and water harvesting lies in the degree of control over the supply of water in relation to crop demands. An ideal irrigation system makes good the deficit between the water demand of the crop and the available precipitation. The simplest water harvesting system catches runoff from an area adjacent to the cropped field and channels it to the cropped area, effectively increasing the water available to the crop during rainfall events and shortly afterwards. Systems between these extremes differ in the degree of buffering that allows collected water to be distributed according to crop demand rather than immediately during and after rainfall events. For pure irrigation with unlimited supply, from either groundwater or reservoirs, the irrigation requirement on any day, H, can be described as meeting a specified fraction of the crop demand:

H = k · max(WUE · PE − r, 0)
where PE is the potential evapotranspiration, WUE is the water use efficiency of the crop at its current growth stage, r is the daily rainfall and k is the 'irrigation fraction' (0 ≤ k ≤ 1).
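This daily requirement can be sketched in a few lines of code. The function and parameter names below are illustrative, not from a published model, and the sketch assumes the daily crop demand is WUE · PE and that no irrigation is needed when rainfall exceeds demand.

```python
def irrigation_requirement(pe: float, wue: float, rain: float, k: float) -> float:
    """Irrigation H (mm) needed today to meet fraction k of the unmet crop demand.

    pe   : potential evapotranspiration (mm/day)
    wue  : water use efficiency factor of the crop at its current growth stage
    rain : daily rainfall (mm)
    k    : irrigation fraction, 0 <= k <= 1
    """
    deficit = wue * pe - rain        # unmet crop water demand (mm)
    return k * max(deficit, 0.0)     # no irrigation when rain covers demand
```

For example, with PE = 5 mm, WUE = 0.8, 1 mm of rain and k = 1, the requirement is 0.8 × 5 − 1 = 3 mm.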

For pure water harvesting from local sources, the water added to a cropped area can be described by the ratio, β, of bare (crusted) collecting area, with a storage capacity hb, to a cropped area with average storage capacity hc. The total runoff, j, spread over the cropped area from a rainfall event of size r can then be estimated as:

j = β · max(r − hb, 0)
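A minimal sketch of this runoff estimate, assuming the harvested water is j = β · max(r − hb, 0); the function name is illustrative.

```python
def harvested_water(rain: float, beta: float, hb: float) -> float:
    """Runoff j (mm) spread over the cropped area from one rainfall event.

    rain : event rainfall (mm)
    beta : ratio of bare (crusted) collecting area to cropped area
    hb   : storage capacity of the bare collecting area (mm)
    """
    return beta * max(rain - hb, 0.0)   # no runoff until hb is exceeded
```

With β = 2, hb = 5 mm and a 20 mm event, the cropped area receives an extra 2 × 15 = 30 mm.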
For intermediate systems, where water harvesting is used to fill a storage reservoir, the daily reservoir filling rate is given by the term β · max(r − hb, 0). Summing this over time, we must solve

Σt k · max(WUE · PE − r, 0) ≤ Σt β · max(r − hb, 0)

to determine the maximum irrigation fraction, k, that can be supplied over the growing season.
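The maximum feasible irrigation fraction can be found numerically by simulating the reservoir day by day and bisecting on k. This is a sketch under the same assumptions as above (inflow β · max(r − hb, 0), withdrawal k · max(WUE · PE − r, 0)); all names are illustrative.

```python
def max_irrigation_fraction(rain, pe, wue, beta, hb, tol=1e-4):
    """Largest irrigation fraction k whose season-long demand the
    harvested inflow can always meet, found by bisection on k.

    rain, pe : daily rainfall and potential evapotranspiration series (mm)
    wue      : water use efficiency factor of the crop
    beta, hb : collecting-area ratio and its storage capacity (mm)
    """
    def feasible(k):
        store = 0.0
        for r, e in zip(rain, pe):
            store += beta * max(r - hb, 0.0)   # reservoir filling
            need = k * max(wue * e - r, 0.0)   # irrigation withdrawal
            if need > store:
                return False                   # reservoir runs dry
            store -= need
        return True

    if feasible(1.0):
        return 1.0
    lo_k, hi_k = 0.0, 1.0
    while hi_k - lo_k > tol:
        mid = 0.5 * (lo_k + hi_k)
        if feasible(mid):
            lo_k = mid
        else:
            hi_k = mid
    return lo_k
```

Bisection is a simple, robust choice here because feasibility is monotonic in k: if a fraction k can be supplied, so can any smaller fraction.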
The cumulative difference between storage tank filling and its use for irrigation determines the size of reservoir required and its reliability over a series of years with variable rainfall.
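The required reservoir size follows from the same daily bookkeeping: it is the peak volume the store reaches when inflow is banked and later drawn down for irrigation. A sketch, with illustrative names and the same assumed inflow and withdrawal terms as above:

```python
def required_reservoir_size(rain, pe, wue, beta, hb, k):
    """Peak storage (mm over the cropped area) reached during the season,
    which sets the reservoir capacity needed for irrigation fraction k."""
    store, peak = 0.0, 0.0
    for r, e in zip(rain, pe):
        store += beta * max(r - hb, 0.0)          # filling from runoff
        peak = max(peak, store)                   # capacity must hold this
        store -= min(store, k * max(wue * e - r, 0.0))  # irrigation draw-down
    return peak
```

Running this over a long series of historical or synthetic rainfall years, and taking a suitable quantile of the peaks, gives a reservoir size with a chosen level of reliability.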