On 23-24 May 2013, a group of feed specialists from ILRI, CIAT, ICARDA and partner institutes got together in Addis Ababa to further elaborate the TechFit tool. This followed from a March 2013 meeting that took stock of progress since the original November 2011 workshop in India. The meeting drew especially on experiences using TechFit in Ethiopia in 2012 as part of the Ethiopia Livestock Feeds and Africa RISING projects. These showed that the tool is a good start but needs considerable further refinement to really help people set priorities for feed interventions.
How does TechFit work?
TechFit is designed to be used alongside FEAST – a 'feed assessment tool' – to address three main challenges holding back animal feed interventions:
- Failure to place feed in its broader livelihood context
- Failure to engage farmer knowledge in design and ownership
- Neglect of how interventions fit local contexts, particularly land, labour, cash and knowledge
FEAST, in brief, is a diagnostic instrument that helps researchers and development workers understand feed within the local context. It helps clarify whether livestock is an important livelihood strategy and, if so, the importance of feed problems relative to other problems. It also captures important information on the local situation in terms of labour, input availability, credit, seasonality, markets, etc.
It results in a relatively standard report with some ideas on key problems and solutions; the participatory process used also builds better links and understanding between farmers, research and development staff. Many reports of FEAST assessments are online (the tool can be downloaded here).
Once FEAST confirms that feed is an issue in a specific location, TechFit is used to help prioritize different interventions and technologies.
It works by ‘scoring’ the local context in terms of land, labour, credit, inputs and knowledge (these scores are normally generated by FEAST) and matching these with scores of attributes of a technology or an intervention contained in TechFit. The starting point is that each intervention or technology has rather standard attributes in terms of labour or land or inputs needed.
Running the local context scores through the TechFit 'filter' of technologies generates a shortlist of prioritized interventions that can be discussed further with communities to assess adoptability and subjected to cost-benefit analysis.
This is the essence of the approach.
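The matching logic described above can be sketched in a few lines of code. This is a minimal illustration, not the actual TechFit implementation: the resource names follow the article, but the 1-5 scoring scale, the example scores and the intervention names are all hypothetical, and the ranking rule (prefer interventions with the most resource 'slack') is one plausible choice among several.

```python
# Hypothetical sketch of TechFit-style matching: local context scores
# (normally generated by FEAST) are compared against the standard
# resource requirements of each candidate intervention.

# How much of each resource the local context can supply (1 = scarce, 5 = abundant).
context = {"land": 2, "labour": 4, "credit": 1, "inputs": 3, "knowledge": 3}

# How much of each resource an intervention requires (illustrative scores only).
interventions = {
    "forage legumes": {"land": 3, "labour": 2, "credit": 1, "inputs": 2, "knowledge": 2},
    "urea treatment of straw": {"land": 1, "labour": 3, "credit": 1, "inputs": 2, "knowledge": 3},
    "concentrate feeding": {"land": 1, "labour": 1, "credit": 4, "inputs": 4, "knowledge": 2},
}

def shortlist(context, interventions):
    """Keep interventions whose requirements do not exceed what the
    context supplies, ranked by total slack (best fit first)."""
    fits = []
    for name, req in interventions.items():
        if all(req[r] <= context[r] for r in req):
            slack = sum(context[r] - req[r] for r in req)
            fits.append((name, slack))
    return [name for name, _ in sorted(fits, key=lambda x: -x[1])]

print(shortlist(context, interventions))
```

With these example scores, forage legumes are filtered out by the land constraint and concentrate feeding by the credit constraint, leaving urea treatment of straw on the shortlist – the kind of result that would then go forward to community discussion and cost-benefit analysis.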
Where is the tool now?
In March and May 2013, these groups met to take the tool forward. Experience in Ethiopia in 2012 indicated several areas for attention: missing cost-benefit analysis, an incomplete list of interventions, incomplete scoring, and missing 'filters' to help narrow down technologies suited, for example, to specific species, over-arching constraints or farming systems.
In May, a group of feed and animal nutrition scientists with experience in smallholder livestock feeding and production systems in Africa, Central and South America, and Southeast and South Asia sat down and scored 50 different interventions. The potential of each intervention to mitigate feed scarcity and quality, and its potential impact on animal production was discussed for different animal types, and production and farming systems. The group also scored the requirement of each intervention in terms of land, labour, capital, input delivery and knowledge. There were rich discussions, as the group compared experiences with the range of interventions in different parts of the world, and reached consensus on scores.
When scoring interventions, the group found that some were duplicates and could be combined; some were not yet sufficiently proven in smallholder systems and were excluded; some needed to be split into two interventions as they could not be scored together; some were excluded as they were strategies rather than interventions and so had no core attributes that could be scored; and some were added as they were missing from the original list.
The next step is to properly test the scoring of the interventions and the overall matrix to be sure that results generated make sense.
Werner Stür, lead consultant on the scoring process comments: “I enjoyed the scoring and feel confident that we are on the right track to make this a useful tool.”
Beyond the scoring – the heart of TechFit – participants also worked on the FEAST tool so it generates the context information needed by TechFit; they worked further on an ‘adoptability’ component and cost benefit analysis approach/tool that could be applied to interventions emerging from TechFit; and started on a manual for users/developers and the look and feel/design of a user-friendly tool. Initial ideas were also developed for a series of ‘factsheets’ on different interventions to complement the scoring matrix.
The coming months will see progress on all of these with September 2013 set as a target to have a fully refined and tested tool for wider use.
Reflecting on the process so far, TechFit champion Alan Duncan concludes: “Precisely because TechFit development is not project-based, there is an energy and collegiality about its development which I like.”
Additional insights and feedback and offers of feed expertise are most welcome and should be addressed to Alan Duncan (firstname.lastname@example.org).