RESEARCH ARTICLE (Open Access)

The national Lifetimewool project: a journey in evaluation

J. J. Dart A G, M. Curnow B, R. Behrendt C, C. Kabore A, C. M. Oldham D, I. J. Rose D and A. N. Thompson C E F

A Clear Horizon Consulting Pty Ltd, 129 Chestnut Street, Cremorne, Vic. 3121, Australia.

B Department of Agriculture and Food Western Australia, 444 Albany Highway, Albany, WA 6330, Australia.

C Department of Primary Industries Victoria, Private Bag 105, Hamilton, Vic. 3300, Australia.

D Department of Agriculture and Food Western Australia, 3 Baron Hay Court, South Perth, WA 6151, Australia.

E Present address: Department of Agriculture and Food Western Australia, 3 Baron-Hay Court, South Perth, WA 6151, Australia.

F Present address: School of Veterinary and Biomedical Sciences, Murdoch University, 90 South Street, Murdoch, WA 6150, Australia.

G Corresponding author. Email: jess@clearhorizon.com.au

Animal Production Science 51(9) 842-850 https://doi.org/10.1071/AN09099
Submitted: 3 July 2009. Accepted: 21 May 2010. Published: 14 September 2011.

Journal Compilation © CSIRO Publishing 2011 Open Access CC BY-NC-ND

Abstract

The national Lifetimewool project commenced in 2001 and was funded until 2008. The objective of this project was to develop practical grazing management guidelines that would enable wool growers throughout Australia to increase lifetime production of wool per hectare from ewes. The project achieved its ambitious target of influencing 3000 producers to change their management of ewe flocks by adoption (or part thereof) of Lifetimewool messages and guidelines by 2008. The present paper focuses specifically on the evaluation work that was conducted on the project between 2003 and 2008. It is a noteworthy journey because it provides a case study of the effective implementation of an evaluation plan. The Lifetimewool project used ‘people-centred evaluation’ to help guide the creation of an internal evaluation plan. The six core principles followed were: participation; program logic; a people-centred focus; multiple lines of evidence; reflection and learning; and a clearly documented and resourced evaluation plan. These principles were applied from the outset of the project. The Lifetimewool team used the evaluation findings to refine the initial design and, based on learnings from their evaluation journey, created and modified the extension and communications components of the project. The present paper contends that the evaluation process itself enabled the project team to plan and adjust the course of the project through evidence-based reflection, and that this helped ensure that the targets were achieved and demonstrated.

Introduction

There has been a noticeable rise over the past 10 years in the level of effort and investment in monitoring and evaluation in agricultural research and development projects, as well as in a host of other disciplines. Despite this growth, an ongoing issue in the international evaluation literature is that evaluation findings frequently are not used to inform program improvement (Patton 1997). There is a dearth of case studies showing how monitoring and evaluation have been effectively implemented and have led to program improvement. The present paper presents a case study of the Lifetimewool project’s journey of evaluation. Lifetimewool was a national project that integrated new and existing knowledge of the impacts of ewe nutrition on ewe wool production (Ferguson et al. 2011), lamb birthweight and survival (Oldham et al. 2011) and progeny wool production and quality (Thompson et al. 2011b) into regionally specific management guidelines for reproducing Merino ewes that optimise stocking rate, per-animal performance and animal welfare (Young et al. 2011). It is a notable journey because it succeeded in building evaluation into the heart of a research and development project, with profound learning for the team and positive, clearly demonstrated results for the project.

The Lifetimewool project commenced in 2001 and was funded until 2008. The project team and funders were determined to follow best practice with regard to project delivery. At the very start of the project, work was done to create a rigorously defined communication, adoption and evaluation plan to maintain focus on the project objectives and to demonstrate whether the project achieved its anticipated outcomes. The project aimed to change the ewe management practices of at least 3000 producers nationally through adoption (or part thereof) of Lifetimewool messages and guidelines. Given the limited resources and short time span, this target was ambitious by most standards. Remarkably, the target was achieved ahead of time, and the evaluation findings were able to demonstrate this achievement (Jones et al. 2011).

The present paper focuses specifically on the evaluation component of Lifetimewool between 2003 and 2008. After a description of the Lifetimewool project, an overview of the particular approach to evaluation is provided. The project used ‘people-centred evaluation’ (PCE) (Dart and McGarry 2006) to help guide the creation of an internal evaluation plan. The six core principles of PCE are: participation; program logic; a people-centred focus; multiple lines of evidence; reflection and learning; and a clearly documented and resourced evaluation plan. After introducing this approach to evaluation, the paper describes how these principles were applied in the context of the Lifetimewool project. For each principle, the positive and negative lessons associated with implementing that feature of the evaluation approach are provided.

The paper then discusses how the findings from the evaluation process affected the project itself and helped the team achieve the ambitious target ahead of time. The present paper contends that the evaluation process itself enabled the project team to plan and adjust the course of the project through evidence-based reflection, and that this helped ensure that the targets were achieved and demonstrated. The paper concludes by drawing out several key recommendations that may be beneficial to other projects embarking on a similar journey in evaluation.


Background to Lifetimewool

The Lifetimewool project commenced in 2001 and formally ended in 2008. Lifetimewool was a collaborative project jointly funded by Australian Wool Innovation Limited, the Department of Primary Industries Victoria, the Department of Agriculture and Food Western Australia, the New South Wales Department of Primary Industries, Water and Environment, the South Australian Research and Development Institute and the Tasmanian Department of Primary Industries. The project spanned 17 sites in southern Australia (Victoria, Western Australia, New South Wales, South Australia and Tasmania). These sites comprised two plot-scale research sites and 15 paddock-scale research and demonstration sites. All sites were located on commercial wool-producing properties with between 350 and 700 mm annual rainfall.

The objective of the Lifetimewool project was to develop, demonstrate and communicate practical grazing management guidelines that would enable wool growers throughout Australia to increase lifetime production of wool per hectare from ewes and their progeny by 20% (equating to ~3000 wool producers) without compromising wool quality or the environment, by 30 September 2008.

Research began in 2001 with plot-scale sites in Victoria and Western Australia. The research is described more fully elsewhere (Ferguson et al. 2011; Oldham et al. 2011; Thompson et al. 2011a, 2011b). The results from the plot-scale sites were verified using commercial flocks on 15 farms across southern Australia (Behrendt et al. 2011). At the paddock-scale sites, ewes were managed to a ‘high’ or ‘low’ profile based on condition score or fat score. This was done to verify the plot-scale results at a commercial scale, across a range of environments and regions. In cooperation with the wool growers managing the paddock-scale sites, a series of grazing guidelines and tools was developed to assist wool growers in increasing their farm profit by managing their ewes differently. The practicality and effectiveness of these guidelines and tools were tested during the next phase of the project: the ‘demonstration phase’. The demonstration phase involved a further 130 wool growers, and the structure of the phase differed in each state (Curnow et al. 2011).

The plot-scale sites, the paddock-scale sites and the demonstration phase together aimed to develop, demonstrate and communicate practical grazing management guidelines. In the communication plan it was specified that 30% of wool growers were to be aware of Lifetimewool by September 2008. It was also specified that 3000 wool growers would have changed the way they manage their ewes to better reflect the key messages in the Lifetimewool guidelines.


People-centred evaluation

The underpinning approach to evaluation applied by the Lifetimewool project was an early version of what was later named PCE (Dart and McGarry 2006). PCE was developed by Jess Dart of Clear Horizon Consulting, who was also the contracted consultant to the Lifetimewool project. PCE is a practical approach that enables program teams to develop their own evaluation frameworks. The overall objective of PCE is to create an evaluation plan that is simple enough to be picked up and owned by project staff, yet comprehensive enough for staff to manage without a high need for external assistance (Dart and McGarry 2006). The following paragraphs describe the process steps and principles of PCE as applied in the Lifetimewool project.

Process steps for developing a PCE plan

The basic chronology of steps used in the creation of a PCE plan is illustrated in Fig. 1. The plan is developed at a 1- or 2-day workshop.


Fig. 1.  Process steps used to develop a monitoring, evaluation and learning plan.

Specific approach to program logic

‘Program logic’ can be defined as the rationale behind a program or project: the understood cause-and-effect relationships between project activities, outputs, intermediate outcomes and ultimate outcomes. Represented as a diagram or matrix, program logic shows a series of expected consequences, not just a sequence of events (Dart 2005). Owen (1993) describes this as a form of design clarification. In the international literature, this tool is usually referred to as ‘program logic’, although it can be applied at the project, subproject or even initiative level. It should be noted that there is little consensus with regard to terminology: terms such as ‘program theory’, ‘program logic’, ‘theory of action’, ‘results logic’ and ‘intervention logic’ are often used interchangeably.

Program logic is best used in a participatory manner and is noted for enabling groups to come to consensus about the realistic outcomes and goals of a project. Ideally, program logic is mapped out before implementation, modified and referred to throughout the life of a project. This way it can provide quick feedback concerning the integrity of the project design.

In PCE, program logic forms the spine of the evaluation system. First, it works at the planning stage by helping groups to surface the underlying logic of their planned program. Once exposed, this logic and the associated assumptions can be evaluated and refined, leading to a more robust program design. Second, it helps groups develop an evaluation plan for the life of the project and guides the development of effective key evaluation questions and performance indicators. The program logic is revised regularly (e.g. each year) to reflect any changes in the project direction, and to help program teams gain a shared understanding of any emerging outcomes. It is an effective focusing tool, helping to remind program teams of the bigger picture. Finally, it can be used to structure evaluation reports.

People-centred logic model

Alternatively referred to as ‘reach’ (Montague 1998), the term ‘people-centred’ refers to the particular way program logic is created around key people targeted by the program. Many program logic models, such as the logical framework (Farrington and Nelson 1997), do not specifically make reference to who the project is targeting. In many research and development projects, the ‘logical framework’ is the predominant method used. Often logical frameworks have references to things such as ‘40% increase in production’, without qualifying who is increasing the production.

According to Montague (1998), logic models that do not make reference to who and where action is taking place suffer from several problems. Most importantly, they lack sensitivity to the impacts on different participant groups. In addition, there are several practical reasons why people-centred logic models are useful:

  • Most practice change occurs through people. Change happens by influencing people, e.g. consultants, farmers and policy makers.

  • It makes sense on a practical level. Ultimately we have to ask people for information when collecting data in an evaluation. Therefore, if we organise our evaluation planning around the different categories of stakeholders we need to engage with at the start, it also helps us work out who we need to speak with in the monitoring and evaluation work.

  • It helps distinguish between the different levels of impact experienced and anticipated for different participant groups; for example, the impact experienced by ‘innovators’ differs from that experienced by more conservative producers.

Other program logic approaches that include components of reach include Mayne’s (1999) results expectations charts and the Kirkpatrick scale (Kirkpatrick 1975), which is used to help evaluate training programs.
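To make the concept concrete, a people-centred logic model can be thought of as one logic ‘thread’ per targeted stakeholder group, each linking activities through intermediate outcomes to ultimate outcomes, with the key evaluation questions attached to the thread they test. The following Python sketch is purely illustrative; the stakeholder group names echo those used by Lifetimewool, but the activities, outcomes and questions are hypothetical, not the project’s actual model.

    from dataclasses import dataclass, field

    @dataclass
    class LogicThread:
        stakeholder_group: str                # who this thread targets
        activities: list[str]                 # what the program does for them
        intermediate_outcomes: list[str]      # expected near-term consequences
        ultimate_outcomes: list[str]          # expected end results
        evaluation_questions: list[str] = field(default_factory=list)

    # Illustrative threads only (hypothetical wording).
    logic_model = [
        LogicThread(
            stakeholder_group="producers: majority",
            activities=["regional workshops", "published guidelines"],
            intermediate_outcomes=["aware of Lifetimewool messages"],
            ultimate_outcomes=["changed ewe management practices"],
            evaluation_questions=["How many producers changed practices by 2008?"],
        ),
        LogicThread(
            stakeholder_group="extension workers and consultants",
            activities=["targeted briefings", "tailored products"],
            intermediate_outcomes=["delivering messages to clients"],
            ultimate_outcomes=["clients change ewe management practices"],
            evaluation_questions=["Are consultants delivering the messages?"],
        ),
    ]

    for thread in logic_model:
        print(thread.stakeholder_group, "->", "; ".join(thread.ultimate_outcomes))

Organising the model this way means every later monitoring tool and survey can be traced back to the stakeholder group it is meant to inform.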

Core principles

Although the people-centred program logic is a key distinguishing feature of PCE, the approach is also concerned with measurement, evaluation, learning and reporting. All components of PCE are governed by a set of principles:

  1. Participation. The best people to develop an evaluation plan for their program are the program team, with input from program stakeholders where appropriate.

  2. Program logic. The development of a program logic model is a core part of PCE, and it is done in a workshop.

  3. People-centred. The logic model should be developed around consideration of who the program is trying to target. This means developing different threads of the logic model for different targeted stakeholders. From here onwards, all methods of evaluation, monitoring tools and formats are developed with reference to these identified stakeholder groups. Even project objectives are developed with reference to stakeholders.

  4. Multiple lines of evidence. PCE advocates that key evaluation questions are best addressed using multiple methods. For example, quantitative data are enhanced by more in-depth qualitative inquiry.

  5. Reflection and learning. PCE stresses the importance of building formal processes for staff to interpret findings and reflect on progress.

  6. Documentation and resourcing. The evaluation plan should be fully documented and adequately resourced.


Implementing PCE in Lifetimewool

The following paragraphs describe how the Lifetimewool project implemented the six principles associated with PCE, and what learning and challenges arose in relation to this.

Principle 1: participation in the evaluation process

Initial planning workshop

On 25 June 2003, the Lifetimewool team held a workshop to develop a preliminary evaluation plan for the project. The 1-day workshop was attended by all the project staff and a funder, and was facilitated by an evaluation consultant, who invited all the staff to take part in mapping out the program logic model. This model was created on the floor with sheets of paper so that everyone could physically lay their hands on the model. Once the model was clarified, the facilitator helped the participants create a set of key evaluation questions to guide the evaluation methodology (see Fig. 2). A year later, the logic model was reviewed at a whole-of-team meeting and modified based on what had been learned from the first year of project implementation.


Fig. 2.  Simplified people-centred program logic model for the Lifetimewool (LTW) project.

On-going capacity building

For the project team to participate fully in the evaluation journey, it was felt that they would need sufficient skills in, and understanding of, evaluation itself. After the first evaluation workshop, some of the team members became excited at the prospect of learning more about how to use monitoring and evaluation to help achieve outcomes. Several staff members volunteered to do a 5-day course in monitoring and evaluation. In addition, the evaluation consultants shared their insights and knowledge on evaluation with the project team wherever possible. In some cases, this involved consultants working as counterparts to project staff.

Whole-of-project team annual reflection workshops

To maximise the chances of the evaluation findings being picked up by the whole project team, annual reflection workshops were held. Here, staff presented their science results to each other, as well as reflecting on the evaluation findings from the extension and communication components of the project. At the end of each workshop, a set of actions was created to ensure that the project was modified as a result of what was learned.

What was learned about participation

Overall, there was relatively high participation from project staff in the evaluation process. Having a key funder at the first workshop eased the process by which the evaluation plan was ratified. Participation in the creation of the plan also led to a high degree of understanding and ownership amongst the project team. However, there was uneven participation across the state-based teams, and this was accompanied by considerable difference in how the project was implemented in each state. These differences can be explained to some extent by the fact that the regions varied in real terms in their wool production systems, norms and practices and, hence, the applicability of the Lifetimewool messages and guidelines varied. In addition, there were several staff changes during the project, and this led to some inconsistency of effort in terms of the evaluation. Despite this, there was a remarkable degree of enthusiasm for, and commitment to, the evaluation process from several key members of the project from 2003 right through until 2008, as evidenced by the fact that nearly all aspects of the evaluation plan were fully implemented. The process also built the capacity of many staff, most of whom had little prior experience with evaluation.

Principle 2: how program logic was applied in Lifetimewool

First, program logic was used at the planning stage in the Lifetimewool project and helped the groups clarify (and challenge) the underlying theory of change for the planned project. Second, it informed the Lifetimewool evaluation plan for the life of the project by guiding the development of effective key evaluation questions and performance indicators. The program logic was revised after the first year to reflect what had been learned during the first year of implementation. Finally, it was used to structure evaluation reports and tell the ‘performance story’ of Lifetimewool. A simplified version of the program logic model for Lifetimewool is shown in Fig. 2.

What was learned about program logic

The participatory visual logic-mapping exercise helped the team create a shared understanding of the desired theory of change for the project, and the model stayed fairly consistent over 5 years. However, during the first workshop, it became apparent that not all participants initially shared common views about what a good program logic model should look like. At times there was conflict in the workshop, and a key lesson is that the first workshop must be facilitated carefully because it can be highly challenging for participants.

Principle 3: how the people-centred concept was applied in Lifetimewool

PCE acknowledges that a program may need to target different types of stakeholders and that different instruments may be used for different targeted stakeholder groups. During the development of the program logic model in the Lifetimewool project, participants conducted a form of stakeholder analysis, resulting in the identification of key categories of people to be targeted by the project. The clustering was done on the basis of the type of influence the program intended to exert on each of the stakeholder groups. In the Lifetimewool project, the following groups of stakeholders were identified in the first evaluation workshop:

  • producers: innovators,

  • producers: aspirants,

  • producers: majority,

  • extension workers and consultants who work with wool producers, and

  • scientific peers.

What was learned from the people-centred focus

The stakeholder analysis done as part of the logic modelling was critical, as it provided a focus not only on producers but also on extension workers and consultants. Before this workshop, there had been no clear intention to target extension workers and consultants. In the end, focusing on these highly influential groups was found to be a critical success factor in achieving the target ahead of time. However, on reflection, the project team felt that even more attention could have been paid to this step; many felt that a full market-segmentation exercise would have helped the project team target producers more accurately with tailored messages, and would also have helped team members understand more deeply which sorts of producers the project had influenced.

Principle 4: how multiple lines of evidence were applied in Lifetimewool

Fig. 3 illustrates the different methods that were used in the Lifetimewool evaluation plan. The international literature encourages the use of multiple lines of evidence to provide a plausible ‘portfolio of evidence’. The key evaluation questions were addressed by eight evaluation methods; three of the key methods are elaborated in more detail in the following paragraphs.


Fig. 3.  Relationship between the key evaluation questions and the proposed methods.
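At its simplest, the relationship shown in Fig. 3 is a mapping from each key evaluation question to the set of methods that address it. The Python sketch below illustrates the idea; the question wording and method names are loosely drawn from methods mentioned in this paper and are illustrative, not the project’s actual plan.

    # Map each key evaluation question to its lines of evidence
    # (illustrative wording only, not the Lifetimewool plan itself).
    evidence_map: dict[str, list[str]] = {
        "Have producers changed their ewe management practices?": [
            "national telephone survey (2005, 2008)",
            "in-depth producer case studies",
        ],
        "Are extension workers and consultants delivering the messages?": [
            "internet survey of service providers",
            "workshop questionnaires",
        ],
    }

    # Each question should be covered by at least two methods so that
    # quantitative findings can be corroborated by qualitative inquiry.
    for question, methods in evidence_map.items():
        assert len(methods) >= 2, f"Only one line of evidence for: {question}"
        print(question, "->", "; ".join(methods))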

Participatory ‘seasonal calendars’

Seasonal calendars were used at most of the initial workshops to help understand the context of wool production at each pilot site. To create the calendars, participants formed groups and then constructed timelines on flip-chart paper, showing the main activities associated with management of ewes for each month from joining to joining. In each case, the facilitators’ role was to listen and to prompt producers to discuss and record activities of particular interest to Lifetimewool. Information from the calendars and from participants’ conversations was recorded by facilitators onto summary sheets. At the end of the activity, calendars were presented to the audience, which often generated additional discussion. This information was supplemented with short questionnaires completed by participants. Participants generally responded well to the seasonal calendar exercise, and the calendars were found particularly useful in helping the project team characterise wool enterprises in different regions and adapt research presentations to the audience. On the downside, the success of this exercise was dependent on the skills, confidence and interest of the facilitator; it certainly worked better in some locations than others.

In-depth case studies with 15 producers

The first Lifetimewool paddock- and plot-scale trials were held on five wool-producing properties in Victoria during 2003 and 2004. Midway through 2004, it became evident that four of the five producers involved had already begun to modify elements of their ewe and pasture management systems to reflect Lifetimewool concepts and ideas. A decision was made to research and monitor the practice change of the five Victorian producers, as well as that of additional producers commencing trials in other areas of Australia. The underlying premise was that, if these intimately involved producers (considered to be above-average operators, with the most potential for adopting change) did not respond positively to Lifetimewool concepts and ideas, then it was unlikely that other producers would.

As a result of this finding, it was decided to conduct 15 in-depth case studies across Victoria, Western Australia and South Australia with producers intimately involved in the program. The qualitative exploration of their reaction and practice changes highlighted the importance of understanding the context surrounding their decisions, and the constraints and opportunities for adoption of Lifetimewool concepts and ideas generally. This information was used for two purposes. First, it helped guide the project in its research and particularly influenced the extension phase. Second, it informed the development of a measurement tool that was named the ‘platforms for change’, which was used to describe practice changes across a broader sample of farmers.

The idea was that adoption could best be described by explaining how farmers changed their behaviour from one platform to another. Fig. 4 illustrates an example of platforms for ewe management. Here, the practices of five farmers are illustrated by describing their movement on these platforms. The first box presents the situation in 2003, and an arrow points to the second box, which indicates the practice change in 2004. Three platforms were developed after closely examining the practice changes of the 15 case-study farmers (the key practices were ewe monitoring, pasture monitoring and pregnancy monitoring). These ‘platforms of change’ then informed the national surveys conducted in 2005 and 2008.


Fig. 4.  Example of a ‘platform of change’.
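In data terms, the platforms form an ordered scale per key practice, and adoption is read as movement to a higher platform between two time points. The following Python sketch illustrates this logic with hypothetical platform labels and observations; it is not the project’s actual instrument.

    # Ordered platforms for one key practice (hypothetical labels).
    PLATFORMS = ["no monitoring", "visual assessment", "condition scoring"]

    # farmer id -> (platform in 2003, platform in 2004); hypothetical data.
    observations = {
        "farmer_1": ("no monitoring", "visual assessment"),
        "farmer_2": ("visual assessment", "condition scoring"),
        "farmer_3": ("visual assessment", "visual assessment"),
    }

    def moved_up(before: str, after: str) -> bool:
        """True if the farmer shifted to a later (higher) platform."""
        return PLATFORMS.index(after) > PLATFORMS.index(before)

    changed = [f for f, (before, after) in observations.items()
               if moved_up(before, after)]
    print(f"{len(changed)}/{len(observations)} farmers moved platform:", changed)

Coding practice change onto a small number of ordered platforms is what made it possible to ask the same adoption questions consistently in the national surveys.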

National survey conducted in 2005 and 2008

As well as collecting qualitative and exploratory data, it was critically important for the team to gauge the influence of the project at a national scale. To this end, a national survey was conducted in 2005 and repeated in 2008. In October 2005, 1926 farmers (each with more than 500 sheep) were surveyed by telephone. The response rate to the telephone survey was 99%, with the majority of respondents consenting to be resurveyed in 2008. The survey results were analysed using the SPSS package, with the help and advice of a statistician (see Jones et al. 2011 for more information on the results of this survey).
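The actual analysis was done in SPSS, but the core calculation such a survey supports is simple enough to sketch in Python: estimate the proportion of surveyed producers who changed practice and scale it to the producer population. The counts below are hypothetical placeholders (only the sample size comes from this paper); see Jones et al. (2011) for the real results.

    import math

    n_surveyed = 1926     # telephone respondents in 2005 (from this paper)
    n_changed = 250       # hypothetical count of practice-changers
    population = 20000    # hypothetical number of eligible wool producers

    p = n_changed / n_surveyed
    se = math.sqrt(p * (1 - p) / n_surveyed)   # standard error, normal approx.
    lo, hi = p - 1.96 * se, p + 1.96 * se      # 95% confidence interval

    print(f"Sample proportion changed: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
    print(f"Implied producers changed: {p * population:.0f} "
          f"({lo * population:.0f}-{hi * population:.0f})")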

What was learned about using multiple lines of evidence

The different qualitative and quantitative methods worked well to inform different phases of the program. For example, the in-depth qualitative methods worked well early in the project to help the researchers understand the barriers faced by, and differences between, producers, and allowed them to develop measures that would be sensitive to these differences. Ultimately, the different methods corroborated one another and demonstrated that the target had been achieved. The key area where the use of multiple methods could have been strengthened was scheduling more time for interpretation of the data.

Principle 5: how reflection and learning was applied in Lifetimewool

In PCE, program improvement is encouraged by way of an annual analysis of data, in which both qualitative and quantitative data are examined by program staff and key stakeholders; this is referred to as an annual reflection workshop. After the data have been reviewed, interpretations are drawn and recommendations are developed in a participatory manner. In the Lifetimewool program, staff from different states participated in this process, which was guided by an expert facilitator and involved a mixture of small-group and large-group processes.

In 2004, the first ‘annual reflection’ was held, at which all of the evaluation information was collected, collated and analysed in relation to the program logic to determine how the team was proceeding in delivering the required outcomes. This workshop was repeated in 2005, 2006 and 2007. These workshops were a vital opportunity for the national team to share the research and the evaluation findings together, and to make decisions about how to proceed. Much of the discussion focused on how to disseminate the research results to producers most effectively. Evaluation findings were keenly examined and discussed, and decisions were jointly made about how to modify the program and communications plan to help achieve the targets.

What was learned about reflection and learning applications in Lifetimewool

The annual reflection workshop was found to be a key technique for ‘closing the loop’ and ensuring that evaluative evidence was examined by staff and used to inform the next phase of the project. In addition, when it becomes ‘normal’ to examine data and use them to make decisions, this has a positive effect on the quality of the data. When judgements could not be made because of a lack of relevant data, this usually led to modification of the data-collection systems themselves.

An early lesson from the annual reflection workshops concerned the importance of having a good facilitator/chair. Each year, the reflection workshops ran more smoothly as the team became accustomed to the process. The majority of staff felt that the workshops were vitally important and were the key occasion when learnings were shared and built into the program.

Principle 6: documenting and resourcing the evaluation plan

The evaluation plan for the Lifetimewool project was described in an eight-page document. The plan consisted of a program logic model, five ‘key evaluation questions’ and a proposed methodology to address these core questions. Fig. 3 illustrates the relationship between the key evaluation questions and the proposed methods.

In addition, the project team created a budget for the implementation of the evaluation and was successful in raising the funds. The budget accommodated the salary of a dedicated monitoring and evaluation officer, in addition to funds for help from external consultants. This was one of the first times these funders had invested in evaluation at this magnitude, and in some sense it was a pilot of what substantial investment in evaluation could achieve.

What was learned about documentation and resourcing of the evaluation plan

A key learning from the Lifetimewool journey is the importance of taking monitoring and evaluation seriously from the outset of a project. This means having a well-documented plan and the resources to actually implement it. Having a documented plan helped ensure a smooth hand-over when there were staff changes, which is inevitable in a project that spans multiple years.


Discussion

The effectiveness of the evaluation plan was judged here by the extent to which it informed the program, helped demonstrate the achievement of its goals and helped the team maintain a shared understanding of the project. These three criteria are explored in the paragraphs below.

The team consistently used the evaluation findings to change the way they ran their extension and communications programs

Several times in the life of the project, the Lifetimewool team waited keenly for the results of an evaluation study and applied those results almost immediately. One example was an internet survey conducted in 2004, to which 51 service providers responded. These respondents provided advice to wool producers across Australia and were consulted to understand their knowledge and practice with regard to the management of ewe nutrition. The informants comprised 26 private-sector consultants and 25 government extension officers from five states of Australia. The survey also posed questions about how extension workers would like to receive information in the future. Findings from this survey were used immediately to modify the products being sent to these people.

One of the success factors of this approach to evaluation was the combination of both short-cycle and longer-cycle results. Rather than waiting until the end of the project, evaluation findings emerged all the way through the project. In the first year, the entry surveys helped the staff tailor the initial workshops to suit the context of each site. Later, the case studies provided a deeper understanding of the current practices of different types of farmers and how they might be likely to change over time. These case studies also informed how change was measured from then on. The surveys of extension workers and consultants gave instant feedback on how the messages were coming across and how they needed to be modified to meet different client needs. For example, it was found that many of the more reluctant wool producers were awaiting evidence of the economic implications of practice change. As a result of this learning, a greater emphasis was placed on exploring economic scenarios in the communication materials.

The evaluation enabled the team to demonstrate that they had achieved their targets

The data from the national farmer survey indicated that Lifetimewool had achieved the desired aim of having 3000 wool producers change their practices (Jones et al. 2011). These changes had occurred in just 3 years, and the impact on the industry could increase substantially in future if this or other projects are able to continue these services and products. These data were backed up by the results of the consultant and specialist extension-worker surveys. Not only did consultants state that they had taken on and were delivering the Lifetimewool messages to their clients, but they also recognised that their clients had made changes to their practices based on this information (Jones et al. 2011).

The evaluation helped the team gain a shared understanding

Several team members commented that the logic model and the subsequent annual reflection workshops helped provide a more fully shared understanding of what the project was trying to achieve. This was also helped by the communications work, which developed a series of key messages. The participatory evaluation process also led to many learnings for several of the project staff, many of whom had never encountered evaluation of this type before. Several staff members have gone on to apply these skills in their new jobs after the project ended; e.g. some of the staff are now using similar evaluation processes in the Cooperative Research Centre (CRC) for wool.


Conclusions

The PCE system used by Lifetimewool involved systematic collection of both quantitative and qualitative data and regular reflection on the results. The team used the data to help modify the extension component of the project and ultimately was able to demonstrate the outcomes it was aiming to achieve, as well as to detect unexpected outcomes.

The key features of the PCE approach as applied by Lifetimewool were that it was participatory and helped staff come to a more shared view of the impact of the project. It involved short cycles of review and reflection that led to immediate project improvement and, in the longer term, enabled the project team to demonstrate the results. A recommendation for others wishing to use participatory evaluation is to ensure they include the services of an expert facilitator.

It is hoped that the Lifetimewool case study will inspire future project teams to consider allocating adequate resources to monitoring and evaluation from the outset of a project. The main area where the people-centred approach could have been strengthened was the lack of rigorous market segmentation at the outset. PCE combines well with a market-segmentation approach, which can help teams achieve more results and demonstrate them.



Acknowledgements

Lifetimewool (EC298) was funded by Australian Wool Innovation Limited, the Department of Primary Industries Victoria, the Department of Agriculture and Food Western Australia, the New South Wales Department of Primary Industries, Water and Environment, the South Australian Research and Development Institute and the Tasmanian Department of Primary Industries. The project team are indebted to the 150 wool growers who contributed their time and experience during the project.


References

Behrendt R, van Burgel AJ, Bailey A, Barber P, Curnow M, Gordon DJ, Hocking Edwards JE, Oldham CM, Thompson AN (2011) On-farm paddock-scale comparisons across southern Australia confirm that increasing the nutrition of Merino ewes improves their production and the lifetime performance of their progeny. Animal Production Science 51, 805–812.

Curnow M, Oldham CM, Behrendt R, Gordon DJ, Hyder MW, Rose IJ, Whale JW, Young JM, Thompson AN (2011) Successful adoption of new guidelines for the nutritional management of ewes is dependent on the development of appropriate tools and information. Animal Production Science 51, 851–856.

Dart JJ (2005) Evaluation for farming systems improvement: looking backwards, thinking forwards. Australian Journal of Experimental Agriculture 45, 627–633.

Dart J, McGarry P (2006) People-centred evaluation. In ‘Proceedings of the AES 2006 international conference’, Darwin. (Australasian Evaluation Society: Canberra)

Farrington J, Nelson J (1997) ‘Using logframes to monitor and review farmer participatory research.’ Overseas Development Institute, Network Paper No. 73, London.

Ferguson MB, Thompson AN, Gordon DJ, Hyder MW, Kearney GA, Oldham CM, Paganoni BL (2011) The wool production and reproduction of Merino ewes can be predicted from changes in liveweight during pregnancy and lactation. Animal Production Science 51, 763–775.

Jones A, van Burgel AJ, Behrendt R, Curnow M, Gordon DJ, Oldham CM, Rose IJ, Thompson AN (2011) Evaluation of the impact of Lifetimewool on sheep producers. Animal Production Science 51, 857–865.

Kirkpatrick D (1975) Evaluating training programs. (American Society for Training and Development) Available at http://www.businessballs.com/kirkpatricklearningevaluationmodel.htm [Verified July 2006]

Mayne J (1999) Addressing attribution through contribution analysis: using performance measures sensibly. Available at http://pmn.net/library/Library.htm [Verified July 2006]

Montague S (1998) ‘Build reach into your logic chart.’ Available at http://pmn.net/library/build_reach_into_your_logic_model.htm [Verified July 2006]

Oldham CM, Thompson AN, Ferguson MB, Gordon DJ, Kearney GA, Paganoni BL (2011) The birthweight and survival of Merino lambs can be predicted from the profile of liveweight change of their mothers during pregnancy. Animal Production Science 51, 776–783.

Owen JM (1993) ‘Program evaluation, forms and approaches.’ (Allen and Unwin: St Leonards, NSW)

Patton MQ (1997) ‘Utilization focused evaluation.’ (Sage: Thousand Oaks, CA)

Thompson AN, Ferguson MB, Campbell AJD, Gordon DJ, Kearney GA, Oldham CM, Paganoni BL (2011a) Improving the nutrition of Merino ewes during pregnancy and lactation increases weaning weight and survival of progeny but does not affect their mature size. Animal Production Science 51, 784–793.

Thompson AN, Ferguson MB, Gordon DJ, Kearney GA, Oldham CM, Paganoni BL (2011b) Improving the nutrition of Merino ewes during pregnancy increases the fleece weight and reduces the fibre diameter of their progeny’s wool during their lifetime and these effects can be predicted from the ewe’s liveweight profile. Animal Production Science 51, 794–804.

Young JM, Thompson AN, Curnow M, Oldham CM (2011) Whole-farm profit and the optimum maternal liveweight profile of Merino ewe flocks lambing in winter and spring are influenced by the effects of ewe nutrition on the progeny’s survival and lifetime wool production. Animal Production Science 51, 821–833.