- 5.1 The challenges of PLEI evaluation
- 5.2 Connections between goals of PLEI and its evaluation
- 5.3 Challenges of funders' evaluation requirements
- 5.4 Evaluation methodologies
- 5.5 Research and knowledge gaps
The summaries set out in this section cannot claim to be comprehensive. Drawing on all of the materials annotated, a collection of findings and observations has been compiled under the four recurring themes. The points listed under each theme represent some of the significant conclusions, based on the frequency or force with which the statements were made. A fifth category is added at the end, highlighting research and knowledge gaps in the available materials.
The annotations make clear that the evaluation of PLEI initiatives continues to face numerous challenges. These include, but are not limited to, the following:
- Language barriers are common in eliciting PLEI feedback. This includes participants who do not speak English, as well as the use of "legalese," or complex legal terminology, to teach concepts.
- PLEI has no precise definition, and it is often difficult to draw the line between the provision of legal information and the provision of legal advice.
- There is ongoing debate over whether internal or external evaluators are preferable, where cost is not at issue. On one side, PLEI providers seek objectivity through external evaluation. On the other are those who maintain that a connection to the program and the community is a necessary component of evaluation, as it ensures sensitivity to the context that informs one's assessment of project benefits.
- The high cost of external evaluations, or "lost work time" for staff, continues to pose difficulties for undertaking program evaluation.
- Difficulty in measuring stated goals, e.g., when accessibility to legal information is the goal, it is often difficult to assess why or how a program failed to reach its targeted audience.
- Impacts of different delivery methods cannot readily be compared, e.g., it is difficult to compare a pamphlet with a three-day workshop.
- Some PLEI initiatives attempt to address the collective and chronic needs of low-income people, yet community impact may be difficult to quantify. Such goals therefore raise challenges for successful documentation.
- Differences in race, class and social context between evaluators and clients/students/participants affect perceptions of programs and measures of success.
- Lack of definition or consensus on what constitutes evaluation. One author is quoted as saying that evaluation is "all things to all people." In addition to external evaluations, many activities can and should be considered legitimate forms of evaluation. Some examples include: comparative studies, pre-testing of materials, mail-back evaluations, impact studies, periodic studies, in-service training, annual reports, analysis of media news coverage, assessment of mailing lists, audience feedback, monitoring statistics, compilation of statistics, follow-up with clients, and reports to funders (see Lane, 1984).
- When the prevalent method of evaluation is oral feedback through stories and discussions, it can be difficult to document responses and replicate accounts of impact.
- Lack of evaluation skills among staff and lawyers.
- Difficulty in measuring PLEI tools such as Internet-based programs.
- Post-program surveys do not assess whether information is retained or whether it is used.
- There are multiple obstacles to determining whether there is overlap and duplication of PLEI.
Organizations, schools, agencies and governments use PLEI to fulfill a wide range of objectives. The materials suggest that, in developing tools and methods of evaluation, these specific goals and objectives should be taken into account. Some related conclusions include:
- The importance of defining what is being evaluated cannot be overstated. One PLEI provider articulated the question poignantly:
"Are we trying to measure knowledge of specific pieces of legislation or critical consciousness regarding a certain piece of legislation, or critical and informed debate, government policy, or self-help skills, or empowerment in local communities, or what?" (Moliner, 1997, p. 43).
- Goals and objectives of evaluation must be sensitive to target audiences and learning environments.
Evaluations are often conducted and/or requested by PLEI funders, who may have different objectives for seeking and using the findings of such reports. These objectives pose the following particular challenges:
- Need for funders to "accept responsibility" for the PLEI projects that they support, which they can do by assisting with meaningful evaluation.
- Impact assessments must go beyond simple questions, such as gauging satisfaction, if they are to contribute to improving program delivery. Accountability for delivering proposed programs and general approval are important elements of evaluation; however, these measures do not sufficiently address the effectiveness and suitability of program content and format.
- Funders can use evaluation to undermine efforts of PLEI by narrowly interpreting goals and achievements to justify funding cuts.
- The danger of using evaluations to cut programs is greater where PLEI is tied to provincial legal aid offices (B.C. and Ontario) since budget restrictions and changes may be imposed with little or no warning.
- Misinterpretation of evaluations leads to skewed funding and priority decisions.
- As noted in a 1984 legal aid evaluation, budget restraints changed both the levels of service actually provided and the direction of policy emphasis.
- Lack of stable funding for many providers prohibits the development of innovative mediums for conveying PLEI to target audiences. It also undermines the exploration of new and emerging issues and initiatives. Issues such as improving access to the law, legal literacy, readability and so forth are being investigated primarily by those with secure funding.
- Evaluations set by funders often do not account for culturally specific needs and therefore do not accurately assess effectiveness.
- Different emphases and goals between funders and PLEI providers lead to distinct modes of evaluation, e.g., quantitative versus qualitative and deliverables versus non-tangibles.
The range of available evaluation methodologies is vast. The following list notes those methods that are frequently cited in the materials, but is by no means exhaustive:
- Range of surveying and questionnaire distribution
- Informal discussions, semi-structured interviews, case-specific follow up and file review
- Focus groups and consultations
- Pre- and post-testing of knowledge and attitudes
- Peer review and expert assessment of legal accuracy
- Increasingly, it is recognized that qualitative and quantitative methods should be used in combination. This mixed approach produces more comprehensive evaluations and encourages the development of creative methods of data collection and documentation.
This bibliography covers the academic literature and a thorough sampling of the evaluation reports and government documents available at this time. The gaps outlined below are pertinent to PLEI assessment and should be considered as bases for future research.
- General lack of PLEI evaluation, with exceptions in one or two organizations.
- More publications are needed that consolidate PLEI materials, e.g., Inventory of Public Legal Education and Information Materials and Programs Related to Crime Prevention and Victims; the literature review section of More Than a Crime: A Report on the Lack of Public Legal Information Materials for Immigrant Women Who Are Subject to Wife Assault; and the Compendium of Sources in the YJEP report.
- Little exploration has occurred of partnerships among the provincial/territorial bar, ministries, legal clinics and community organizations.
- Poor utilization of collected data. Time constraints often mean that once information is collected, it gets filed away and is never assessed or incorporated into subsequent program design. There is also a general failure to share findings between organizations, which results in duplication of effort. Finally, the lack of negotiation and/or consultation with external evaluators, in terms of content and methods, produces materials that are unhelpful for purposes of program design, or that are disregarded completely due to bad feelings generated through the evaluation process.
- Few evaluation reports reviewed noted any external uses for the findings or recommendations.
- Ongoing gap in needs assessment and priority setting.
- With one or two exceptions, no evaluation reports, PLEI or otherwise, mentioned the connection between program evaluation and monetary constraints.
- More investigation is required of the impact of technology on access to PLEI.
- With one noted exception, there is no discussion of the interrelationship between different evaluation tools and methodologies, e.g., between quantitative and qualitative data.
- There appears to be a bias in the evaluation reports for "real" or quantifiable data. As is noted elsewhere in the bibliography, this presupposition does not necessarily reflect the objectives and goals of many PLEI initiatives, which are better suited to various qualitative methods of evaluation.