CHAPTER TWO
PROCESS EVALUATION

I. Introduction

In this chapter we describe the elements necessary for conducting a process evaluation for a responsible fatherhood program. We begin with a brief overview of the reasons why conducting a process evaluation in conjunction with an impact evaluation is useful, and then describe the evaluation questions and major data sources that can and should be incorporated into a process evaluation of responsible fatherhood programs. We then provide a detailed description of various data collection methods that may be used for obtaining new (primary) and existing (secondary) data. We also provide an overview of an automated participant-level data system that could be used by responsible fatherhood programs to track participant characteristics, service utilization, and outcomes. The chapter concludes with examples of possible descriptive, comparative, and exploratory analyses that could be conducted to address key process evaluation questions.

II. Purpose of a Process Evaluation

A process evaluation provides contextual information to support analyses of program outcomes, net impacts, and costs. For example, it can provide information about how fathers are recruited to the program and how they are served once they are in the program. The types of information collected under a process evaluation are not only vital inputs for helping to assess program effects, but also provide feedback that can be helpful in efforts to refine the program intervention and to support replication of successful program components at other locations. A process evaluation can tell us if the underlying model for the program was implemented with integrity, as well as identify variations in treatment and participants. It can identify key similarities and differences across program sites in program objectives, participation levels, service delivery strategies, the environment, and a variety of other areas.

The major objectives of a process evaluation for a responsible fatherhood program should be to:

The information and insights obtained through conducting a process evaluation are extremely useful, and in many cases necessary, for evaluators to develop and conduct an impact evaluation.

III. Questions Addressed by a Process Evaluation

There are a number of questions that should be addressed by a process evaluation of responsible fatherhood programs. Each of these questions needs to be addressed for each individual program being evaluated and, if there are multiple sites within a program being evaluated (e.g., IRFFR sites in Cleveland, San Diego, and other localities), for each program site. Among the key questions that should be addressed by a process evaluation are the following:

In structuring a process evaluation of responsible fatherhood programs, specific evaluation questions could be broken down into the following categories: (1) program context, (2) program design and goals, (3) program implementation, (4) program components/services, (5) outreach, intake, and assessment, (6) client characteristics, (7) coordination/integration of services, (8) project staffing and staff development, (9) changes in outcomes, (10) program budget and costs, and (11) program replicability. Specific evaluation questions and potential data sources are displayed in Exhibit 2.1. An "X" in the column opposite an evaluation question indicates that the source could provide data helpful in addressing the specific question under the process evaluation.

We present a very comprehensive set of questions. The effort required to answer them all is substantial, as will become evident in the following section. The evaluator may need to narrow the scope of the questions in order to focus the process evaluation and reduce costs. The process of providing more focus needs to be carried out early in the project and requires input from the program, the evaluator, funders, and other stakeholders in the evaluation.

IV. Methods for Collecting Information

A process evaluation of responsible fatherhood initiatives should include both primary data collection and use of existing data sources. Primary data should be collected by interviewing individuals knowledgeable about the program's design, start-up, and/or ongoing operations. These interviews should be supplemented by the collection of participant-level data (through, if possible, an automated client information system) and a systematic review of existing client files and program documents. The sections that follow address the overall strategies and methods that can be employed in collecting both primary and secondary data.

A. Primary Data

To develop an accurate, objective, and comprehensive understanding of each responsible fatherhood program being evaluated, it is recommended that, at a minimum, evaluators conduct interviews with the following groups:

It is important to interview not only responsible fatherhood program administrators and staff who are currently with the program, but also individuals who are no longer part of the program; such individuals can provide insights on the initial design and start-up of the program and a reference point for how the program may have changed since its inception.

In the following sections, we provide a brief description of the types of information each of these groups is best suited to provide.

1. Responsible Fatherhood Project Director and Sponsoring Organization's Administrators

During our visits to some of the programs, we observed substantial cross-site differences in underlying program strategies and services. These differences stem from a number of factors, including: the basic philosophies of the organizations sponsoring the initiatives; the size and geographic distribution of the populations served; the funding streams and goals of funding organizations; local resources and the economic and policy environment; and a host of other factors. Sponsoring organizations' philosophies had a considerable effect on the design and day-to-day operations of the programs that we visited.

The goals of two programs offer a contrasting example. One program's primary goal is to reconnect fathers with their children. Underlying this basic philosophy is the strong belief that reconnecting fathers to their children will lead to changes in attitude and behavior that result in paternity establishment, job placement, and improved relations with their children and the children's mother. The program's philosophy embraces the view that a father has the inner capacity to solve his own problems -- and, therefore, the role of staff is to assist him through the process of self-discovery. In contrast, a second program's primary goals are: to develop the capacity of young fathers to become responsible and involved parents, wage-earners, and providers of child support; and to assist fathers with developing the skills and behaviors necessary to cooperate in the care of their children, regardless of the character of the relationship with the mother. There is a strong emphasis on building the skills necessary for the father to be able to financially support his child. A primary goal of the program is to place fathers in jobs upon completion of the program's six-week curriculum.

Each program (and site) also is likely to draw upon staff and resources available through its parent organization (e.g., sites may use forms, curriculum, and information systems developed by the sponsoring organization). Hence, it will be important to interview the organization's executive director and/or administrator responsible for oversight of the responsible fatherhood program site. The discussion guide found in Appendix C provides a series of questions that will help to structure this interview.

The sponsoring organization's executive director (and/or other administrator) is likely to be knowledgeable about the history of the funding for the program -- why the organization submitted a proposal for a specific site, what was initially intended in the program's design, and (perhaps) reasons why the sponsoring organization was selected. He or she should be able to explain how the responsible fatherhood initiative fits into the overall organization mission and how this mission guides the responsible fatherhood strategies and specific services or activities. The executive director may be able to provide a chronology of the program start-up (if he or she was with the organization at the time the program started), including identification of barriers encountered during the project start-up (e.g., possible resistance within the community or from other human service agencies) and how these barriers may have been overcome. Finally, the executive director is likely to have an understanding of the program's budget and how funds are allocated to major program components.

The site's project director (i.e., the individual at the site responsible for day-to-day oversight and direction) is likely to have the most comprehensive knowledge of operations at the site. The project director should be able to describe virtually all aspects of the site's operations, including outreach and intake, case management, client flow, the structure of major program components/services, linkages with other service providers, and types of fathers and families served by the program. He or she is likely to have views on ways in which the program has or has not been effective in serving the target population. If the project director has been with the program since its inception, he or she should be able to identify barriers to implementation and ways in which these barriers were overcome. Finally, the project director will be able to identify other individuals who should be interviewed as part of the process evaluation.

2. Responsible Fatherhood Program Managers and Staff

Responsible fatherhood program staff (e.g., intake workers, case managers, counselors, group leaders, MIS specialists, and clerical staff) can provide further details about major program components and services, as well as impressions about how the specific program interventions appear to be affecting fathers and their families. For example, because of their daily interaction with fathers, staff probably have views about which fathers have (and have not) been participating in the program and why, what are the most common client needs, and which of these needs the program is (and is not) addressing. The staff will be able to provide details about the specific services they are delivering (e.g., needs assessment, individual and/or family counseling, job placement, education, legal services, and parenting skills) and may have views on whether and to what extent specific services have affected fathers and their families. They will also be able to provide details on the process by which participants are matched to particular services. Some staff, particularly those working directly with fathers, will be able to provide contextual information about the families served and the surrounding community. Appendix C provides a discussion guide that will be helpful in structuring discussions with program managers and staff. During these discussions, it is important to tailor questions to the specific program components or services in which staff have been involved.

3. Community Human Service Providers

Other community human service providers are private or public agencies that provide services within the community in which the responsible fatherhood program operates and that are needed and/or utilized by participants or their families. These services include child support enforcement, education, health and mental health services, vocational training, legal services, and a wide range of other social services. Some providers may have been delivering these services prior to the project's inception; others may be new to the community and only recently linked to the fatherhood program. There may also be other providers of responsible fatherhood services in the same community. The discussion guide included in Appendix C can be helpful in structuring interviews with officials at these other service providers, though this instrument will need to be tailored to each specific interviewee and to the types of services being provided by the linked agency.

Other service providers are an important source of information about available services in the community. If these service providers work directly with the responsible fatherhood program and receive frequent referrals of fathers or members of their families from the program, they will have specific information about the fathers' needs and their willingness to follow up on referrals for services. The providers may be able to offer opinions about the quality and comprehensiveness of the responsible fatherhood program's services, as well as views on strategies or interventions that appear most effective in reducing risk factors for fathers. Finally, other providers may be able to indicate how well responsible fatherhood program services have been integrated into the fabric of services at the community level.

4. Organizations Providing Funding and Oversight for the Responsible Fatherhood Program

As nonprofit human service agencies, the organizations operating responsible fatherhood programs are likely to have received funding through one or more other organizations, such as state and local government agencies, the United Way, or other nonprofit organizations. These organizations are likely to have played -- to varying degrees -- roles in the development, implementation, and ongoing operation of the program. For example, in addition to funding, they may have some (even considerable) input on program objectives, eligibility rules, definition of the target area for participants, overall program design, and types of services provided. In addition, these funding organizations may provide technical assistance, training, and ongoing program monitoring.

Administrators and staff of the funding agencies should be able to provide a chronology of program development, including original program goals, how sponsoring agencies and sites were selected, and an overview of program start-up at each site. Staff at these agencies may also be able to provide insights into the variations across program sites (if multiple sites are funded) in terms of environmental factors (e.g., the community), sponsoring organization characteristics, types of fathers served, service delivery strategies, program components, and the relative effectiveness of the differing strategies employed by each site. The discussion guide included in Appendix C can be helpful in structuring interviews with administrators at funding and oversight agencies, though this instrument will need to be tailored to each specific interviewee.

5. Program Participants and Individuals Not Participating in the Program

During the process evaluation, evaluators should conduct semi-structured interviews with randomly selected fathers (and other individuals) who have and have not participated in responsible fatherhood program activities. In contrast to the more structured and larger-sample surveys that might be conducted as part of the impact evaluation, these interviews should be less structured and should involve probing of participant and non-participant views on the responsible fatherhood program and its effects. If possible, participants and non-participants should be interviewed individually; if not, they should be interviewed in small focus groups (with 5 to 7 individuals). Interviews with participants could be structured using questions from the discussion guide found in Appendix C.

Participants should be asked about how they first heard about the responsible fatherhood program, why they decided to join and stay with the program, which activities have been most (and least) helpful, and what types of services they felt were missing but needed. They can also provide anecdotal information about their experiences with the program and how it has helped them to overcome problems. They may also be able to describe ways in which their family and other participants were (or were not) assisted by the program.

To supplement the information collected through interviews with participants, it would also be important to conduct interviews with the mothers of participants' children. Such interviews would provide valuable information about how the mother, children, and other family members may have been involved in and affected by services received through the responsible fatherhood initiative. Such interviews would also provide an interesting point of comparison with the perspectives of program participants (e.g., do the views of the father and the mother coincide with respect to the effects of the program on the father's relationship with the children?).

Another possible source of information would be participant self-evaluations that could be completed at various points in each participant's involvement in the program. For example, such self-evaluation could be completed at the end of receipt of a specific service (e.g., at the end of an eight-week parenting class) or at periodic points if a service is ongoing (e.g., at three-month intervals as an individual proceeds through one-on-one counseling). Participants could be asked to rate the quality of services received (e.g., on a five-point scale), the effects the services had on themselves and their families, and suggest ways in which services might be improved. Such information would be valuable both from the standpoint of evaluating the program and providing rapid feedback for improvement of individual program components. Inclusion of such information in the automated data system would be helpful to both program managers and the evaluator.
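As a concrete illustration, the sketch below shows one way such self-evaluation ratings might be recorded and summarized for rapid feedback. The record fields, service names, and five-point scale anchors are illustrative assumptions, not a prescribed instrument.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical self-evaluation record; the fields and scale anchors are
# illustrative assumptions, not a prescribed instrument.
@dataclass
class SelfEvaluation:
    client_id: str
    service: str           # e.g., "parenting_class"
    period: str            # e.g., "week 8" or a quarterly checkpoint
    quality_rating: int    # five-point scale: 1 = poor ... 5 = excellent
    perceived_effect: int  # five-point scale: 1 = none ... 5 = substantial
    suggestions: str       # open-ended comments for program improvement

def average_quality(evals):
    """Per-service average quality rating, for rapid feedback to managers."""
    by_service = {}
    for e in evals:
        by_service.setdefault(e.service, []).append(e.quality_rating)
    return {svc: mean(ratings) for svc, ratings in by_service.items()}

evals = [
    SelfEvaluation("C001", "parenting_class", "week 8", 4, 5, "more evening sessions"),
    SelfEvaluation("C002", "parenting_class", "week 8", 3, 3, ""),
    SelfEvaluation("C001", "counseling", "2024-Q1", 5, 4, ""),
]
print(average_quality(evals))  # per-service averages across participants
```

Keeping ratings in structured records like these, rather than on paper only, is what allows them to flow into the automated data system mentioned above.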

Non-participants may be able to provide additional background information on their neighborhood. They should be able to describe some of the types of problems they face at home and in their community. If they have heard of the responsible fatherhood program, they can also explain what they think it is, how it is perceived within the community and among other fathers, and why they are not participating in the program.

6. Community Leaders and Residents

Responsible fatherhood initiatives are expected not only to improve the lives of program participants, but also to affect their families and communities. As a result, it will be important to interview community members. As with the interviews conducted with participants, interviews with community members should include many open-ended (versus closed-ended) questions and probing of respondents. Some of these interviews (perhaps one-third) should focus on community leaders (e.g., religious leaders, local politicians, members of local neighborhood associations, etc.). The other interviews should be with randomly selected members of the local community. Appendix C contains a discussion guide that illustrates some of the questions that could be asked of community leaders and/or residents.

In general, community leaders and residents should be able to provide contextual information about community problems and service needs. They may also have knowledge about other programs that exist or have existed in the community and reasons for their successes or failures. Interviews with community leaders and residents can also be useful for obtaining information on the extent of knowledge about the program and its objectives among community residents. Community leaders and residents familiar with the responsible fatherhood program may be able to provide some insights on how the program was implemented within the community and whether the program has had any demonstrable effects on participants, their families, and/or the surrounding community. Even if they are not aware of the program, community residents may be able to suggest ways in which the program can be more responsive to community needs.

B. Secondary Data

There are two major types of secondary data that should be collected as part of the process evaluation: (1) information that exists in program documents; and (2) data collected as part of a client management information system. These two sources of information are discussed below.

1. Existing Documents

Case Files: Responsible fatherhood program sites are likely to have a case file system in place, which includes a series of written forms for assessing and tracking program participants. The sites we visited in developing this evaluation design maintain a number of forms and written notes on each participant in their program. For example, in one program, a short (one-page) intake form is usually completed during an initial in-home visit to a potential participant. This form captures some basic demographic data about the individual -- age, ethnicity, marital status, last grade completed, employment status, legal concerns, and several other items -- as well as additional data about other family members (e.g., name, whether paternity has been established, relation, date of birth, and address/telephone number). Other forms used by this program focus primarily on establishing participant goals and the action steps needed to achieve them, and on monitoring progress toward the goals. These forms consist mostly of handwritten notes (and could not be entered into an automated data base, except perhaps in the form of a text file). The number of contacts and hours of counseling are maintained for each participant (on a daily and monthly basis). In addition to the forms described above, case managers and counselors maintain narrative notes within case files that document discussions with fathers and other family members (particularly during counseling and case management sessions) and recommended courses of action.

As part of the process evaluation, the evaluator should review a randomly selected sample of case files at each site. The narrative notes maintained in case file records reveal both the wide variety of problems encountered by participants and the courses of action taken in response by case managers and participants. A case file abstraction form might be used by the evaluator to systematically abstract (and analyze) client problems and recommended solutions, and to determine the extent to which clients demonstrated improvement.
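The sketch below illustrates what a case file abstraction record might look like and how abstracted problems could be tallied across a sample of files. The problem and action categories are hypothetical; in practice the evaluator would define them before the file review.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical abstraction record; problem and action categories would be
# defined by the evaluator before the case file review.
@dataclass
class CaseAbstract:
    client_id: str
    problems: list = field(default_factory=list)   # e.g., ["unemployment"]
    actions: list = field(default_factory=list)    # e.g., ["job referral"]
    improvement: str = "unknown"                   # "improved" / "no change" / "worse"

def tally_problems(abstracts):
    """Count problem types across a sample of abstracted case files."""
    counts = Counter()
    for a in abstracts:
        counts.update(a.problems)
    return counts

sample = [
    CaseAbstract("C001", ["unemployment", "child support arrears"], ["job referral"], "improved"),
    CaseAbstract("C002", ["child support arrears"], ["payment plan"], "no change"),
]
print(tally_problems(sample))  # frequency of each problem type in the sample
```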

Statistical Reporting and Other Program Documents: Data on levels of program participation and service provision may be maintained by each site and submitted in the form of a monthly, quarterly, or annual progress report to funding agencies. Such reports may begin with a written summary of the site's program activities for the reporting period. The narrative portion (if one exists) is likely to provide a history (e.g., month-by-month record) of implementation experience at each site, including issues such as staff turnover, space constraints, and coordination problems.

The report may also provide statistical information on client characteristics and service delivery (e.g., monthly counts of the number of participants receiving counseling services). The progress reports should be collected and reviewed for each site in the evaluation. If they extend back before the evaluation, they can provide background on how the program evolved and changed over time, as well as a baseline of statistical data against which current service levels and outcomes can be analyzed.

However, because reporting formats for statistical data are likely to have changed over time before the start of the evaluation effort, and because methods for collecting and reporting statistical data may be inconsistent across sites, it is not recommended that the statistical portion of these reports be used as a source of data on program participation or services. Before the statistical portion of these progress reports could contribute to the process evaluation (e.g., to show trends in service utilization), a standardized reporting format is needed, along with regular quality control checks to ensure that standard definitions are being used across sites (e.g., what constitutes a participant or receipt of a service by a participant). If possible, the evaluator should design (during the design phase of the evaluation effort) and implement a standardized monthly progress reporting system (backed up by individual client records) that each site can use throughout the evaluation period.
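A standardized reporting system of this kind might work as sketched below, building site-level monthly counts directly from individual client records so that the aggregate figures can always be traced back to their source. The definition applied here -- a "participant" is any client with at least one recorded service contact in the month -- is an assumption for illustration; the actual definitions would be agreed upon by the evaluator and the sites.

```python
from collections import defaultdict

# Backing client records: (client_id, site, month, service). All values
# are invented for illustration.
contacts = [
    ("C001", "Cleveland", "2024-01", "counseling"),
    ("C001", "Cleveland", "2024-01", "job_placement"),
    ("C002", "Cleveland", "2024-01", "counseling"),
    ("C003", "San Diego", "2024-01", "parenting_class"),
]

def monthly_report(records, month):
    """Per-site counts using one shared definition: a participant is any
    client with at least one recorded service contact in the month."""
    participants = defaultdict(set)
    service_contacts = defaultdict(int)
    for client, site, m, service in records:
        if m == month:
            participants[site].add(client)   # unique clients served
            service_contacts[site] += 1      # each contact = one service receipt
    return {site: {"participants": len(clients),
                   "service_contacts": service_contacts[site]}
            for site, clients in participants.items()}

print(monthly_report(contacts, "2024-01"))
```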

In addition to progress reports, each program is likely to maintain (in varying degrees) other program documentation, such as their original proposal(s) for funding, directives from funding agencies, pamphlets and flyers, memoranda, and other planning documents. All of these may be helpful to the evaluator in describing the design, start-up, and ongoing operations of the program.

2. Client Forms and Management Information System (MIS)

A potentially valuable data source for the evaluation effort (as well as for day-to-day program operations and reporting) is a comprehensive and valid automated system of client records. Such a data system could be developed prior to the initiation of a process evaluation and is an important management tool for programs to develop even if a process or impact evaluation is not undertaken. The sections below provide a suggested outline of an automated participant management information system (MIS). The discussion begins with a description of manual forms that might be completed by responsible fatherhood program staff, followed by a suggested model for an automated MIS that could be used by each site to track program participants. The system should be designed, to the extent possible, to: (a) minimize implementation costs; (b) minimize the burden of data collection and entry for site staff; (c) provide case managers with client-level data for assessing client risks and long-term tracking of the client caseload; (d) collect data that will permit objective analysis of client characteristics, risk factors, and outcomes; and (e) track the types of services received by each client.

a. MIS Forms

To ensure that high-quality and complete data are collected on clients and to assist case managers in the delivery of services, a standardized set of client forms should be developed that tracks participants from intake into the responsible fatherhood program through program exit and, if possible, for a year or longer afterward. Examples of the types of forms necessary include the following:

Intake Form: An intake form should capture basic demographic characteristics and other relevant background information to be used by intake workers for eligibility processing and to begin developing a client record on each potential participant. This form should be completed during the client's first or second contact with the program. It should not be so long or burdensome that it becomes a deterrent to participation in the program.

Assessment Form: An assessment form is helpful both to case managers and counselors in formulating strategies for assisting participants and as a source of useful information for the evaluation of the responsible fatherhood initiative. This form should be completed when the individual is enrolled in case management services, during his or her first several contacts with the case manager.

Service Utilization Form: This form is used to track the services received by participants on a monthly or quarterly basis. It should be completed by case managers for each participant within his/her caseload.

Outcome Form: This form is used to document participant outcomes (e.g., establishment of paternity, completion of education or training programs, finding a job, etc.). Data should be entered onto this form periodically (e.g., quarterly) or at a minimum at the time the participant exits from the program.

Although the forms used by each program do not have to be identical across sites, it is strongly recommended that sites maintain at least a core of similar data on participants, their families, risk assessment, service utilization, and outcomes. Without some degree of conformity, it is difficult for evaluators to use MIS data to make relevant and valid comparisons across sites.
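To illustrate, the sketch below expresses one possible shared core of data elements for the four suggested forms. The specific fields are assumptions; sites could append their own elements, but maintaining a common core like this is what makes valid cross-site comparison possible.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative core data elements for the four suggested forms; the
# specific fields are assumptions, and sites could add their own.

@dataclass
class Intake:
    client_id: str
    intake_date: date
    birth_date: date
    ethnicity: str
    marital_status: str
    last_grade_completed: int
    employed: bool

@dataclass
class Assessment:
    client_id: str
    assessment_date: date
    paternity_established: bool
    risk_factors: list            # e.g., ["substance_use", "criminal_record"]

@dataclass
class ServiceUtilization:
    client_id: str
    period: str                   # reporting month or quarter, e.g., "2024-01"
    service: str
    hours: float

@dataclass
class Outcome:
    client_id: str
    outcome_date: date
    outcome: str                  # e.g., "paternity_established", "job_placement"
    recorded_at_exit: bool = False
```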

b. Suggested Design of a Client MIS

It will be necessary for the evaluator to work closely with the responsible fatherhood site(s) on the design, development, and implementation of a client MIS. In a multi-site evaluation effort, it is recommended that an advisory committee be formed that includes representatives from each site included in the evaluation effort, the evaluator, and other personnel with expertise in PC-based data systems. This group should work collaboratively on the development of a system that will effectively meet the operational, reporting, and evaluation needs of all parties.

Because staff time and energy are expended on developing and maintaining the MIS (e.g., completing and entering client forms), it is imperative that staff get some type of "return" for their efforts. For example, the system should assist case managers with assessment and better tracking of participants, as well as reduce duplicative data entry and manual counts for (monthly/quarterly) progress reports. Hence, the MIS should include a report-generating capability that enables program staff to easily generate aggregate monthly statistical reports and other reports on clients to suit their needs.

Data Files and Entry Formats: Once there is agreement on a set of forms, it is necessary to design and test data files and data entry formats. There are a variety of data base software packages that can be used to automate the system (e.g., DBASE, FoxPro). Whatever data base package is selected, its data should be readily transferable to other applications, such as software for conducting statistical analyses. If a multi-site system is developed, each site should be able to enter and edit data, sort/index data, delete individual records, and print out reports. The data structure and data entry screens should be set up so that they can be easily altered to customize the application for the sites at the time the system is installed, or to add new data elements or additional forms in the future. In addition, the system should be designed so that each site can create its own supplemental data files, which can be easily matched with the core MIS data file (using a unique client identifier, such as Social Security Number or a client ID number).
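The matching of site-specific supplemental files to the core MIS file might work as in the minimal sketch below, where records are joined on a unique client identifier. The field names and values are invented for illustration.

```python
# Core MIS records and a site-defined supplemental file, both keyed by the
# same unique client identifier. All field names and values are invented.
core = {
    "C001": {"age": 24, "site": "Cleveland"},
    "C002": {"age": 31, "site": "Cleveland"},
}
supplemental = {
    "C001": {"mentor_assigned": True},   # extra element tracked by one site
}

def match_files(core_file, supplemental_file):
    """Left-join the supplemental file onto the core MIS records."""
    return {client_id: {**record, **supplemental_file.get(client_id, {})}
            for client_id, record in core_file.items()}

for client_id, record in match_files(core, supplemental).items():
    print(client_id, record)
```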

Reporting: The report-generating software used will depend on the software selected to operate the system. It should allow users to print out both aggregate (summary) reports and reports showing individual data on clients. This enables sites and the evaluator to monitor the quality of the client data files and to verify the aggregate statistical reports submitted by sites summarizing the number and characteristics of fathers served, types of services provided, and outcomes. There are a number of low-cost and highly flexible report-generating programs available for this purpose.
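The sketch below illustrates this dual reporting capability: an aggregate summary plus an individual-level listing that allows the evaluator to verify the aggregate figures against the underlying client records. The data and field names are illustrative.

```python
# Client-level records; values are invented for illustration.
clients = [
    {"client_id": "C001", "service": "counseling", "outcome": "job_placement"},
    {"client_id": "C002", "service": "counseling", "outcome": None},
    {"client_id": "C003", "service": "parenting_class", "outcome": "paternity_established"},
]

def aggregate_by_service(records):
    """Aggregate count of clients in each service category."""
    summary = {}
    for r in records:
        summary[r["service"]] = summary.get(r["service"], 0) + 1
    return summary

print("Aggregate:", aggregate_by_service(clients))
print("Individual listing (for verification):")
for r in clients:
    print(f'  {r["client_id"]:<8}{r["service"]:<18}{r["outcome"] or "-"}')
```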

Computer Hardware and Software: Sites may need to upgrade their existing computer hardware or software to operate the MIS. If needed, the evaluator should help with selection of equipment to ensure that it is compatible with the automated MIS application that is developed. In addition, the evaluator could assist sites with the purchase of statistical and/or graphics software that sites could use for their own analysis efforts.

V. Analysis and Reporting

The next two sections illustrate a potential approach to (a) describing and assessing the results of a process evaluation of responsible fatherhood initiatives and (b) reporting those results and their implications. In addition to supporting the overall evaluation effort, the results of the process evaluation could be used as feedback to assist sites in making operational changes to enhance program performance. For example, the process evaluation can be important in helping sites to draw from one another's experiences and in providing helpful feedback on how to better target services on the specific needs of program participants. The discussion below assumes a multi-site evaluation design. Process evaluations of several different programs in a multi-program evaluation, or of a single site, would involve many of the same sorts of analyses.

A. Analysis

There are several levels of analysis that should be conducted as part of the process evaluation. The analysis should begin at the site level, move on to comparisons across the responsible fatherhood sites, and conclude with a synthesis of the findings across sites. In a multi-site evaluation, it is important to document whether there are significant differences in the characteristics of the sites that might affect program outcomes. For a single-site evaluation it is also important to understand factors that will affect program outcomes, but it will not be necessary to determine how those factors differ across sites.

Descriptive Analysis of Each Site: The first step in analyzing data involves examining the responsible fatherhood program results at the site level. Without a thorough understanding of each site's experience, the overall evaluation effort is likely to fail. Therefore, the analysis effort should begin with the development of a case study report on each site. The case studies should be based on client-level data, site visits and interviews, project-level documents and reports, and other sources of information on each site included in the evaluation. Each case study should include a complete description of the project design, start-up activities, organization of the program, types of fathers and families served (and not served), types of services provided and the delivery system, and subjective assessments of the benefits and costs of the approach.

Comparative Analysis and Synthesis of Findings Across Sites: Once the site-level analysis is completed, the evaluator should conduct a comparative analysis across sites. The site-level analysis should provide much of the information that is necessary for both generating cross-site comparisons and for synthesizing results across sites. This type of analysis might include cross-site comparisons along the following dimensions:

For example, systematic comparisons of the characteristics of each of the sites included in the evaluation will be important. Areas of comparison across sites might include relative funding levels, types of services/activities provided, and outreach and recruitment efforts. For some characteristics (such as funding levels, participation, and date of initiation) it may be possible to make quantitative comparisons. In other areas -- for example, specific services offered to participants or problems encountered in program start-up -- the comparisons will involve more qualitative assessments. The assessments in this area should be rich in narrative comparing and contrasting the design features of the demonstration sites.

It should then be possible to compare the characteristics of program participants across demonstration sites. For example, comparisons can be made across basic demographic characteristics (e.g., age, race/ethnicity, marital status, levels of education achieved, etc.) and selected background factors that might affect participant outcomes in the program (e.g., past patterns of employment, use of illegal drugs and alcohol, criminal record, etc.). The evaluators can generate frequencies from both aggregate data submitted by each of the sites and client-level data collected in the MIS. The advantage of working with the client-level data is that it should be possible to analyze the types of fathers served by each site. For example, it should be possible for the evaluator to cross-tabulate age, race, and a variety of risk factors of participants to describe the types of fathers that have been served by each site.
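For example, a cross-tabulation of this kind could be produced from the client-level MIS data along the lines of the sketch below (a hypothetical extract using pandas; the variables and categories are assumptions).

```python
import pandas as pd

# Hypothetical client-level extract from the MIS; variables and categories
# are assumptions for illustration.
participants = pd.DataFrame({
    "site":            ["Cleveland", "Cleveland", "San Diego", "San Diego"],
    "age_group":       ["<25", "25+", "<25", "25+"],
    "criminal_record": [True, False, True, True],
})

# Types of fathers served by each site: age group by site...
print(pd.crosstab(participants["age_group"], participants["site"]))

# ...and the share within each site presenting a given risk factor.
print(participants.groupby("site")["criminal_record"].mean())
```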

Next, the analysis effort should assess the types of services provided and received by program participants. For example, comparisons might be made of the percentage of participants within each site that received each type of service. A further step, if data are available, might involve comparisons of the average hours of assistance received within specific types of service categories (e.g., individual and family counseling).
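These service comparisons might be computed as in the sketch below, which derives the percentage of each site's participants receiving each service and the average hours per recipient. The data and the assumed enrollment counts are invented for illustration.

```python
import pandas as pd

# Hypothetical service utilization records and per-site enrollment counts.
services = pd.DataFrame({
    "site":      ["Cleveland", "Cleveland", "Cleveland", "San Diego"],
    "client_id": ["C001", "C002", "C001", "C003"],
    "service":   ["counseling", "counseling", "job_placement", "counseling"],
    "hours":     [10.0, 6.0, 4.0, 8.0],
})
enrolled = pd.Series({"Cleveland": 2, "San Diego": 1})  # assumed totals

# Percentage of each site's participants receiving each type of service.
received = services.groupby(["site", "service"])["client_id"].nunique()
print(received.div(enrolled, level="site").mul(100))

# Average hours of assistance per recipient, by site and service category.
print(services.groupby(["site", "service"])["hours"].mean())
```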

Finally, evaluators should examine gross outcomes for program participants. These analyses will set the stage (and provide some preliminary findings) for the more elaborate and controlled analyses planned under the impact evaluation. Cross-site analysis in this area should begin with a comparison of relative frequencies on a range of key outcome variables. Such an analysis should provide some clues about the impacts of programs, although it will have limited value in explaining whether successes or failures are the result of the types of fathers that are served, environmental factors, or the site-specific intervention. Analyses of participant outcomes could be done on a variety of measures, such as establishment of paternity, fathering new children, quantity/quality of interactions of fathers with children, changes in educational attainment, patterns of employment, incidence of incarceration and criminal activity, and use of alcohol and illegal drugs.

At this point, it may also be possible to begin to examine (as part of the process evaluation but leading to the impact evaluation) potential relationships that may exist between changes in participant outcomes and (a) participant demographic characteristics, (b) measures of participant risks, and (c) involvement in various interventions within each site. For example, it may be possible to conduct a cross-site comparison of paternity establishment by selected participant characteristics.
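Such an exploratory comparison might look like the sketch below, which tabulates paternity establishment rates by site and a selected participant characteristic. The data are invented, and, as noted below, these descriptive rates support exploration only, not causal inference.

```python
import pandas as pd

# Invented client-level data; descriptive rates only, not causal estimates.
df = pd.DataFrame({
    "site":      ["Cleveland"] * 3 + ["San Diego"] * 3,
    "age_group": ["<25", "<25", "25+", "<25", "25+", "25+"],
    "paternity_established": [True, False, True, True, True, False],
})

# Paternity establishment rate (%) by site and selected characteristic.
rates = df.groupby(["site", "age_group"])["paternity_established"].mean()
print(rates.mul(100).round(1))
```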

Some crude analyses of the costs of program services might also be possible based on data collected during the process evaluation. For example, it may be possible to compare, across responsible fatherhood sites, the per-participant costs of providing specific types of services (e.g., counseling sessions).
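A crude cost comparison of this kind reduces to simple arithmetic, as the sketch below shows; the cost and participation figures are invented for illustration.

```python
# Invented cost and participation figures for one service category.
counseling = {
    "Cleveland": {"total_cost": 48_000, "participants": 120},
    "San Diego": {"total_cost": 30_000, "participants": 60},
}
for site, d in counseling.items():
    per_participant = d["total_cost"] / d["participants"]
    print(f"{site}: ${per_participant:,.2f} per participant")
```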

Overall, the comparative analysis, which is likely to depend primarily on qualitative assessments, frequencies, cross tabulations, and standard statistics (e.g., mean, median, ranges), should set the stage for more elaborate explanatory analyses that would be conducted as part of the impact evaluation. Results of the comparative analysis across sites must be interpreted with caution.

Responsible fatherhood program sites will be serving participants with varying characteristics in different economic, social, and policy environments. The process evaluation will not provide controls (e.g., comparison group data) sufficient to allow inferences about the impacts of the program on outcomes for participants.

B. Reporting

Two closely related final reports can be developed as a result of the process evaluation: a synthesis report and a case studies report. The first report can serve as an overall (process) evaluation of the responsible fatherhood sites studied, while the second report provides detailed case studies of each program site, which may facilitate the replication of successful aspects of responsible fatherhood initiatives in other locations.

The final synthesis report should provide full documentation on the study, including: an executive summary, objectives of the study, the evaluation methods used, analyses of interview and site visit information, analyses of site-level and participant-level data files, and related findings, conclusions, and recommendations. Several appendices (or working papers) might accompany this report -- for example, providing documentation on data bases developed, background on data collection instruments, and procedures used during the site visits. An important part of the final report should address the policy implications for future development of responsible fatherhood programs. Preliminary options for promoting effective strategies for assisting non-custodial fathers (and their children) should be developed. For each option identified, any restrictions on funding levels, required matching funds, or eligibility should be noted where applicable. In addition, study findings should be used to develop or support the options that are identified.

The second report -- case studies of each of the sites evaluated -- should convey the design, ongoing operations, delivery system, types of fathers served and not served, participant outcomes, and program costs. This report could provide a separate chapter on each responsible fatherhood site evaluated. Each case study should be structured similarly (although each may include different sub-sections) and organized so it can stand on its own (e.g., a case study of a site could be reproduced for dissemination as a stand-alone document).

VI. Conclusion

This chapter outlines a basic design for a process evaluation of responsible fatherhood programs. The design addresses a series of evaluation questions aimed at understanding, among other things, how the program was implemented, what assistance/services it provides, and whom it serves. The design also considers changes in participant outcomes, which sets the stage for a more in-depth impact analysis. It is suggested that a process evaluation be started as early in the overall evaluation effort as possible. By starting early, evaluators will be able to begin providing feedback to the sites so that they can better target and refine their service delivery. Early implementation of the process evaluation will also support the impact evaluation component by providing information that may be used to develop the methods for sampling, data collection, and data analysis.

It warrants repeating that the effort required to implement a full-fledged process evaluation is substantial. Given limited resources, an important first step is narrowing the scope of the evaluation questions. Selecting the questions that are most important to the stakeholders will provide focus to the data collection and analysis activities, and thereby reduce the resources required.

