Compact Development Guidance | February 2017

Monitoring and Evaluation of MCC Compact Projects

Immediately following a country’s compact eligibility determination, MCC’s Monitoring and Evaluation Division begins due diligence on the proposed investments, engaging with project leads early on their concept notes. Below are examples of questions pursued during the due diligence phase. The list of specific questions may be adapted as appropriate.


Goals and Objectives

  • Are the goals and objectives measurable and clearly articulated?
  • Are beneficiaries quantified and demographic characteristics described?
  • Is the program design logically related to the Goals?

Indicators to measure progress towards Program Goals (poverty reduction measure)

  • Are the indicators consistent with economic analysis?
  • Are the indicators credibly linked to Goals and Objectives?
  • Are the indicators easy to understand?
  • Are the indicators consistent with those used by other actors involved (if relevant)?

Indicators to measure progress towards intermediate outcomes

  • Do the intermediate indicators track progress at compact objective and individual activity levels?
  • Can the intermediate indicators be reliably measured and can data be cost-effectively collected (relative to information value)?
  • Are the intermediate indicators limited in number so as to include only the most crucial indicators?
  • Are the intermediate indicators consistent with economic analysis?
  • Are the intermediate indicators credibly linked to poverty reduction indicators described above?
  • Are there indicators that can be used to condition disbursements?
  • Are the intermediate indicators consistent with those used by other actors involved?

Data Source

  • Does the M&E plan use existing data sources?
  • If new data collection is required, is there a broader use beyond MCA program monitoring?
  • Is baseline data available? If not, is there agreement on a plan to collect baseline data?
  • Which data collection agency will be responsible for tracking each indicator? What is the measurement unit, method, and frequency of data collection?
  • Will data be disaggregated by gender, income group, age, or ethnicity where relevant?

Targets for all indicators or plan to establish targets

  • Are targets consistent with the economic growth analysis (as applicable)?
  • Are there annual and final targets?
  • Do targets take into consideration natural rate of growth (as applicable)?

Reporting Schedule

  • Is the reporting schedule consistent with planned disbursements?
  • Is there a plan for public dissemination of program performance?


Assessing an investment’s evaluability

  • Is the problem clearly understood and identified? Are baseline data available that make the case for this problem?
  • Are the program scope and causal logic clearly defined and linked to the economic analysis and program goal?
  • Are the program results plausible and measurable within a reasonable time frame?
  • Are the beneficiaries and location(s) of impact clearly defined?

Determining the ideal evaluation method

Three key questions help determine the relevance of an impact evaluation methodology:

  • Need: Would future MCC investments be informed by the results of a potential impact evaluation?
      ◦ Might the results of an impact evaluation be relevant to future MCC investments?
      ◦ Is there a lack of rigorous literature answering the questions being asked during compact development?
  • Cost-effective Learning: Would an impact evaluation yield information whose added value to policymakers justifies the cost?
      ◦ Can relevant impact data be obtained at a reasonable cost?
      ◦ If data collection costs are relatively high, will the data be useful elsewhere in the compact (other evaluations, monitoring uses, etc.), or are they narrowly useful for the impact evaluation alone?
      ◦ Are stakeholders interested in answering the questions an impact evaluation would address?
      ◦ Is there a project stakeholder willing to champion the impact evaluation?
  • Feasibility: Is a contracted impact evaluation likely to lead to credible, actionable findings?
      ◦ Can a statistically valid counterfactual be used to isolate and attribute results to the project?
      ◦ Is the “control group” a reasonable counterfactual for what might occur without the MCC intervention?
      ◦ Does the rigorous design allow the project to adapt to changing needs? If not, have the specific restrictions in the design (e.g., preserving the control group, the cutoff selection mechanism in an RDD, treatment replacement plans) been communicated to the project team?

In the event that the above questions indicate a low probability of completing a useful, cost-effective impact evaluation for the activity in question, an independent performance evaluation should be planned.

  • Has a particular performance evaluation methodology been determined?  (See Performance Evaluation guidelines for more information)
  • Is there quantitative data that would give an independent evaluator leverage by which to evaluate an activity’s outcomes?
  • Are performance criteria (including program logic) stated in official documents and agreed upon by project leads?

Evaluation implementation

  • Will the independent evaluator be contracted by MCA or directly by MCC?
  • Is there a tentative timeline for evaluation activities during years 1 to 5 of the compact?
  • Does the implementation timeline take into account procurement requirements (plans for contracting a design/implementation evaluation firm/individual and any data collectors)?
  • What data will be used for evaluation (include source, frequency of collection, and party responsible for contracting and managing data collection)?

Other M&E planning guidelines

Data Quality Reviews

  • Are there procedures for assuring data quality?
  • Are data quality reviews timed to address capacity issues early in the compact term, with regular reviews throughout the compact term?

Assumptions and Risks

  • Do assumptions include factors that influence the projected benefits of the program, but are not directly addressed by the program?
  • Are assumptions and risks consistent with economic growth analysis?
  • Is there a plan to mitigate risks where feasible?

Multi-year M&E budget

  • Does the M&E cost estimate include funding in the compact and any direct MCC M&E funding to be used?
  • Do line items take into account all potential costs and contingency costs?

Staffing Plan

  • Are there descriptions of the roles and responsibilities of the M&E director and other staff?
  • Is there a staffing plan that describes the roles of in-house staff, outside consultants, and independent evaluators?
  • Has a country counterpart for economic analysis and M&E been hired/appointed?
  • Does the counterpart have a strong Economics/Statistics background?

Plans for making M&E reports and evaluations publicly available

  • What are the plans for making M&E reports available on the country website?
  • Is there a need for other methods of dissemination?
  • How will civil society, advisory groups, and/or beneficiary groups be involved?

Country capacity to implement the M&E Plan

  • Is there capacity in country to implement the M&E Plan?
  • Has this been confirmed by other donors or a qualified entity?
  • Are there plans and budget for providing technical assistance to address current or future weaknesses in implementing the M&E Plan (as needed)?

Working Sessions

  • Do the country team and MCC team agree on the economic and program logic (links between program components and poverty reduction objectives)?
  • Was there broad participation in the economic analysis and the M&E Plan (especially by potential implementers and technical specialists)?
  • Is there a plan for stakeholder or beneficiary consultation?

Other donors involved in statistical capacity building and data collection

  • Where relevant, is performance measurement coordinated and/or consistent with that of other actors?
  • Are MCC-funded activities consistent with the national plan for statistics?
  • Are capacity-building efforts coordinated or consistent with those of other actors?