Implement Performance Management in Grants

Why It Matters

Effective performance management improves a grant program’s overall outcomes by giving state education agencies (SEAs) a means to:

  • Monitor progress towards goals.

  • Provide feedback to grantees. 

  • Support continuous improvement and adjust grant activities where a grantee or the grant program itself is not on track to meet its goals.

SEAs can engage in performance management at the individual grantee level and can also look across grantees to understand the degree to which the grant program is meeting the SEA’s strategic goals.

The Harvard Kennedy School Government Performance Lab (GPL) notes that a data-driven performance management framework is one that accounts for:

  1. The collection and use of data to understand performance.

  2. Opportunities to develop insights about what is working and not working.

  3. A commitment to taking action to improve outcomes, based on those data and insights.

A key component of performance management is establishing performance measures: the key data points that indicate how well a grant program is working relative to its goals. SEAs should align their performance measures with the desired outcomes they identified as priorities for the grant program.

The federal Performance Improvement Council (PIC) resource on Performance Measures Basics describes seven key considerations for developing performance measures:

  • Meaningful: They reflect desired outcomes and provide useful information to enable decision making.

  • Measurable: They are quantifiable and objective. Data is available and can be collected in a cost-effective manner.

  • Responsive: They link to inputs that can be controlled and adjusted, and data can be compared over time to reveal trends.

  • Easy to Understand: They are intuitive and easily digested by their users without much explanation.

  • Incentivize the Right Behavior: They prompt organizations and individuals to take actions that are consistent with long-term goals.

  • Less is More: They are limited in scope and focused on what an organization specifically wants to improve. Trying to measure everything can distract attention from what is most important.

  • Visible and Transparent: Everyone involved is aware of the measures and held accountable to them. Such transparency helps everyone keep their eye on the ball and understand their contribution to the overarching performance of the organization.

Traditional performance management might focus primarily on ensuring compliance. However, SEAs can design performance management systems and processes that prioritize collaboration between the SEA and grantees and keep the focus on learning and continuous improvement.

To develop a collaborative approach to performance management that focuses on continuous improvement, SEAs can:

  • Set shared expectations about performance management during the application phase.

  • Ensure there will be regular communication points with grantees and other key stakeholders throughout the grant program.

  • Analyze performance measures throughout the grant program.

  • Identify trends and challenges early and in real time.

  • Rapidly and continually problem-solve and course correct, in partnership with grantees.

  • Identify opportunities to rethink and improve the grant program on a long-term basis.


The PIC also identifies four common mistakes in developing performance measures:

  • Too many measures: Organizations that measure everything may end up understanding nothing because insights get diluted in the noise.

  • Misaligned to strategic priorities: Organizations that cannot link their performance “gauges” to their final destination are far less likely to arrive on time, intact, and on budget.

  • Nothing under the hood: Many organizations create measures without corresponding data collection, calculation, or reporting methodologies. These methodologies are essential for implementing the measures successfully.

  • Unrealistic: Sometimes the most well-intended measures are simply outside the possible scope of data collection. Beware of measures that sound great in theory but are much harder to capture in practice.


A grant program’s evaluation plan and performance management plan may not completely overlap; however, SEAs can strive for coherence across the two plans so that the performance management plan generates information that serves both formative and summative evaluation.

Questions SEA leaders can ask themselves as they consider coherence include:

  • In what ways are the purposes behind the performance management and evaluation plans similar and in what ways are they different? What do the similarities and differences suggest about the ways in which you can align the implementation of each?

  • Will grantees be asked to collect and report different data points for the performance management plan and the grant evaluation? If so, what is the rationale for the difference, and are there ways to ensure that the data grantees are collecting can be used for both purposes?

  • Will performance measure data collected via the SEA’s grant management system be available to the grant evaluator? Is the data being collected in a way that allows the SEA and evaluators to connect the performance measure data to data collected by the state in other systems?

  • Are the performance management and evaluation plan timelines aligned in ways that will benefit both?