Financial Aid Application Completion
TECHNICAL PAPER
Financial Aid Application Completion as a Practical Measure
Characteristics of Practical Measures | Descriptions of the Financial Aid Application Completion Measure |
Is closely tied to a theory of improvement | The Financial Aid Application Completion measure is used as a driver measure to monitor shifts in financial access, one of the three drivers in the CARPE College Access Network’s theory of improvement. The theory is that improvement in the Free Application for Federal Student Aid (FAFSA) completion rate will contribute to improvement in the college enrollment rate (i.e., the percentage of students enrolled in tertiary institutions where they are most likely to graduate). |
Provides actionable information to drive positive changes in practice | By disaggregating the data by network, by school, by student, and, in some cases, by counselor and by targeted student group, the hub has transformed publicly available but hard-to-use data into accessible, actionable information for diverse stakeholders. The aggregate dashboards are most helpful to school leaders, such as assistant principals, who use them to make data-driven decisions that create conditions maximizing the impact of change ideas targeting FAFSA application completion. Meanwhile, the by-student data is most pivotal to counseling team members, who use it to prioritize their efforts and keep them consistent with the Multi-Tiered System of Supports (MTSS) framework. |
Captures variability in performance | By building dashboards, the CARPE hub gives schools a means to understand variation among student groups and among individuals, as well as variation over time (e.g., 2019-20 data plotted against 2020-21 data). In addition, the hub visualizes school-aggregated data using 1) small multiples ordered by the number of students in each school and 2) a frequency table sorted by the percentage of application completers. These visuals capture variation across schools and allow the hub to be agile in tailoring coaching plans that meet schools where they are. |
Demonstrates predictive validity | The CARPE hub has built dashboards for both Financial Aid Application Completion and Verified College Enrollment (data pulled from the National Student Clearinghouse) and has examined how changes in one relate to changes in the other. The hub found that among schools with the largest increases in FAFSA application completion in 2018-19, college enrollment also increased, more at two-year colleges than at four-year institutions. This finding was promising, but because FAFSA application completion accounted for less of the variance in college enrollment than expected, the hub concluded that focusing solely on FAFSA application completion was not enough to achieve the network’s aim. It therefore expanded the improvement work to other drivers, such as college application completion. |
Is minimally burdensome to users | One of many advantages of using extant measures like the Financial Aid Application Completion measure is that users do not have to spend extra time collecting and documenting data. The time saved can be spent implementing social learning routines (e.g., data huddles) that facilitate knowledge consolidation and cross-pollination. |
Functions within social processes that support improvement culture | The CARPE hub encourages the schools to interact with the Financial Aid Application Completion data system at least bi-weekly so the measure is more of a tool for learning and less of a tool for evaluation. This aligns with the hub’s emphasis on building a network culture that prioritizes measurement for improvement over measurement for accountability. All data displays and routines are set up in ways that facilitate collaboration and not competition. |
Is reported on in a timely manner | The by-network and by-school data are updated weekly by the CARPE hub, while the by-student data is updated bi-weekly by a data team lead from each school. The processes are highly standardized, and the frequency of data pulls may increase during critical periods, particularly in October and from January to March, as it is important to both start strong and end strong. Timely information can be provided to improvers because the hub invests time and resources not only in designing and testing data infrastructure and routines but also in building network members’ capacity to organize, visualize, and understand data so they can intensify their data work when the moment calls for it. |
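The predictive-validity check described in the table above amounts to correlating per-school changes in the two measures. A minimal sketch in Python, where every school value is invented for illustration and is not CARPE network data:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-school year-over-year changes, in percentage points.
fafsa_change      = [12.0, 8.0, 5.0, 3.0, -1.0]
enrollment_change = [ 6.0, 5.0, 2.0, 1.0,  0.0]

r = pearson(fafsa_change, enrollment_change)
r_squared = r * r  # share of enrollment variance associated with FAFSA change
```

A high `r` supports using the driver measure, while an `r_squared` well below 1 echoes the hub’s conclusion that FAFSA completion alone cannot account for enrollment outcomes.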
Question on Practical Measures Inspired by the Financial Aid Application Completion Measure
How might we deal with a lack of consensus over how something should be measured?
The improvement principle “make the work user-centered” is applicable here. For the CARPE College Access Network, the operational definition of the primary driver, financial access, comprises FAFSA completion, Cal Grant awards, and the ability to pay for college.
With such a definition, one would expect the measures of the driver to be straightforward. However, as the CARPE hub members dove into calculating the FAFSA completion rate, they realized that there was little agreement over which denominator should be used. Should it be the total number of senior students enrolled in a school? Or, should it only count the number of senior students who have expressed interest in going to college?
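The stakes of this denominator choice can be sketched in a few lines; all counts below are hypothetical:

```python
# The same count of completed applications yields different "completion
# rates" depending on which denominator a school uses. Numbers are invented.
completed_fafsa = 180
seniors_enrolled = 300           # all seniors on the school roster
seniors_college_intending = 240  # seniors who expressed interest in college

rate_all_seniors = completed_fafsa / seniors_enrolled                 # 0.60
rate_college_intending = completed_fafsa / seniors_college_intending  # 0.75
```

A 15-point gap from the denominator alone shows why schools comparing rates without a shared definition can reach very different conclusions about the same underlying performance.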
To address this challenge, the CARPE hub encouraged schools to submit their own calculations. The hub then used that data to calibrate the data pulled from the California Student Aid Commission Race to Submit Dashboard and the Department of Education websites. An advantage of this approach is that it can help elevate user voice and build trust in both the process and the data, which is crucial to continuous improvement.
As the school-reported FAFSA completion rates started to go up, the CARPE hub needed to address another key improvement question: Were those changes really improvements? After auditing the changes using denominators consistent with publicly available enrollment data, the CARPE hub could confidently conclude that the changes they were observing were real improvements and not merely changes in the school-reported measure.
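The audit step can likewise be sketched. The `audit` helper, the tolerance, and all numbers here are invented for illustration, not the hub’s actual procedure:

```python
# Recompute a school's rate with a publicly reported enrollment denominator
# and flag whether it agrees with the school-reported rate within tolerance.
def audit(reported_rate, completed, public_enrollment, tolerance=0.05):
    recomputed = completed / public_enrollment
    return recomputed, abs(recomputed - reported_rate) <= tolerance

# Hypothetical school: reports a 72% rate; public data shows 300 seniors.
recomputed, ok = audit(reported_rate=0.72, completed=210, public_enrollment=300)
# recomputed == 0.70; a 2-point gap is within tolerance, so the gain looks real
```

When the two rates agree within tolerance, an observed increase is more plausibly a real improvement rather than an artifact of the school-reported denominator.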