Selected Papers in School Finance 1995 (NCES 97-536)

There is no strong or systematic relationship between school expenditures and student performance. (Hanushek 1989, 47)

Relying on the data most often used to deny that resources are related to achievement, we find that money does matter after all. (Hedges, Laine, & Greenwald 1994, 13)

Introduction

On the surface, it would seem that spending more money on education would lead to improved student outcomes. Certainly, if one were to ask school teachers or administrators, they would say that more money would help them provide a higher quality education, which in turn would lead to greater student achievement. Few in education would suggest otherwise, and it is rare indeed for public school officials to say that they could do more with less money, or even with the same amount of money.

Many important educational programs aimed at improving opportunities for groups of students with special needs are based on the assumption that additional resources are essential to their success. Compensatory education programs such as the Federal Title I/Chapter I program and funding for special education are the two largest examples of these programs. Recently, Clune (1994) suggested that to ensure children in poor schools have the opportunity to achieve at high levels, it might be necessary to provide as much as $5,000 per student in additional resources.

Despite this general belief that "money matters," the statistical evidence of a relationship between spending and student outcomes has been mixed. During the 1980s, Eric Hanushek conducted an in-depth analysis of education production function studies and concluded that there is little evidence to support the existence of a relationship between the amount of resources and student achievement (1981, 1986, 1989, 1991). Others, notably Murnane (1991), Ferguson (1991), and most recently Hedges, Laine, and Greenwald (1994), have questioned Hanushek's conclusion, arguing that money does in fact matter.

To date, this debate has centered on complex statistical analyses and detailed discussions of whether or not the production function approach is appropriate for estimating relationships between spending and student achievement. Yet as Monk (1992) points out, most efforts to define an educational production function have failed. Despite this inability to relate educational inputs to outcomes, the belief that money is important to improving school performance maintains a strong following (see, for example, Murnane [1991]).

Recently a number of researchers, notably Picus (1994a) and Cooper (1993), have looked closely at how school districts and school sites use the dollars they actually receive. The most striking conclusion from this work is the consistency in the pattern with which schools spend the funds they receive. Across the United States, schools spend approximately 60 percent of their resources on direct student instruction. This figure holds regardless of how much is spent per pupil, and appears consistent across grade levels. These findings suggest that the effect of new money on student achievement may be limited by the fact that new resources are used in the same way as existing resources.

Given that the United States spent over $250 billion on K-12 public education last year, understanding the impact that money has on student outcomes is important. The purpose of this paper is twofold:

  1. To review existing studies that attempt to answer the question "does money matter?" and provide an objective analysis of the debate in terms of policy outcomes for policymakers; and
  2. To consider alternative approaches to answering the question of whether or not additional resources lead to gains in student outcomes.

To address these issues, the second section of this paper offers a brief discussion of the data on which this discussion is based, tracing the overall pattern of expenditures for elementary and secondary education in the United States during the last century, along with evidence on how student achievement has changed over time. The third section digs more deeply into the production function analyses on the effects of resources on student performance, focusing most of its attention on the current debate between Hanushek and Hedges, Laine, and Greenwald. This section also discusses a number of alternative approaches that merit consideration in attempting to ascertain whether or not there is a relationship between spending and student performance. Finally, the fourth section offers some conclusions and policy recommendations based on the data presented in the earlier sections of the paper.

General Trends In Expenditures And Student Outcomes

Expenditure Trends

Hanushek (1994b) shows that real expenditures (inflation adjusted to 1990 dollars) for public K-12 education in the United States increased from $2 billion in 1890 to nearly $190 billion in 1990. He further points out that growth in expenditures for education was more than three times as fast as growth in the Gross National Product (GNP), with the result that K-12 education represented some 3.6 percent of GNP in 1990, compared to less than 1 percent a century before. Hanushek also states that expenditures on education have grown faster than spending for health care. This is interesting given the intense debate over the growth in health care costs and the relatively little attention spending increases in education have received.

Picus (1994a) showed that real per pupil expenditures in the United States increased by nearly 70 percent during the 1960s, almost 22 percent in the 1970s, and over 48 percent in the 1980s. The total compound increase in educational expenditures between 1959-60 and 1989-90 amounted to 206 percent. Table 1 updates Picus' data, and shows that real per pupil expenditures increased by over 207 percent between 1959-60 and 1991-92. Spending on K-12 education represented approximately 2.8 percent of GNP in 1960, 4.0 percent in 1970, and 3.6 percent in both 1980 and 1990.
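Picus' decade-by-decade figures compound to the roughly 206 percent total he reports. A quick back-of-the-envelope check, using the rounded decade figures from the text (illustrative only, not part of the original analysis):

```python
# Real per pupil spending growth by decade, as reported by Picus (1994a):
# nearly 70% in the 1960s, almost 22% in the 1970s, over 48% in the 1980s.
decade_growth = [0.70, 0.22, 0.48]

# Compound the three decades to get the 1959-60 to 1989-90 increase.
factor = 1.0
for g in decade_growth:
    factor *= 1 + g

total_increase_pct = (factor - 1) * 100
print(f"Total compound increase: {total_increase_pct:.0f} percent")  # → 207 percent
```

The rounded decade figures compound to about 207 percent, consistent with the 206-207 percent totals cited above.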

Table 1 also shows the dramatic variation in spending increases across the 50 states and the District of Columbia during that 33-year period. At the extremes, expenditures in New Jersey increased by over 410 percent, more than four times as fast as they did in Utah where the increase was just over 100 percent. Real per pupil expenditures in New Jersey were only $306 higher than they were in Utah in 1959-60. However, by 1991-92 that difference had grown to $6,277, and New Jersey spent three times as much per pupil as did Utah.

What has this additional money bought? The most obvious answer is more teachers. Barro (1992) estimated that teacher salaries account for 53 percent of all current spending by school districts. Moreover, he estimated that as districts receive additional funds, they spend approximately half on teachers, with 40 percent going to reductions in class size and 10 percent devoted to increased teacher salaries. To demonstrate the effect of this emphasis on reducing class size, the Digest of Education Statistics (NCES 1994) shows that nationally, the pupil/teacher ratio in public K-12 schools declined from 26.9 in 1955 to 17.6 in 1994. Moreover, the pupil/teacher ratio declined in all but two years between 1955 and 1990, and has hovered between 17.2 and 17.6 since 1990.

One reason for this decline is often thought to be the increase in the number of children with disabilities. Since these children are more difficult to educate, they often are enrolled in much smaller classes. Yet Hanushek and Rivkin (1994) show that special education programs account for less than one-third of the recent decline in the pupil/teacher ratio. This means that efforts to reduce regular class sizes have succeeded as well in most states despite limited evidence that smaller classes substantially improve student learning (see, for example, Glass and Smith [1979]; Hanushek [1986, 1989]; and Word et al. [1990]).

Of course if Barro's estimates are correct, then half of the average increase in spending goes to objects other than teacher salaries. One factor responsible for considerable growth in spending in recent years has been benefits paid to school personnel. Hanushek estimates that these so-called fixed charges grew from 7 percent to 14 percent of total spending between 1960 and 1980 (comparable data were not available for 1990). These increases are tied both to the increased number of teachers and other personnel, and to the growing costs of providing benefits such as health care and retirement. There are a number of other important functions that must be considered in the operation of a school system. Central administration, for example, represents only some two to three percent of total expenditures, while operations and maintenance account for approximately ten percent of educational expenditures. Table 2 provides a breakdown of how the more than 15,000 school districts in the United States allocated their funds in 1990-91.

Table 2 shows that school districts spent an average of 60 percent of their current funds on instruction. Research by Picus (1993a, 1993b, and 1994a) not only confirmed that figure, but shows in Table 3 that there is very little variation in that 60 percent figure, despite substantial variations in total expenditures per pupil, and in expenditures per pupil for direct instruction. This pattern has been confirmed by other researchers, most notably Bruce Cooper (1993, 1994). As discussed later in this paper, this lack of variation in the pattern of resource allocation may be part of the reason links between spending and student outcomes have been hard to find, and may well offer possibilities for making the allocation of resources more productive in the future.

Trends in Student Outcomes

Despite this substantial increase in spending for education, many observers have pointed out that student outcomes have not improved substantially, if at all. The results of standardized tests, international comparisons of student performance, and other measures of the outcomes of schooling like attendance rates and college enrollment have not shown the same kind of increase as has real per pupil spending. However, many argue that another important outcome of schooling is success in the labor market, and some have been able to link higher cost resource allocation patterns to improved career earnings. This section begins with a discussion of student performance as typically measured in education circles, focusing most heavily on the results of standardized tests. It concludes with a brief discussion of labor market outcomes and their relation to school spending.

Measures of Schooling. Despite the substantial increases in per pupil spending observed over the last 30 to 35 years, there has not been a commensurate improvement in student performance as measured by scores on standardized tests. One of the most commonly used measures of student performance is the average Scholastic Assessment Test (SAT) score. Figure 1 compares the average SAT score for high school seniors with the change in real per pupil spending between 1968 and 1993. This familiar figure is often cited in discussions of why more money will not lead to greater student outcomes. Equally well known are the arguments that discredit this analysis: the increase in the number of students taking the SAT, and the growing numbers of test-takers who are minority students or for whom English is not their first language. As a result, some argue that the trend in SAT scores is not only expected, but that it represents an improvement over the past.

Bracey (1994), for example, argues that the standards for the SAT were set in 1941 based on the performance of fewer than 11,000 students, 98 percent of whom were white, 60 percent of whom were male, and most of whom lived in the Northeast. Moreover, a very high proportion attended private high schools and intended to enroll in private colleges and universities. Bracey goes on to suggest that the bell curve imposed on this group led to only 6.68 percent of them scoring above 650 on the mathematics section of the SAT. Today, prior to the re-centering of SAT scores, some 11 percent of the more than one million students taking the test score above 650 on the mathematics section, despite the fact that 30 percent are minority, 52 percent are female, and over 30 percent come from families with incomes less than $30,000 per year. However, only 3.3 percent score that high on the verbal portion of the test (Digest of Education Statistics, 1994).

What is clear is that the SAT results do not provide a definitive answer to the question of whether or not the additional money spent on education leads to improved student outcomes. Moreover, because of the diversity in school districts, and the variation in the percentage of students taking the SAT across states, it is not possible to correlate average state or district SAT scores with per pupil expenditures.

Another way to measure trends in student performance is to review the results of the National Assessment of Educational Progress (NAEP) over time. Mullis et al. (1994) show the following trends in performance for students age 9, 13, and 17 in science, mathematics, reading, and writing:

  • Science. Performance in science declined for all three age groups in the 1970s, but improved during the 1980s. By 1992, science performance at age 9 was higher than it was in 1969-70, while it was lower for 17-year-olds and the same for 13-year-olds. Thus, only 9-year-olds performed significantly better in science in 1992 than did 9-year-olds in 1969-70.
  • Mathematics. Proficiency in mathematics improved between 1973 and 1992 for those aged 9 and 13, while for 17-year-olds, proficiency declined in the 1970s and improved during the 1980s, returning to approximately the same level as observed in 1973.
  • Reading. For 9-year-olds, reading performance improved in the 1970s, declined in the 1980s, and by 1992 was at about the same level as the first assessment in 1971. Although there was little change in reading performance over time for 13-year-olds, average performance over the 21-year period improved. For 17-year-olds, there were significant gains from 1971 to 1984, although reading performance has been flat since then.
  • Writing. Assessed by grade level rather than age, there has been little change in writing performance between 1984 and 1992 for 4th and 11th graders. Fourth graders showed a decline in 1990, and an improvement in 1992. On the other hand, 8th graders showed a drop in performance between 1984 and 1990, with a significant improvement by 1992. The size of the 8th grade gain is so large that many are questioning the results, and NAEP officials are taking a wait-and-see approach until additional assessments are completed.

However one looks at these data, student performance in all four subject areas did not improve at the 22 percent rate of increase in spending during the 1970s, nor at the 48 percent increase of the 1980s.

There have also been a number of international comparisons of the performance of U.S. students with students in other countries. Stevenson and Stigler (1992) compared the performance of students in Chicago and Minneapolis with similar-aged students in Japan, Taiwan, and China. They show how poorly our students do compared to students in similar-sized cities in those countries. A number of other studies have also shown children in the United States performing at lower levels than similar-aged children in other countries.

These findings are somewhat controversial and highlight many of the difficulties involved in comparing student achievement across national borders. However, Bracey (1995) suggests that closer analysis of international test results shows that students in the United States do not do as poorly as we have been led to believe. He particularly points out that in the rankings of the Second International Assessment of Educational Progress, American 9-year-olds were ranked third in the world in science. Although the 14-year-olds were ranked 13th out of 15 nations, Bracey goes on to point out that in both cases the American students' scores were very close to the average of the entire sample, and that the outcomes by country were very closely bunched.

Despite the continued debate over the comparability of test scores over time in the United States, and across nations, it is clear that measures of student performance have not increased at the same rate as spending. In and of itself, this does not indicate that money does not matter. To deal with that question, it is important to consider whether or not systematic links between educational resources and student outcomes can be found. Before looking more closely at analyses of this complex question, it is helpful to consider the impact of educational spending on labor market outcomes.

Labor Market Outcomes. It can be argued that an important outcome of schooling is the ability of graduates to find and keep good, high-paying jobs. While making the link between educational resources and employment (as measured by lifetime earnings or a similar measure) is difficult, on the basis of national data, Card and Krueger (1990) found that men who were educated in states with relatively small classes in the public schools and relatively high teacher salaries tended to have higher earnings than did men educated in states with relatively larger classes and relatively lower paid teachers. Murnane (1991) suggests that Card and Krueger's findings might lead to the conclusion that small classes and high teacher salaries (both of which lead to higher per pupil expenditures) may have a greater effect on future earnings than on standardized test scores. Murnane points out that this research can also be challenged, that other factors may underlie the correlation between small classes, highly paid teachers, and earnings, and expresses concern that many studies do not consider how expenditure levels affect the behavior of teachers and students. While more study will be required to answer this question definitively, it is clear that the long-term impact of educational spending decisions as measured by labor market outcomes cannot be ignored any more than can the short-term results provided by standardized tests.

The question that remains is whether or not a systematic link between resources and student outcomes, no matter how defined, can be found. The next section of this paper summarizes the research that has been done in this area.

Production Function Analyses: What Have We Learned So Far?

The analysis presented above leaves some doubt as to whether or not money really matters in improving education. This was, and continues to be, a matter of considerable debate. Most of the research on this issue focuses on production function analyses that attempt to relate educational inputs to schooling outcomes. This section of the paper provides a brief description of production functions and discusses their use in educational research. This is followed by an analysis of the current debate over what existing research tells us about the effect of resources on educational attainment.

Production Functions: Their Use in Education

A production function is a model that identifies the possible outcomes that can be achieved with a given combination of inputs. With knowledge of the quantities of inputs available, it is possible to calculate the maximum output that can be achieved. What is important to this process is how the inputs are translated into those outcomes, and finding the most efficient way of doing so. The difficulty with identifying production functions in education results from the complexity of the schooling process and the number of inputs that can impact the outcome. In addition, it is often difficult to reach agreement as to the desired outcomes of the educational system. Moreover, many of the factors that appear to have an impact on the educational production process may well be outside of the control of educators.

Production functions are estimated through statistical or econometric techniques that rely on regression methods to measure the relationship between a mix of inputs and some identified output. Among the most common outcomes used in studies of educational production functions are the results of standardized tests, graduation rates, dropout rates, and as discussed above, labor market outcomes. The inputs most often considered include per pupil expenditures, pupil/teacher ratios, teacher education, experience and salary, school facilities, and administrative inputs. Unfortunately, the results of studies that have attempted to measure the effects of these inputs often conflict or show inconclusive results. Others, beginning with the well known Coleman Report (Coleman et al. 1966), have shown that factors such as students' socio-economic status may be more important in determining how well they do in school than are many, if not all of the inputs listed above.
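In practice, these regressions are typically estimated by ordinary least squares. The sketch below is illustrative only: the data are synthetic, and the coefficients, sample size, and variable names are assumptions for demonstration, not results from any study cited here. It regresses a test score on per pupil spending, the pupil/teacher ratio, and a socio-economic status index, mirroring the pattern the Coleman Report describes in which SES dominates.

```python
import numpy as np

# Synthetic data for an illustrative education production function.
rng = np.random.default_rng(0)
n = 500  # hypothetical sample of 500 students

spending = rng.normal(5000, 1000, n)  # per pupil expenditure ($)
ptr = rng.normal(20, 3, n)            # pupil/teacher ratio
ses = rng.normal(0, 1, n)             # socio-economic status index

# Assumed "true" process: SES matters most, spending has a small effect.
score = 500 + 0.005 * spending - 0.5 * ptr + 30 * ses + rng.normal(0, 20, n)

# Ordinary least squares: regress the outcome on the inputs.
X = np.column_stack([np.ones(n), spending, ptr, ses])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(dict(zip(["intercept", "spending", "ptr", "ses"], beta.round(3))))
```

Even in this clean setting, the estimated spending coefficient is small relative to SES; with real data, the added problems of disputed outcome measures and uncontrolled family inputs make the estimates far less stable.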

Does Money Matter?: The Current Debate

While interest in the question of whether or not money matters has always been high, the publication of an article by Hedges, Laine, and Greenwald (1994) in the April 1994 Educational Researcher has sparked a renewed debate over this issue. Prior to the publication of their article, the most often cited research in this field was the work of Eric Hanushek (1981, 1986, and 1989). In an analysis of data from 38 different articles and books containing 187 different regression equations, Hanushek focused on the effect of seven inputs to schooling. For each input, Hanushek sorted the regression coefficients into categories based on whether they were positive or negative, and whether the measured effect was statistically significant. He included a fifth category for those coefficients that were not statistically significant and for which the sign of the coefficient could not be determined. His findings for each of the seven inputs are summarized below.

  • Teacher/pupil ratio. Hanushek found a total of 152 studies that considered the teacher/pupil ratio. While it is generally accepted that smaller classes lead to higher student achievement, of the 27 studies that had statistically significant findings, only 14 found that reducing the number of pupils per teacher was positively correlated to student outcomes, while 13 found the opposite effect. Moreover, of the 125 that did not have statistically significant results, 34 found a positive effect, 46 a negative effect, and the sign or direction of the effect could not be determined for the remaining 45 studies.

Interestingly, despite the lack of statistically significant findings that lower pupil/teacher ratios lead to improved school performance, there has been a dramatic effort to reduce class size across the nation in the last 20 years. A cursory review of the most recent edition of the Digest of Education Statistics (NCES 1994) shows that the average pupil/teacher ratio for K-12 public schools in the United States was 17.6:1 in 1994. Moreover, the data provided in the Digest suggest that this ratio has declined consistently since 1955 when it stood at 26.9 pupils per teacher (NCES, 1994, p. 74). In fact, except for an increase of 0.1 pupils per teacher between 1961 and 1962 and again between 1980 and 1981, and some minor increases in the 1990s from the low of 17.2:1 in 1989 and 1990, the average pupil/teacher ratio across the United States has declined in every year since 1955.

Picus (1993a) found that district level pupil/teacher ratios declined as expenditures per pupil and expenditures per pupil for instruction increased. However, as the percent of expenditures devoted to instruction increased, a similar pattern did not emerge. Since expenditure data were not available at the school level in the national data bases, Picus (1993b) compared school level pupil/teacher ratios with district per pupil expenditures. He found that at the elementary, intermediate, and secondary school levels, there is a trend toward lower pupil/teacher ratios as expenditures increase.

  • Teacher education. In looking at teacher educational attainment, Hanushek found results similar to his findings for the pupil/teacher ratio. A total of 113 studies considered teacher education. Hanushek found that 8 of the 13 studies with statistically significant coefficients showed a positive effect of teacher education on student performance. The statistically insignificant studies were about evenly divided among positive, negative, and indeterminate findings.
  • Teacher experience. Hanushek found a more consistently positive correlation between teacher experience and student performance. However, he indicated that these results only appeared strong in relation to the other findings, and that they could be the result of more experienced teachers being able to select teaching assignments with "good students" (Hanushek 1993).
  • Teacher salary. Eleven of the 15 studies Hanushek identified with statistically significant results identified a positive relationship between teacher salaries and student performance. However, he argued that even this was not particularly strong evidence of a relationship given that teacher salaries are largely determined by teacher education and experience, and the underlying components of teacher salary were unrelated to student performance.
  • Per pupil expenditure. It is also not surprising that Hanushek found per pupil expenditures did not play an important role in determining student performance. Specifically, he found that 13 of 16 studies with statistically significant results show a positive relationship. However, he discounted the importance of this, arguing that 8 of the 13 came from one study which he felt did not measure family inputs precisely. As a result, Hanushek (1989) contends that school expenditures may have served as a proxy for family background in those studies.
  • Administrative inputs. Monk (1989) points out that some amount of administration is essential to the operation of the educational system. Nonetheless, Hanushek (1989) concluded that administrative inputs did not have a systematic relationship to student performance, even though seven of the eight studies with statistically significant findings showed a positive effect. Hanushek argued that variations in how administrative inputs are measured undermined the findings.
  • Facilities. Adams (1994), Firestone et al. (1994), and Picus (1994b) show that when poor school districts in Kentucky, New Jersey, and Texas received substantial increases in funding as a result of school finance reforms in those states, a frequent response was to devote a substantial portion of those funds to facility improvements. This would imply that many educators believe the quality of school facilities is important to student learning. Yet Hanushek's (1989) analysis of facilities showed little relationship between the quality of school facilities and the performance of students.

After reviewing all 187 studies, Hanushek concluded that "There is no strong or systematic relationship between school expenditures and student performance." (Hanushek 1989, 47). These words have been cited often by those opposed to providing additional funds to the public schools. Opponents of increased funding argue that until educators can show more money will make a difference, additional funds should not be provided.
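Hanushek's vote-counting procedure amounts to binning each coefficient estimate by sign and significance and tallying the bins. A minimal sketch, using the teacher/pupil ratio counts reported above (the category labels are ours):

```python
from collections import Counter

# Hanushek-style vote count: each estimate falls into one of five bins.
# Counts are his teacher/pupil ratio tallies as reported in the text.
tallies = Counter({
    "significant positive": 14,
    "significant negative": 13,
    "insignificant positive": 34,
    "insignificant negative": 46,
    "insignificant, sign unknown": 45,
})

total = sum(tallies.values())
sig_pos = tallies["significant positive"]
sig = sig_pos + tallies["significant negative"]
print(f"{total} estimates, {sig} significant ({sig_pos} positive)")
# → 152 estimates, 27 significant (14 positive)
```

The appeal of the method is its simplicity: no re-estimation is needed, only a classification of published coefficients. Its statistical weakness, discussed below, is that tallying significance counts discards the magnitude of each estimate.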

A recent review of these studies by Hedges, Laine, and Greenwald (1994) questions Hanushek's findings and suggests that money may in fact be important in determining how well students are likely to do. Hedges, Laine, and Greenwald reviewed the same studies Hanushek considered in his analysis. They eliminated those studies with statistically insignificant results for which the sign of the coefficient could not be determined. They then analyzed the remaining studies using statistical techniques other than the vote counting procedure Hanushek employed. They concluded that

These analyses are persuasive in showing that, with the possible exception of facilities, there is evidence of statistically reliable relations between educational resource inputs and school outcomes, and that there is much more evidence of positive relations than of negative relations between resource inputs and outcomes. (Hedges, Laine, and Greenwald 1994, 11)

While Hedges, Laine, and Greenwald (1994) found that expenditures do matter, they found less evidence of a relationship between the other factors identified above and student performance. They suggest that the specific allocation of those resources may not be important to improving student performance in all situations. Further, they argue that local authorities should be given the discretion to spend funds as they think will best help the students for whom they are responsible.

Hedges, Laine, and Greenwald (1994) point out that if, for example, per pupil expenditures and student achievement were unrelated, roughly half the studies would have positive coefficients and half negative coefficients. Moreover, they argue that if there were no systematic relationship, only about 5 percent of the studies would have statistically significant results. Among the studies where the direction of the coefficient could be determined, however, a substantially higher percentage of the coefficients were positive, and the three authors argue this happened more often than would be expected by chance alone.
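The sign-test logic can be made concrete: if signs really were coin flips, the chance of seeing 13 or more positive coefficients among 16 statistically significant estimates (the per pupil expenditure counts reported earlier for Hanushek's review; the choice of this category is ours, for illustration) is small.

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k positive
    coefficients if each sign were really a fair coin flip."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Per pupil expenditure: 13 of 16 significant estimates were positive.
p = binom_tail(13, 16)
print(f"P(13+ of 16 positive under the null) = {p:.4f}")  # → 0.0106
```

A tail probability of about one percent is well below the conventional 5 percent threshold, which is the flavor of evidence Hedges, Laine, and Greenwald marshal against the no-relationship null.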

Hedges, Laine, and Greenwald also criticize the vote counting method used by Hanushek. They argue that as a procedure it has limited power to detect significant effects, and that earlier work by Hedges and Olkin (1980) shows that as the number of studies reviewed increases, the probability that a vote count will correctly detect an effect decreases. Relying on a variety of analysis techniques, Hedges, Laine, and Greenwald (1994) re-analyzed Hanushek's study sample, and concluded that expenditures do have an impact on student achievement.

In a rejoinder, Hanushek (1994a) argues that the evidence still strongly indicates that there is no systematic relationship between how much money is spent and how well students perform. He asks, if money matters, but class size and teacher characteristics are not as important, what factors do in fact matter? Since teachers' salaries and class size are the two largest determinants of spending, he suggests that if they are unimportant, other components of spending must be more effective in improving student performance. He suggests few would agree with the proposition that administration, another major component of school expenditures, is responsible for improving student performance.

In another study, Hanushek (1993) looked at the impact of spending on student performance in Alabama as measured by the percent of students passing standardized reading, mathematics, and language tests in the third, sixth, and ninth grades. Despite the fact that his analysis did not yield statistically significant results, he concluded that if his estimates were used and spending in each school district were increased to the level of the highest spending district in the state (some $5,113 per pupil, at a cost of $1.05 billion above the current spending of $2.4 billion), student performance would only be expected to improve by approximately 4 percent, at most. In one instance, grade 6 language performance, Hanushek actually predicted that the increased spending would reduce student performance by 0.2 percent.
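The cost-effectiveness implied by these Alabama figures can be restated directly (a back-of-the-envelope restatement of numbers in the text, not part of Hanushek's analysis):

```python
# Figures from Hanushek's (1993) Alabama simulation, as cited in the text.
current_spending = 2.40e9  # current statewide spending ($)
added_spending = 1.05e9    # cost of leveling up to $5,113 per pupil ($)
max_gain_pct = 4.0         # predicted performance gain, percent (at most)

increase_pct = added_spending / current_spending * 100
print(f"Spending rises {increase_pct:.0f}% for at most a "
      f"{max_gain_pct:.0f}% predicted gain")
# → Spending rises 44% for at most a 4% predicted gain
```

A roughly 44 percent spending increase for a predicted gain of at most 4 percent is the asymmetry Hanushek uses to argue against across-the-board spending increases.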

These are not the only studies that have considered this question. A study by Ferguson (1991) looked at spending and the use of educational resources in Texas. He concluded that "hiring teachers with stronger literacy skills, hiring more teachers (when students-per-teacher exceed 18), retaining experienced teachers, and attracting more teachers with advanced training are all measures that produce higher test scores in exchange for more money" (485). His findings also suggest that teachers' selection of districts in which they want to teach is affected by the education level of the adults in the community, the racial composition of that community, and the salaries available in other districts and alternative occupations. This implies, according to Ferguson, that better teachers will tend to move to districts with higher socioeconomic characteristics if salaries are equal. If teacher skills and knowledge have an impact on student achievement (and Ferguson, among others, suggests they do), then low socioeconomic areas may have to offer substantially higher salaries to attract and retain high quality instructors. This would help confirm a link between expenditures and student achievement.

One of the problems with all of these studies is that they do not take into consideration the tremendous similarity with which school districts spend the resources available to them. As described above, research by Picus (1993a, 1993b, and 1994a) and Cooper (1993, 1994) has shown resource allocation patterns across school districts to be remarkably similar, despite differences in total per pupil spending, student characteristics, and district attributes. This does not mean that all children receive the same level of educational services. As Picus (1994a) points out, a district spending $10,000 per pupil overall and $6,000 per pupil on direct instruction is able to offer smaller classes, better-paid (and presumably higher quality) teachers, and higher quality instructional materials than is a district spending $5,000 per pupil overall and only $3,000 per pupil on direct instruction.
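The figures in Picus's example make the point concrete: the two districts differ twofold in absolute spending, yet devote an identical share of their budgets to direct instruction, which is exactly the pattern similarity the allocation studies describe:

```python
# Hypothetical districts from Picus's (1994a) example: per pupil totals
# and per pupil direct-instruction spending.
districts = {
    "higher-spending": {"total": 10_000, "instruction": 6_000},
    "lower-spending":  {"total": 5_000,  "instruction": 3_000},
}

# Despite a 2:1 gap in absolute dollars, both districts allocate the
# same 60 percent of spending to direct instruction.
for name, d in districts.items():
    share = d["instruction"] / d["total"]
    print(f"{name}: {share:.0%} of per pupil spending goes to instruction")
```

The level of service differs with the dollars, but the allocation pattern, the share of each dollar going to instruction, is the same.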

What we don't know is what the impact on student performance would be if schools or school districts were to dramatically change the way they spend the resources available to them. In 1992, Odden and Picus suggested that the important message from the research summarized above was that "if additional education revenues are spent in the same way as current education revenues, student performance increases are unlikely to emerge" (Odden and Picus 1992, 281). Therefore, knowing whether or not high performing schools utilize resources differently than other schools would be very helpful in resolving the debate over whether or not money matters.

Picus and Nakib (forthcoming) looked at the allocation of educational resources by high performing high schools in Florida and compared those allocation patterns with the way resources were used in the remaining high schools in that state. A total of seven different measures were used to compare student performance. In preliminary findings, Picus and Nakib show that neither total per pupil spending nor per pupil spending for instruction was statistically significantly higher in high performing high schools, largely because of the highly equalized school funding formula used in Florida. On the other hand, they found the percent of expenditures devoted to instruction was lower in the high performing high schools, implying that high performing high schools may actually spend more money on resources not directly linked to instruction than do other high schools.

Unfortunately, the results of this Florida analysis do little to clarify the debate on whether or not money matters. Comparisons of high performing high schools with all other high schools in Florida did not show a clear distinction in either the amount of money available or in the way resources are used. As with many other studies, it was student demographic characteristics that had the greatest impact on student performance.

Conclusion

This paper has shown that despite considerable research on the matter, there is still a great deal of debate as to whether or not money makes a difference in education. Even though most educators agree that higher spending provides better opportunities for learning, and seemingly higher student achievement, statistical confirmation of that belief has been hard to develop. It is clear that over the past 30 to 35 years, there have been dramatic increases in real per pupil revenues for K-12 public education. Despite the substantial cumulative increase, however, growth has averaged only 2-3 percent a year. Consequently, educators and community school boards have had little opportunity to consider how they would use large increases in funding. As a result, educational resource allocation patterns are remarkably similar regardless of spending level.

A careful look at the research on the impact of money on student achievement shows that we may be asking the wrong question. Rather than consider whether or not additional resources will improve student performance, it seems more important to ask how additional resources could be directed to improve student learning, or, in Hanushek's (1994b) view, how to spend those resources more efficiently.

Although educational revenues were flat in the early 1990s, as the nation's economy recovers from the recession we are beginning to see increases in educational spending again. However, the mood of the public, while still generous toward education, has changed: taxpayers now seem to insist that schools show dramatic improvements in exchange for continued support. To meet this responsibility to both taxpayers and school children, more research is needed to determine whether there are substantial differences in the way high performing schools utilize their resources compared with other schools.

What seems evident today is that although aggregate analyses of school resource allocation patterns lead to the conclusion that schools look remarkably alike, the needs of the students within those schools are vastly different. The needs of poor, and often limited-English-speaking, students in our inner cities are considerably different from those of middle and upper class children in well-to-do suburbs across the nation. Therefore, if our schools are to succeed in the future, it is important to provide local educators with the resources and tools they need to meet the specific needs of the children they serve, and at the same time to allow them to design programs that are specifically targeted to those children.

If we can move away from measuring school accountability through the way funds are used, and instead measure accountability in terms of student outcomes, the answer to the question posed in this paper will become unimportant. It won't be whether or not money matters, but how that money is used that matters.

REFERENCES

Adams, J.E. 1994. "Spending School Reform Dollars in Kentucky: Familiar Patterns and New Programs, But Is This Reform?" Educational Evaluation and Policy Analysis 16(4): 375-390.

Barro, S.M. 1992. "What Does the Education Dollar Buy?: Relationships of Staffing, Staff Characteristics, and Staff Salaries to State Per-Pupil Spending." Los Angeles, CA: The Finance Center of CPRE, Working Paper.

Bracey, G.W. 1994. "The Fourth Bracey Report on the Condition of Public Education." Phi Delta Kappan 76(2): 114-127.

Bracey, G.W. 1995. "The Right's Data-Proof Ideologues." Education Week XIV(18): 48, 37.

Card, D., and A.B. Krueger. 1990. "Does School Quality Matter?: Returns to Education and the Characteristics of Public Schools in the United States." National Bureau of Economic Research, Working Paper No. 3358.

Clune, W. H. 1994. "The Shift from Equity to Adequacy in School Finance." Educational Policy 8(4): 376-395.

Coleman, J.S., E.Q. Campbell, C.J. Hobson, J. McPartland, A.M. Mood, F.D. Weinfield, and R.L. York. 1966. Equality of Educational Opportunity. Washington, DC: U.S. Department of Health, Education and Welfare.

Cooper, B. 1993. "School-Site Cost Allocations: Testing a Micro-Financial Model in 23 Districts in Ten States." Paper prepared for the Annual Meeting of the American Education Finance Association in March, 1993.

Cooper, B.S. 1994. "Making Money Matter in Education: A Micro-Financial Model for Determining School-Level Allocations, Efficiency, and Productivity." Journal of Education Finance 20(1): 66-87.

Ferguson, R.F. 1991. "Paying for Public Education: New Evidence on How and Why Money Matters." Harvard Journal on Legislation 28: 458-498.

Firestone, W.A., M.E. Goertz, B. Nagle, and M.F. Smelkinson. 1994. "Where Did the $800 Million Go? The First Years of New Jersey's Quality Education Act." Educational Evaluation and Policy Analysis 16(4): 359-374.

Glass, G.V., and M.L. Smith. 1979. "Meta-Analysis of Research on Class Size and Achievement." Educational Evaluation and Policy Analysis 1(1): 2-16.

Greenwald, R., L.V. Hedges, and R.D. Laine. 1994. "When Reinventing the Wheel is Not Necessary: A Case Study in the Use of Meta-Analysis in Education Finance." Journal of Education Finance 20(1): 1-20.

Hanushek, E.A. 1981. "Throwing Money at Schools." Journal of Policy Analysis and Management 1: 19-41.

Hanushek, E.A. 1986. "The Economics of Schooling: Production and Efficiency in Public Schools." Journal of Economic Literature 24: 1141-1177.

Hanushek, E.A. 1989. "The Impact of Differential Expenditures on School Performance." Educational Researcher 18(4): 45-65.

Hanushek, E.A. 1991. "When School Finance 'Reform' May Not be a Good Policy." Harvard Journal on Legislation 28: 423-456.

Hanushek, E.A. 1993. "Can Equity be Separated from Efficiency in School Finance Debates?" In Essays on the Economics of Education. Ed. E.P. Hoffman. Kalamazoo, MI: W.E. Upjohn Institute for Employment Research.

Hanushek, E.A. 1994a. "Money Might Matter Somewhere: A Response to Hedges, Laine, and Greenwald." Educational Researcher 23(4): 5-8.

Hanushek, E.A. 1994b. Making Schools Work: Improving Performance and Controlling Costs. Washington, DC: The Brookings Institution.

Hanushek, E.A., and S.G. Rivkin. 1994. "Understanding the 20th Century Explosion in U.S. School Costs." Rochester, NY: Rochester Center for Economic Research, Working Paper No. 388.

Hedges, L.V., R.D. Laine, and R. Greenwald. 1994. "Does Money Matter? A Meta-Analysis of Studies of the Effects of Differential School Inputs on Student Outcomes." Educational Researcher 23(3): 5-14.

Hedges, L.V., and I. Olkin. 1980. "Vote Counting Methods in Research Synthesis." Psychological Bulletin 88: 359-369.

Laine, R.D., R. Greenwald, and L.V. Hedges. 1994. "The Use of Global Education Indicators: Lessons Learned From Education Production Functions, Lesson #1: Money Matters." Paper presented at the annual meeting of the American Education Research Association on April 4, 1994 in New Orleans, LA.

Marsh, D.D., and J.M. Sevilla. 1992. "Financing Middle School Reform: Linking Costs and Education Goals." In Rethinking School Finance: An Agenda for the 1990s. Ed. A.R. Odden. San Francisco, CA: Jossey-Bass. pp. 97-127.

Monk, D.H. 1989. "The Education Production Function: Its Evolving Role in Policy Analysis." Educational Evaluation and Policy Analysis 11(1): 31-45.

Monk, D. 1992. "Educational Productivity Research: An Update and Assessment of Its Role in Education Finance Reform." Educational Evaluation and Policy Analysis 14(4): 307-332.

Mullis, I.V.S., J.A. Dossey, J.R. Campbell, C.A. Gentile, C. O'Sullivan, and A.S. Latham. 1994. Report in Brief: NAEP 1992 Trends in Academic Progress Achievement of U.S. Students in Science, 1969 to 1992, Mathematics, 1973 to 1992, Reading, 1971 to 1992, Writing, 1984 to 1992. Washington, DC: U.S. Government Printing Office. Report Number 23-TR01.

Murnane, R.J. 1991. "Interpreting the Evidence on 'Does Money Matter?'" Harvard Journal on Legislation 28: 457-464.

Nakib, Yasser. 1995. Resource Allocation and Student Achievement at the School Level: What Seems to Matter? Los Angeles, CA: Center for Research in Education Finance.

Odden, A. 1990. "Class Size and Student Achievement: Research Based Policy Alternatives." Educational Evaluation and Policy Analysis 12(2): 213-227.

U.S. Department of Education. 1994. Digest of Education Statistics, 1994. NCES 94-115. Washington, DC.

Odden, A. 1994. "The Local Impact of School Finance Reform in Kentucky, New Jersey and Texas." Educational Evaluation and Policy Analysis 16(4).

Odden, A.R., and L.O. Picus. 1992. School Finance: A Policy Perspective. New York, NY: McGraw-Hill.

Picus, L.O. 1993a. "The Allocation and Use of Educational Resources: District Level Evidence from the Schools and Staffing Survey." Los Angeles, CA: The Finance Center of CPRE, Working Paper Number 34.

Picus, L.O. 1993b. "The Allocation and Use of Educational Resources: School Level Evidence from the Schools and Staffing Survey." Los Angeles, CA: The Finance Center of CPRE, Working Paper Number 37.

Picus, L.O. 1994a. "The $300 Billion Question: How Do Public Elementary and Secondary Schools Spend Their Money?" Paper presented at the 1994 Annual Meeting of the American Education Research Association. New Orleans, LA.

Picus, L.O. 1994b. "The Local Impact of School Finance Reform in Four Texas School Districts." Educational Evaluation and Policy Analysis 16(4): 391-404.

Picus, L.O. and Y. Nakib. Forthcoming. "Resource Allocation Patterns in High Performing Florida Schools." Los Angeles, CA: Center for Research in Education Finance.

Stevenson, H.W., and J.W. Stigler. 1992. The Learning Gap. New York, NY: Simon and Schuster.

Word, E., J. Johnston, H.P. Bain, D. Fulton, J.B. Zaharias, M.N. Lintz, C.M. Achilles, J. Folger, and C. Breda. 1990. Student/Teacher Achievement Ratio (STAR), Tennessee's K-3 Class Size Study: Final Summary Report, 1985-1990. Nashville, TN: Tennessee State Department of Education.
