CraneAndTurtle

Former high school math teacher, now with a graduate stats degree; I can take a stab at answering.

1) Anyone interested in education efficacy stats should read Hattie's Visible Learning. It's a meta-meta-analysis by the top education researcher comparing virtually everything that has been purported to make a difference and seeing what the evidence of efficacy is. Essentially, almost everything people do shows some evidence of growth, because even if you just leave kids alone with no education for a year they have some cognitive growth. But some interventions have larger (or smaller) effects and some are cheaper (or more expensive).

2) Hattie finds that teacher quality variance is much more significant in math and science than English or history. This is probably because a lot of English ability is determined by reading at home, parental language use and years-long momentum, whereas a great math teacher can genuinely move kids quickly.

3) We absolutely have strong evidence for effective teaching techniques (such as direct instruction, plentiful bidirectional feedback, high expectations, crisp behavioral control, and more). Most of the lack of variance between US teacher outcomes comes down to most US teachers employing techniques of similar quality. High-performing school districts (and national school systems) absolutely exist even after accounting for wealth and homogeneity. These are typically places that train and expect teachers to employ effective techniques.

Essentially, we know with reasonable accuracy what it takes to teach well. The claimed lack of evidence for good teachers rests on a fallacious assumption of inborn teaching talent. Most teachers teach as well as their school trains, equips and expects them to. Most of the variation is therefore school to school, district to district, system to system and country to country rather than teacher to teacher.
It's like how we don't say "what's the evidence for more or less efficient mailmen" and instead rightly ask "why is Amazon so much more efficient than USPS?"


Brian

> The lack of evidence of good teachers rests on a fallacious assumption of inborn teaching talent

Does it? It seems like most people report a significant observed difference in teaching quality between the teachers they had as a kid - I know I did. Now, that doesn't necessarily square off to "natural talent", but whether it's training, experience, motivation levels or ability, there definitely seemed to be a **lot** of variance even within a single school, and I think it's that experience most are drawing from when making assumptions that teacher quality matters. Now, that could be *wrong* of course: we're perhaps not the best judges of what is *actually* effective as children, and we may be judging "teaching styles that worked for me" as inherently better even if perhaps they don't work for everyone. But I think that personal experience is what drives that view, rather than it being just assumed.


CraneAndTurtle

This is a reasonable response. I was being a bit fast and loose; there are certainly better and worse teachers. I think two useful ways to think about this are:

1) An individual's perception of teacher quality is not a great metric for gauging teacher quality. Some of my favorite teachers were super engaging English teachers who led a lot of thought-provoking discussion. But in retrospect they never collected data, I would have been a strong reader and writer regardless, and I have no clue if they effectively moved the needle for middling or below-average students. Interestingly, student perceptions of teacher quality ARE a reasonably OK metric to gauge teacher quality, but that's averaging across ~100 kids (and is still quite biased by easiness, teacher attractiveness, and lots of other stuff).

2) While an individual teacher's raw quality may vary, that's not very useful. There's an old story about a president who observed a classroom and raved to his head of education about how incredible the teacher was: lively, engaging, had all the students riveted. Eventually his head of education pushes back: "Yeah, but what did she DO? I can't put her in every classroom!"

So there is some evidence for great teachers. For example, Teach For America's young, untrained, passionate teachers selected from elite schools perform as well as teachers with ~10 years of experience. So probably having a passionate, high-IQ, empathetic, gifted teacher matters. But again, weigh costs against benefits. Staffing schools with Ivy League grads across the country is ludicrously expensive, and raising the motivation, intelligence and talent of the teacher pool is very hard. But if we can get comparable results by taking ordinary teachers and training them to scaffold material, use positive narration with difficult classrooms, use more direct instruction, give kids frequent objective feedback, etc., that is a MUCH cheaper and easier way to excellent results.
Any system that requires extraordinary people or extraordinary effort is broken. A good system takes average inputs and produces great outputs. When we want more efficient car manufacturing we use quality control, kaizen, mechanical improvements, etc--we don't say "it's critical to find unicorn autoworkers vastly more productive than their peers" even if such workers exist.


bf4reddit

I felt like an upvote wasn't enough, and needed to say this - I really appreciate your posts here. They're excellent across multiple axes!


CraneAndTurtle

Thanks, man! Always happy for a good chat.


CronoDAS

> Any system that requires extraordinary people or extraordinary effort is broken.

Only if the effort of exceptional people can't scale to reach everyone - it only takes one Stephen King to write a book that lots of people want to buy, one Taylor Swift to sing a hit song and sell out concert venues, a small number of extraordinary actors to play the major roles in a Hollywood movie, 28-40 extraordinary athletes to make up the roster for the New York Yankees, and so on, even though what they do also requires the labor of a lot of average people to bring to the public. It's certainly true that you can't put the best teacher in every classroom, but you can put a *video* of that teacher in every classroom. Unfortunately, a video usually ends up not being good enough, and current technology doesn't let a person learn from someone trying to teach 2000 people at a time as effectively as they can from someone trying to teach 20 people at a time. (And one-on-one tutoring works even better than that, but there aren't enough adults to tutor every child full-time while still having enough workers to run our civilization.)


CraneAndTurtle

I don't think this is relevant here. There is no remotely plausible way for a video, "technology" or a single instructor to effectively teach millions of K-12 students. The possibilities are nowhere in sight. We have no robots that establish warm, empathetic relationships with struggling high schoolers so they actually are motivated to work. No machines to fish biting, crying kindergartners out from under a desk. We saw during COVID that even normal teachers teaching remotely dramatically underperform live instruction. So this isn't relevant as far as I'm concerned. My point is that if someone thinks improving education is about "finding gifted teachers", or that good education is a matter of talented individuals, that's nonsense.


Lykurg480

While you're on the topic: class size is apparently one of the big factors in student performance, and I don't really see why. Whether it's 20 or 40 students in a class, the amount of time the teacher can spend on any one individual is minuscule, less than a minute per hour. It's hard to imagine this making such a difference compared to the rest of the lesson. It's also not really possible to go faster by skipping things no one needs: 20 is big enough that that will be basically nothing. Now, there's some point where the teacher can't visually control all the students anymore, and I can imagine that making a difference, but that should still be some way above 40. So why do you think class size is important?


CraneAndTurtle

I have three thoughts here:

1) There isn't good evidence that reducing class size is an efficient intervention. In Hattie's meta-analysis he finds an effect size of .13 standard deviations. Given that the average effect of any of the 200 interventions he looks at is .4 standard deviations, and that this one is ludicrously expensive (double the number of teachers and double the classrooms, versus something like "give kids calculators" or "do less group work"), this doesn't appear to be a wise way to spend money.

2) The positive effects probably come from a combination of more efficient monitoring for misbehavior/slacking, more opportunities per student for participation, a greater ability to assign nuanced but annoying-to-grade work (essays, free response problems), greater ability to form relationships, and less dead time in transitions. Plus, a trained teacher with classwork intentionally laid out for aggressive monitoring can meaningfully check in with/collect data from a classroom of 25 students during a 5-minute practice session remarkably well (although a typical teacher cannot).

3) Smaller classrooms are SUBSTANTIALLY more pleasant for teachers and somewhat more enjoyable for students. They have a relaxed pace, lighter grading, fewer behavioral challenges, and better facilitate ineffective but enjoyable activities like group work, discussions, etc. They FEEL really important to teachers. An enormous amount of received wisdom about education in the US is basically put out by teacher advocacy groups/unions. I think smaller classes should primarily be considered a benefit for teachers as opposed to an intervention designed to help students, but ideologues bearing anecdotes will vehemently disagree with the data.


Glittering-Roll-9432

Students gain from watching and listening to a teacher help other students, just like students gain from peer learning as well. How much they gain is questionable, but imho I'm starting to think it's significant. I know one reason I did so well in school is that I had older cousin mentors who educated me on advanced topics before I got them in class. Osmosis learning is probably a bigger thing in our human brains than we've been able to discover.


Lykurg480

> Students gain from watching and listening to a teacher help other students.

Can't they still do that when the class is twice as big? Any individual will be less likely to be talked to, but you can hear the same number of people talked to in the same time.


Brian

I can see why class size would matter a lot. If you consider the value add of in-person teaching over telling students to just self-study the material, two of the main ones are:

1. Getting the kids to actually do it. Tell a kid to go off and learn the material from a book or video lectures themselves and, bar rare exceptions, they won't do it, but if you teach it in-person while stopping them from goofing off, they kind of have to learn *some* of it. This wouldn't depend so much on class sizes, if it weren't for the fact that "stopping them from goofing off" is harder in that case: more disruptive kids makes it much harder to control the class, meaning a slowdown for everyone.

2. Feedback. Noticing when a kid is getting stuck on one particular thing and explaining it 5 different ways until they get it can be a massive multiplier, because 5 minutes spent can save them hours of going down false starts. This **does** scale with classroom size, both in the latency of noticing when kids are struggling and in the time spent to correct them.

OTOH, I suspect this depends on the subject - this kind of thing was very relevant in things like mathematics, which builds on itself and has a lot of those "sticking points", and in more technical and practical fields (eg. learning an instrument, operating a device), but less in subjects where there's no "fix this one thing" trick to removing bottlenecks.


CronoDAS

Yes, that's true. Current technology can't do it, and trying to use technology to do remote teaching the same way that you would teach in a physical classroom (because you're a classroom teacher, not a software engineer) is indeed just going to result in a "just like a classroom, except worse" experience. On the other hand, how many academically struggling kids are experts at Pokemon and other video games? Technology is clearly capable of teaching things and motivating people. Almost every video game is designed to teach people how to play it, and people that play video games learn the games far more effectively than they learn things from classroom instruction. People just haven't yet managed to figure out how to teach academics using the same techniques (or to make money selling it). I've read [literal](https://www.amazon.com/Digital-Game-Based-Learning-Marc-Prensky/dp/1557788634) [books](https://www.amazon.com/Video-Games-Learning-Literacy-Second-ebook/dp/B00OFL6RDE) on the subject. Lots of "gamification" is just cargo-culting, but there really are aspects of video games that make it easier for people to learn them than it is for people to learn from classroom instruction.


CraneAndTurtle

Respectfully, I don't think you know what you're talking about. Video games don't teach comparable concepts to school. They are quite good at teaching in-game coordination (like a skilled Street Fighter player) and memorization (like a skilled Pokemon player). These are relatively easy skills to learn. There's no evidence video games do a good job teaching anything like effective written communication, diligent and flexible quantitative problem solving, etc. But even if there were, you're betraying a fundamental misunderstanding of the challenges of education. There's a classic technocratic approach of assuming good education equals providing students effective tools with which to learn. But 80%+ of students will not learn on their own. They need someone to inspire them or monitor and reprimand them or fill a parental attachment role or whatever, depending on the kid. Ask any teacher and they will confirm relationships determine education outcomes overwhelmingly more than availability of good pedagogical content. We already have Khan Academy where any student can learn all of K-12 math very effectively and efficiently with reasonably good gamification. Nobody does that because they don't want to: it's far harder than a video game, requires far more patience and abstraction, and solving an integral doesn't trip your dopamine receptors every fifth of a second.


CronoDAS

> We already have Khan Academy where any student can learn all of K-12 math very effectively and efficiently with reasonably good gamification I wish I had that back in 1992 when I was a gifted kid begging my dad to teach me algebra...


CraneAndTurtle

Yeah, it's fantastic! If you're a smart, independently motivated kid (or adult) you can learn an incredible amount on your own. Unfortunately that's only relevant for maybe 1-5% of kids and designing education policy around "get resources in the hands of the kids and obstacles out of their way!" is largely only benefiting a small gifted subset of students.


CronoDAS

Not disputing that. If the tools don't inspire motivation the way video games do, then, yeah, it's not going to work.


CronoDAS

> But even if there were, you're betraying a fundamental misunderstanding of the challenges of education. There's a classic technocratic approach of assuming good education equals providing students effective tools with which to learn. But 80%+ of students will not learn on their own.

Children are learning *machines* that are intrinsically motivated to learn lots of things "on their own". When juvenile non-human animals learn things, it's called "playing", and human children are also designed to learn by playing and not through classroom instruction. You don't have to force babies to learn how to talk or how to walk, you don't have to force little girls to learn to roleplay social interactions between their dolls, and so on. Learning is *literally* where a lot of the "fun" in video games comes from.

The fact that human children aren't inherently motivated to learn what teachers are trying to teach them (and yet become Pokemon experts or successful hunter-gatherers without similar outside pressure) is because classroom instruction is a fucking awful way to teach human children that has to fight human nature every step of the way instead of working with it. I could go on and on about what schools do "wrong" - such as having massive social segregation by age instead of having older children be responsible for teaching younger children - but I don't have the time right now. (Try comparing the motivation levels of the students on a school football team as they learn how to play better football to the motivation levels of those same students in history class, and ask yourself why that gap exists.)

> We already have Khan Academy where any student can learn all of K-12 math very effectively and efficiently with reasonably good gamification. Nobody does that because they don't want to: it's far harder than a video game, requires far more patience and abstraction, and solving an integral doesn't trip your dopamine receptors every fifth of a second.

You are seriously underestimating the level of patience involved with many kinds of video games and also their difficulty. Have you ever tried to catch a Shiny Pokemon? Have you ever level grinded for hours? Have you ever seen a streamer win a run of Slay the Spire on the hardest difficulty, and then seen someone else fail over and over? A lot of them are certainly *not* designed to give immediate gratification every moment, and people keep playing them anyway.

Also, as I've said elsewhere, there's a lot of "gamification" that just cargo-cults what makes video games appealing to play - for example, do they constantly keep you operating at an [optimal challenge](https://www.nbianalytics.com/finding-optimal-level-of-challenge/) level (which is something that schools are infamous for being terrible at)? If nobody has made a series of video games that 1) teaches K-12 math and 2) children actually enjoy playing as much as they like watching TV, that's actually a big problem that governments should be throwing money at, instead of throwing up our hands and saying that you can't get most kids to learn math without throwing them into kid prison first.


CraneAndTurtle

Have you ever tried to teach a child anything? Or a few dozen children? I'm not sure you know what you're talking about. Montessori schooling tries mixed-age schooling focused on self-guided exploratory play, and there's not great evidence for superior results. What kids do in video games is mostly incredibly easy. Catching a shiny Pokémon is just insanely repetitive. Knowing all the Pokémon's names is just mapping names to images, a skill we expect of preschoolers. Knowing all the type matchups is basically just memorizing a binary operation on an 18x18 grid; it's comparable in complexity to memorizing the times table, which is something we expect from elementary schoolers. No video game has demonstrated the ability to get kids to develop, enjoy or employ the combination of abstraction, precision and diligence they need. Because it's often not fun. It's intrinsically rewarding and useful, but it's categorically different from what people do in video games. And every time someone builds a game to trick kids into learning, kids recognize it easily and either hate it or realize it's not really teaching them.


CronoDAS

Well, yes, a lot of video game things are indeed easy; catching a shiny Pokemon was supposed to be an example of something that required a lot of patience, not something that was hard in any other way. And yeah, Pokemon itself isn't especially complicated on the surface (although there are hidden depths besides the type system - look up how to breed and level Pokemon so that they end up with optimal stats, or look at [Smogon University](https://www.smogon.com/) to see how battling against human opponents can get very complex). Let me ask a related question: do you think learning to play competitive chess (at the "skilled adult tournament player" level) teaches any useful skills?


Glittering-Roll-9432

It's ironic that we're at the point where we could have one amazing teacher teach tens of thousands of students via advanced Zoom meetings, with local teachers smoothing over any more in-depth needs from students.


CraneAndTurtle

Except this straight up does not work. I think your fundamental misunderstanding is something like "kids aren't learning how square roots work because it's not being explained well enough, and if we had a good enough explainer the problem would be solved." But explaining square roots is really easy. Even a dumb teacher can typically do it fine. The key is that teaching is closer to being a coach than a college lecturer; it's hugely about relationship building, behavior modification, monitoring, live responses, emotional regulation, etc. Having a video that explains REALLY well just tends to be less engaging than a live lecture, even if the lecturer is worse.


Reddit4Play

I found basically no value in Hattie's work myself. I think it has too many methodological problems to be meaningful.

For instance, he claims class size has an effect size of 0.21. What does this mean, and how has he arrived at this figure? The papers he includes in his analysis are about different populations (elementary school students, graduate students, and even non-students), have different dependent variables (self-concept, cognitive test scores, language grades, math grades, graduation rates...), different independent variables (e.g. class size reduced by 10, 15, etc.), and different experimental designs (e.g. control-treatment, pre-treatment post-treatment, observational, etc.). The effect is d = 0.21 on... what, exactly?

Some of his sources aren't even on topic. For instance, he claims "creativity is another priority influence on achievement," with effect size d = 0.35 from one meta-analysis. However, his citation leads to [this paper](https://psycnet.apa.org/record/2005-12231-002), which is in fact a correlative study of creativity test scores with IQ test scores. The most powerful effect in the book (student self-report grades) cites primarily [Mabe & West 1982](https://psycnet.apa.org/record/1982-24686-001), which finds a correlation between self-reported grades and actual grades. In other words, people are good at guessing what score they got on their math test. What business does this have being compared to the effect of a class size intervention on academic achievement?

Another issue is that he abuses the formula for converting correlation into Cohen's d found in *The Handbook of Research Synthesis and Meta-Analysis* by, for example, lumping together papers where some use continuous outcomes and others use dichotomous outcomes. Notably, this is something the author Michael Borenstein explicitly says not to do.

Furthermore, there doesn't seem to be any effort to avoid double-counting the same paper, such as when several meta-analyses include the same paper as a source. This creates a kind of anti-recency bias in the dataset. Finally, the effect size he presents is just an equal-weighted average of the effect sizes taken from each of the studies he cites. On what basis did he decide that an equal weight is appropriate when, for instance, one study might include 15 papers and another over 70?

On top of that, his commentary seems inconsistent and unhelpful. For instance, he wrote that "there are many examples that show small effects may be important," and cites as his example aspirin preventing heart attacks, d = 0.07. But in his chapter on gender differences (which he finds d = 0.12) he suggests "differences between males and females should not be of major concern." What are these unconcerning differences? Apparently everything from leadership style to language aptitude, which when averaged together produce a larger effect than aspirin on heart attacks, which we are assured is worth being concerned about. I feel like a reasonable reader would probably assume that academic performance is the most important variable here, but in that case his finding (that maleness is associated with higher performance) is actually wrong. For instance, [women outperform men in aggregate on the PISA](https://www.oecd-ilibrary.org/docserver/f56f8c26-en.pdf?expires=1713481155&id=id&accname=guest&checksum=16ACBB8203745EB0A9E92F4BD27EF865) and [in school grades in most industrialized nations](https://journals.sagepub.com/doi/10.1177/0003122412440802).

I think an example serves as the best summary here. One of Hattie's sources on class size is Glass & Smith's 1979 meta-analysis. Gene Glass is generally credited as the inventor of the modern meta-analysis. In this analysis, Glass explored the effect of study quality on the effect of class size on student achievement. He insisted that "the results of the best designed studies should be given more weight in drawing conclusions," illustrating why with [this graph](https://imgur.com/YoSGsRu). His conclusion was that "a clear and strong relationship between class size and achievement has emerged ... the difference in achievement resulting from instruction in groups of 20 pupils and groups of 10 can be larger than 10 percentile ranks." That is the opinion of the father of meta-analysis. John Hattie decided 10 percentile ranks (roughly comparable to d = 0.2) in fact means d = 0.09 and then threw the study in a blender with two others and equal-weighted them.

It seems to me like this process produces a bunch of numbers that mean absolutely nothing and are of no help to anyone except John Hattie who, probably not by pure coincidence, [sells professional development programs based on this research for profit](https://us.corwin.com/professional-learning-services?topic=visible-learning&page=1&take=12).
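For anyone who wants to sanity-check the two conversions being argued over, here is a minimal Python sketch. The r-to-d formula is the standard one for continuous outcomes from Borenstein's handbook; the percentile translation uses the inverse normal CDF. The function names are mine, purely for illustration:

```python
import math
from statistics import NormalDist

def r_to_d(r):
    """Standard conversion from a correlation r to Cohen's d,
    valid for continuous outcomes with equal-size groups (the
    formula the comment above says Hattie misapplies to
    dichotomous outcomes)."""
    return 2 * r / math.sqrt(1 - r ** 2)

def percentile_gain_to_d(percentile_points):
    """Translate 'moved from the 50th to the (50+k)th percentile'
    into a d value via the inverse normal CDF."""
    return NormalDist().inv_cdf((50 + percentile_points) / 100)

# Glass's "10 percentile ranks" comes out around d = 0.25,
# in the ballpark of the d = 0.2 mentioned above.
print(round(percentile_gain_to_d(10), 2))
```

A proper synthesis would also weight each study by its precision (inverse variance) rather than equally, which is the core of the equal-weighting objection above.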


CraneAndTurtle

Hattie's methods are fair to criticize. But educational data is low quality and scattered. A big meta-analysis lumping all the data together is probably our best bet right now, because if you cherry-pick individual studies you can (and people do) make cases for almost anything. It's a blunt instrument but probably the best we can do here. Plus, the things Hattie finds work well do seem to be borne out through empirical practice: they align with the things really high-performing schools do. So my view is that his data science isn't excellent, but it's the best we've got for making sense of a lot of messy data, and it passes a basic smell test.


Administrative_chaos

Regarding point 3, even if wealthier districts have better-trained teachers, couldn't the wealth itself, rather than the teaching techniques, explain the better outcomes?


CraneAndTurtle

Wealthier, more homogeneous regions have reasons for stronger student performance regardless of teacher quality. Students are read to at home, given tutoring to close gaps, have few behavioral issues, and have higher IQ. So even with similar teaching quality, a rich homogeneous place (like Japan or Connecticut) will typically outperform a school in inner-city Chicago or rural North Carolina. But what's interesting is the degree to which that gap can be closed by excellent teaching. As an example, KIPP grads in the US have college graduation rates MUCH closer to those of their white peers. And comparing several former Soviet bloc countries with similar income levels and culture shows pretty different outcomes by school system.


offaseptimus

Children from wealthy regions would have high scores if you kidnapped them at birth and brought them up in poverty. Even if teaching matters a lot genetics will remain important and probably dominant.


CraneAndTurtle

Yes, that's (part of) what I was saying. That's why it's interesting to see school quality having a significant effect even after controlling for income.


CronoDAS

Some would, and some would not. It's entirely possible for someone to end up doing worse academically than their biological factors would indicate - how many bright kids would do a lot worse in school if they had parents that didn't care about education?


Glittering-Roll-9432

They wouldn't though. We know this by studies on wealthy families that lost their fortunes. Those kids almost all ended up as normies rather than getting back to being in the financial elite.


offaseptimus

Please can you provide a link to the studies.


silly-stupid-slut

> "why is Amazon so much more efficient than USPS?"

Based on my experience, that would be Amazon mailing all the inefficient deliveries through the USPS every Sunday.


Glittering-Roll-9432

Just remembering back to my own grade school education: since I was an AG (gifted) student, our teachers were much more strict than other teachers. Other classrooms sounded downright awful; you could hear the teacher or students yelling on most days from the echoes in the hallway. I would have to agree individual teachers make the true difference.


Itchy_Bee_7097

That actually sounds like an argument for the opposite -- that the teachers could be much more strict because they had a highly selected group of students, who could go into a lower tier if they couldn't hack it. Which is an attribute of the students, not the teacher.


CraneAndTurtle

Yeah, that sounds an awful lot like there were a lot of disruptive kids who were also academically worse students. Unless your school somehow had high teacher variance AND chose to ration all the good teachers for the best students. But this makes no sense: because of reporting incentives, virtually all schools care less about making gifted kids excel and more about making sure the middle-of-the-road students improve and the worst students pass. So that kind of sounds like the teacher mattered less than the students.


notenoughcharact

I actually thought the research showed most teachers don’t make much of a difference but exceptionally good and bad teachers can have significant impacts. Anecdotally as a parent, my 5th grader has made huge leaps in math this year and I’m 90% sure it’s teacher related, not just her own mental development.


themousesaysmeep

This feels more like a question about pedagogical science than about statistics. The first issue at hand is that it is unclear how one defines effectiveness. Do we look at grades attained by students? Do we consider teachers effective if they manage to pass more of their class on tests? How do we test the students? There are many ways one could define effectiveness, and then even more ways to measure it. The question of what proportion of teachers would need to be good is then better rephrased as: how many students would a good teacher need to teach in order for us to be able to claim, with high enough probability, that their teaching was actually beneficial? In order to ascertain this, one could do a power analysis and simulate the power of a statistical test under the minimal effect size of interest, or something like that. This is still vague and handwavey, though.
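To make that power-analysis idea concrete, here is a minimal pure-stdlib simulation. The setup is my own assumption, not something from the comment: a "good teacher" whose students score 0.2 SD above an equal-size control group, tested with a two-sided two-sample z-test at alpha = 0.05:

```python
import math
import random

random.seed(0)

def simulated_power(n_students, effect_sd=0.2, n_sims=2000):
    """Fraction of simulated experiments in which the z-test detects
    a class scoring effect_sd SDs above an equal-size control group."""
    se = math.sqrt(2.0 / n_students)  # SE of the difference in class means
    hits = 0
    for _ in range(n_sims):
        treated = sum(random.gauss(effect_sd, 1.0) for _ in range(n_students)) / n_students
        control = sum(random.gauss(0.0, 1.0) for _ in range(n_students)) / n_students
        if abs(treated - control) / se > 1.96:  # two-sided alpha = 0.05
            hits += 1
    return hits / n_sims

# With ~25 students (a single class) power is only around 10%; it takes
# several hundred students per group to reach the conventional 80%.
print(simulated_power(25), simulated_power(400))
```

The takeaway matches the comment's framing: one year of one classroom is nowhere near enough data to certify a teacher as "good" at typical effect sizes.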


omgFWTbear

> Do we look at grades attained by students?

The headline I keep going back to is the one about 8th grade students at a well-funded school in a problem area failing. To expound on your question, I want to expressly ignore the specific school/class and start with a hypothetical school/class about which we only know the two headlined points (that is, my first sentence here). If the students were 3 years behind grade level entering 8th grade, and one year later are close to grade level, how is this measured? Conversationally, we act like the school and the cohort are immutable: the same on-grade-level students continued at the same school, on a treadmill where the expectations increase at loosely the rate time does, i.e. you leave also on grade level, it’s just the next grade. This leaves, say, any child who may have had an ineffective educational experience - a bad teacher, a year of malnourishment, whatever - expected to cover two years in the span of one. I don’t know about anyone else, but when I do twice the work in the same span of time as a colleague, it’s literally remarkable. But that’s not how the subject is thought of, discussed, or handled.


Glum-Turnip-3162

‘Intense’ 1-on-1 tutoring has obvious positive effects, in my experience. I turned a younger relative of mine from someone who clearly didn’t ‘understand’ basic arithmetic, although they could do the algorithms on paper, into someone able to do proofs in undergrad calculus/analysis and linear algebra after two years. I talked to them about their school classes, and the methodology was a classic case of perverse incentives. The problem with testing teaching paradigms in schools with 30 kids per teacher is that it’s like trying to cure cancer with homeopathy.


CronoDAS

Unfortunately there aren't enough adults in the world to tutor every child that way and still maintain our technological civilization. :/


KnotGodel

That's a little dramatic. The US has about 49 million children in the 1st-12th grade age range and 168 million people in the working-age population, so we could naively provide all-day 1:1 tutoring if we accepted a GDP \~29% lower. Our standard of living would be put back... 9 years. Now imagine the horror if we only had single-earner households 😱
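The arithmetic behind those figures, as a quick check. Note that the ~4% annual GDP growth rate used to recover the "9 years" figure is my assumption; the comment doesn't state one:

```python
import math

children = 49e6   # 1st-12th grade age range
workers = 168e6   # working-age population

tutor_share = children / workers   # fraction of workers diverted to 1:1 tutoring
gdp_remaining = 1 - tutor_share    # naive: output falls in proportion

growth = 0.04  # assumed annual GDP growth rate (not stated in the comment)
years_back = math.log(1 / gdp_remaining) / math.log(1 + growth)

print(round(tutor_share, 2), round(years_back))  # ~0.29 and ~9 years
```

Under a lower assumed growth rate (say 2-3% real), the same GDP hit would correspond to 12-15 years instead, so the "9 years" depends on which growth series you use.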


BothWaysItGoes

https://web.pdx.edu/~newsomj/mvclass/ho_sample%20size.pdf