I agree with your analysis, but similar questions arise with the human scoring in many states. Years ago a friend worked with a researcher at MIT who discovered that students writing their MCAS essays in a certain pattern were more likely to receive a good score than those who wrote more engagingly or were better at answering what the question actually asked. Perhaps that problem has been rectified in Massachusetts; I don't know about other states. When graduation depends on the scores on these tests, it is important that we take a much closer look at scoring, whether it is done by humans or machines.
