[BC] Grading - was Engineering school teachers

Gary Peterson kzerocx
Tue Oct 4 09:24:48 CDT 2005


"Curve grading" is an interesting subject to me.  I doubt that even one
teacher in a hundred really grades "on the curve."  Many use set
percentages, such as "93% to 100% is an A", etc.  The level of difficulty of
the test will have a huge effect on the grade distributions with this
method.  I have known many teachers who would do a frequency distribution of
test scores and look for "natural breaks."  These "natural breaks" would
become the lines of demarcation separating letter grades.

None of the above qualify as "curve grading."  Curve grading is based on the
assumption that a good test (one that discriminates levels of understanding
or competency) will, given enough test scores and an average population,
generate a bell-shaped, normal curve.  Because most schools define letter
grades (A=excellent, B=above average, C=average, D=below average,
F=failing), "curve grading" involves correlating statistical methods with
letter grades.  How can one say that a certain kid is average without
knowing what average is?

When I taught five sections of chemistry (about 130 students), I used curve
grading.  All tests were of the objective variety.  A different exam was
given to each group.  However, the exams were identical in scope (same type
of problem, with different data).  After correcting the exams, I computed the
mean and standard deviation of all 100+ test scores.  Any score within plus or
minus 1/2 standard deviation from the mean was average and would have
received a C.  One-half to 1-1/2 standard deviations from the mean defined
the Bs and Ds.  The scores greater than 1-1/2 standard deviations from the
mean defined the As and Fs.  When the exams went back to the students, they
received a numerical score (which was entered in the grade book) and the
letter grade it would have received, if letter grades had been assigned.  I
felt that this gave the student a good idea of where they were in the group.
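The half-sigma banding described above can be sketched in a few lines of
Python (my own sketch, not Gary's actual procedure; how a score falling
exactly on a band boundary is handled is an assumption, since the post does
not say):

```python
import statistics

def curve_grades(scores):
    """Assign letter grades by half-sigma bands around the class mean.

    Sketch of the scheme described above: within +/- 1/2 SD of the mean
    is a C, 1/2 to 1-1/2 SDs out is a B or D, and beyond 1-1/2 SDs is
    an A or F.  Boundary tie-breaking here is an assumption.
    """
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    letters = []
    for s in scores:
        z = (s - mean) / sd          # distance from the mean in SDs
        if z >= 1.5:
            letters.append("A")
        elif z >= 0.5:
            letters.append("B")
        elif z > -0.5:
            letters.append("C")      # within +/- 1/2 SD: average
        elif z > -1.5:
            letters.append("D")
        else:
            letters.append("F")
    return letters
```

With scores of 50, 60, 70, 80, and 90, the mean is 70 and the sample
standard deviation is about 15.8, so the letters come out D, D, C, B, B.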

I kept only numerical scores in the grade book, which resulted in a more
accurate determination of the final grade.  At the end of the quarter, I
would add all test scores, quiz scores, laboratory report scores, and points
for class participation, then run a mean/standard deviation on the total
point scores for the entire grading period.  From this, I was able to assign
the
A-B-C-D-F letter grades required for report cards.  I did deviate from this
procedure when it came to failing grades.  If a kid was making a genuine
effort (coming in for extra help, turning in assignments, paying attention
in class, safe behavior in lab, good attendance, etc.) to pass the course, I
would use "executive privilege" and bump the F to a D.  I never felt that
someone who received a D in high school chemistry would be likely to pursue
a career designing nuclear power plants or performing brain surgery.  My
grading method
would have been entirely defensible in court.  It was entirely objective
(other than bumping Fs to Ds, which wasn't likely to generate complaints).
There were times when students received much higher grades than I would have
liked (because of their behavior and attitude), but my method removed the
subjectivity.  It shouldn't make any difference whether the teacher likes
the student or not.

That's my story and I'm sticking to it.

Gary Peterson, K0CX
Rapid City, SD

"   I've heard of "grading on the curve," but I never knew what it was used
for.  If the bottom 1/5 of a class were automatically given Fs because it
would take too much of the instructor's time to bring them to the level of
the rest of the class, I might understand it, but to boot out half of the
students even if they only missed one or two questions more doesn't seem to
be productive to me.
Ron Youvan "


