The next part of the output shows the contrast results, including the custom contrasts I set up. The first contrast compares reward (coded −1) against punishment and indifference (both coded 0.5). The second contrast compares punishment (coded 1) against indifference (coded −1). Note that the codes for each contrast sum to zero, and that in contrast 2 reward has been coded 0 because it is excluded from that comparison.
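To make the arithmetic behind these contrasts concrete, here is a minimal Python sketch (using NumPy and SciPy) that applies the weights above to some hypothetical group scores; the real data behind the output aren't reproduced here, so the numbers are illustrative only. It computes each contrast estimate and tests it against the pooled ANOVA error term, using the standard formula t = ψ̂ / √(MSE Σ(c²/n)):

```python
import numpy as np
from scipy import stats

# Hypothetical exam marks for the three groups (illustrative only; the real
# data behind the output discussed in the text are not reproduced here).
reward       = np.array([78.0, 85, 80, 90, 84])
punishment   = np.array([55.0, 60, 52, 58, 50])
indifference = np.array([65.0, 70, 62, 68, 66])
groups = [reward, punishment, indifference]

# The weights described in the text; each set sums to zero, and reward gets
# a 0 in contrast 2 because it is excluded from that comparison.
c1 = np.array([-1.0, 0.5, 0.5])   # reward vs punishment + indifference
c2 = np.array([0.0, 1.0, -1.0])   # punishment vs indifference
assert c1.sum() == 0 and c2.sum() == 0

def contrast_t(groups, weights):
    """t-test for a planned contrast using the pooled (ANOVA) error term."""
    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_error = ns.sum() - len(groups)        # N - k
    ms_error = ss_within / df_error          # MS error from the one-way ANOVA
    estimate = weights @ means               # the contrast value, psi-hat
    se = np.sqrt(ms_error * (weights ** 2 / ns).sum())
    t = estimate / se
    p = 2 * stats.t.sf(abs(t), df_error)     # two-tailed p-value
    return estimate, t, df_error, p

for label, w in (("Contrast 1", c1), ("Contrast 2", c2)):
    est, t, df, p = contrast_t(groups, w)
    print(f"{label}: estimate = {est:.2f}, t({df}) = {t:.2f}, p = {p:.4f}")
```

Because the weights within each contrast sum to zero, each t-test asks only whether its own weighted combination of group means differs from zero, independently of the overall level of the marks.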
The t-test for the first contrast tells us that reward was significantly different from punishment and indifference (significantly different because the value in the column labelled p is less than our criterion of 0.05). Looking at the direction of the means, this contrast suggests that the average mark after reward was significantly higher than the average mark for punishment and indifference combined. This is a massive effect (so big that, if these were real data, I'd be incredibly suspicious): d = −2.32 [−3.34, −1.29]. The sign is negative simply because reward was coded −1 in the contrast, so a higher reward mean produces a negative contrast estimate.
The second contrast (together with the descriptive statistics) tells us that marks after punishment were significantly lower than after indifference (again, the value in the column labelled p is less than our criterion of 0.05). This effect is also very large, d = −1.12 [−2.09, −0.15]. As such, we can conclude that reward produces significantly better exam marks than punishment and indifference combined, and that punishment produces significantly worse marks than indifference. In short, lecturers should reward their students, not punish them.
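As a rough check on those d values, the sketch below computes a simple two-group Cohen's d (mean difference over the pooled standard deviation) for the two 'chunks' each contrast compares, using the same hypothetical scores as before. This pooled-SD definition is an assumption made for illustration; it is not necessarily the exact formula the software uses for its point estimates and confidence intervals.

```python
import numpy as np

# The same hypothetical scores as in the earlier sketch (illustrative only).
reward       = np.array([78.0, 85, 80, 90, 84])
punishment   = np.array([55.0, 60, 52, 58, 50])
indifference = np.array([65.0, 70, 62, 68, 66])

def cohens_d(x, y):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_var = (((x - x.mean()) ** 2).sum() + ((y - y.mean()) ** 2).sum()) \
                 / (len(x) + len(y) - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Contrast 1 compares the punishment + indifference 'chunk' with reward; with
# reward coded -1, a higher reward mean yields a negative d, as in the output.
d1 = cohens_d(np.concatenate([punishment, indifference]), reward)

# Contrast 2 compares punishment (coded 1) with indifference (coded -1).
d2 = cohens_d(punishment, indifference)

print(f"Contrast 1: d = {d1:.2f}")
print(f"Contrast 2: d = {d2:.2f}")
```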