Judgment and Decision Making, vol. 7, no. 6, November 2012, pp. 761-767

Debiasing egocentrism and optimism biases in repeated competitions

Jason P. Rose*   Paul D. Windschitl#   Andrew R. Smith%

When judging their likelihood of success in competitive tasks, people tend to be overoptimistic for easy tasks and overpessimistic for hard tasks (the shared circumstance effect; SCE). Previous research has shown that feedback and experience from repeated-play competitions have a limited impact on SCEs. However, in this paper, we suggest that competitive situations in which the shared difficulty or easiness of the task is more transparent will be more amenable to debiasing via repeated play. Pairs of participants competed in, made predictions about, and received feedback on, multiple rounds of a throwing task involving both easy- and hard-to-aim objects. Participants initially showed robust SCEs, but they also showed a significant reduction in bias after only one round of feedback. These and other results support a more positive view (than suggested by past research) of the potential for SCEs to be debiased through outcome feedback.


Keywords: egocentrism, shared-circumstance effect, comparative judgment, optimism, feedback and experience.

1  Introduction

Competition abounds in everyday life, where we contend with others for top grades, jobs, trophies, and mates. When resources are at a premium, it is optimal to enter competitive environments in which we are certain to fare well and to avoid those in which we are doomed to fail. However, when people evaluate their likelihood of success in competitions, they are subject to a robust bias. Ideally, a competitor should weigh the strengths and weaknesses of both the self and the other competitor(s) (Burson, Larrick, & Klayman, 2006; Moore & Kim, 2003; Windschitl, Rose, Stalkfleet, & Smith, 2008), but people often give too much weight to evidence about their own strengths and weaknesses and too little weight to such evidence about the competitor (Pahl, 2012; Rose, Jasper, & Corser, 2012; Rose & Windschitl, 2008; Windschitl, Kruger, & Simms, 2003; Windschitl et al., 2008). This egocentrism results in overoptimism when the circumstances of the competition are favorable, such as when competitors in a trivia game learn that the questions will be from an easy category—even though they’ll be easy for everyone (Moore & Kim, 2003; Windschitl et al., 2003). Egocentrism also results in overpessimism when the circumstances are unfavorable (e.g., a difficult trivia category). This phenomenon of being more optimistic when shared competitive circumstances are favorable than when they are not has been dubbed the shared-circumstance effect (SCE; Windschitl et al., 2003), and it has been replicated across a variety of settings (e.g., general knowledge tasks, card games, athletic competitions).

In most previous studies on the SCE, participants were presented with novel, non-repeated competitive situations. These situations did not allow people to learn from past experiences or from feedback within the immediate competitive context. However, in everyday contexts there are often opportunities to learn how a shared circumstance tends to affect the self, others, and outcomes. For example, when a tennis tournament is played during a string of windy days, players can have several opportunities to observe how the weather affects themselves and their competitors.

To examine whether egocentrism and SCEs persist in repeated-play contexts, Rose and Windschitl (2008) had pairs of participants compete against each other in multiple rounds of a trivia contest that involved easy and hard categories. In each round, participants estimated their likelihood of beating their competitor, answered trivia questions, and received feedback about who won. In initial rounds, there were robust SCEs; participants were much more optimistic about winning easy categories than hard ones. When the same hard and easy categories repeated across rounds, participants did learn from feedback. That is, the SCE shrank—but slowly—across six rounds with the same categories. The SCE was never eliminated, even after six rounds. Also, for a seventh round, participants were told there would be new categories. The SCE for that round dramatically and fully rebounded; it was every bit as large as observed for Round 1. These results provide a bleak view of how well people can learn from feedback and avoid SCEs. Moreover, results from a study by Moore and Cain (2007), which also used repeated plays with feedback, suggest an even bleaker view. Those researchers also used easy and difficult quizzes as shared-circumstance manipulations, but found virtually no reduction in SCEs after numerous rounds with feedback. (For related findings, see Study 3 of Burson & Klayman, 2005.)

Are people’s abilities to learn to avoid SCEs—based on feedback—really as bleak as this prior research might suggest? We argue that some shared circumstances are more transparently shared than others, and this may affect how readily people learn to avoid the bias that creates SCEs—and how easily they can transfer this learning to a slightly new set of shared circumstances. By transparently shared, we are referring to how obvious it is that a circumstance that is helpful or hindering to the self will affect others in largely the same way.

In the present study, we examined the influence of repeated feedback on SCEs. Unlike in past research, however, we used a competition in which the difficulty of the shared circumstance is more transparently shared than in, for example, competitions involving easy and hard trivia categories. In a multi-round paradigm, participants competed in object-tossing competitions. In each round, two competitors each had 8 throws per object—attempting to land the object inside a target area. There was always one easy-to-aim object (e.g., a beanbag) and one hard-to-aim object (e.g., a paper plate), which constituted our shared-circumstance manipulation. Full feedback was given during and after each round, and predictions were solicited before each round and also before a final round with novel objects.

We suspected that a tossing competition, rather than a trivia competition, would produce less bleak results about the debiasing of SCEs through repeated play. In the case of trivia, watching one’s competitor fail to answer trivia questions gives little insight into why the category is difficult for that person. Also, knowing that one’s competitor struggled on a difficult category does not provide obvious information about why he or she might struggle on another difficult category. In the case of throwing, however, watching a competitor fail when tossing an object probably illuminates the relevance of specific shared, situational circumstances (i.e., the properties of the specific objects) and fosters a more general awareness of the influence that object properties have on throwing success—for anyone. For example, seeing a paper plate fly unpredictably will likely give an observer a clear impression that the object will fly unpredictably regardless of who is throwing it. For participants, this enhances the appreciation that one’s struggles are due not primarily to personal characteristics but to properties of the tossed objects; this would then be useful in mitigating egocentrism and SCEs, even when new objects are introduced.

Consequently, we expected that, even though participants might reveal a robust SCE at Round 1 (prior to any feedback or observations regarding their competitor), they would show a pronounced learning effect after the feedback and observations of Round 1. That is, they would show significantly reduced SCEs starting immediately after Round 1. We also expected that this learning would be generally transferable. That is, unlike prior work (Rose & Windschitl, 2008), we expected that, when participants learned that they were throwing novel objects in the final round, they would not revert to the same degree of SCEs as they had exhibited in the first round.

2  Method

2.1  Participants and design

Fifty undergraduates participated to fulfill a research-exposure requirement (one pair was removed due to having incomplete data). We used a 2 X 6 design, where object difficulty (easy or hard) and round (1, 2, 3, 4, 5, or 6) were both manipulated within participants. We also manipulated which of three object sets were used. These object sets are described below, but preliminary analyses revealed no interactions with this factor, so it will not be described in the results below. Other counterbalancing factors are described below. Sample size was predetermined and was intended to provide adequate power in light of our prior research using a similar design (Rose & Windschitl, 2008).

2.2  Information about the objects and competition

The competition involved tossing objects onto a 4 X 4 ft (1.21 X 1.21 m) bull’s-eye target on the floor from about 10 ft (3 m) away—earning 0–3 points depending upon landing location. Both competitors threw the same two objects (many times). Based on pilot testing, we knew that one of the objects was generally easy to throw accurately and the other was generally hard—this constituted our within-participant manipulation of object difficulty. Depending on the session, the easy and hard objects were a small beanbag and paper plate (Set 1), a small (playing card) box and round plastic container (Set 2), or a flattened toilet paper roll and an irregularly-shaped eraser (Set 3), respectively.

2.3  Procedure and measures

Participants entered sessions in pairs and were introduced to each other and to the basics of the competition (including tie-breaker procedures). Prior to the competition, each participant was given practice throws in private (8 throws per object), while the co-participant was in an adjoining room. The experimenter told each participant what his/her score would have been had the practice throws been scored.

Prior to Round 1, each participant completed a form asking him/her to estimate the likelihood that he/she would score more points in the upcoming round than their co-participant for each object, from 0% (no chance) to 100% (certainty). Specifically, participants were asked: “For each item, please estimate the likelihood that you will be able to beat your competitor in throwing the item accurately at the target. That is, what is the likelihood that you will have more points than will your co-participant? You can use any number between 0% and 100%. 0% would mean you think there is no chance you will beat your competitor. 100% would mean you are absolutely certain you will beat your competitor.” Participants also estimated their own and their co-participant’s number of points for each object out of 24 (8 throws for up to 3 points each). Specifically, participants were asked “For each item, please estimate the number of points you will have across 8 throws” and “For each item, please estimate the number of points your co-participant will have across 8 throws.”

For Round 1, one participant took all 8 throws (in public) for the first object, then all 8 throws for the second object. The second participant then did the same. The order in which the easy vs. hard objects were thrown was varied between-participants. Also, the order in which the two participants threw alternated across rounds. At the end of Round 1, the experimenter tallied the number of points and wrote the scores on a white-board in view of the participants. After this feedback, participants went on to the next round and continued through 6 total rounds. Each round involved another set of likelihood and score estimates, throws using the same 2 objects, and full feedback.

After Round 6, participants were shown 4 novel objects (2 easy, 2 hard) that would ostensibly be thrown for a 7th round. Participants were given two practice throws each for these novel objects (in public) before making likelihood and score estimates. At this point, and before actually having to make these throws, participants were debriefed and dismissed.


Table 1: Point totals as a function of object difficulty and round.

                          Round
                 1      2      3      4      5      6
Easy object
  M            15.78  15.76  16.14  15.72  16.32  16.60
  SD            3.80   3.97   4.17   4.37   3.53   3.18
Hard object
  M             7.24   7.90   8.06   7.88   7.96   7.70
  SD            4.71   4.88   4.80   4.46   4.69   4.76

Note. The point totals for each round are out of a maximum of 24 for each object.

3  Results and discussion


Table 2: Mean likelihood and absolute score estimates by difficulty and round.

                               Round
                 1      2      3      4      5      6    Post-comp
Likelihood estimates
Easy
  M            62.3   60.2   60.5   57.5   59.5   59.7     59.1
  SD           17.1   21.3   22.1   26.1   26.1   26.4     17.1
Hard
  M            44.4   49.3   50.5   50.5   51.8   52.7     50.7
  SD           19.9   17.2   21.9   21.3   22.2   22.4     18.9
SCE
  M            17.9   10.9   10.0    7.02   7.66   6.98     8.37
  SD           23.7   24.9   25.1   22.4   32.4   27.5     13.4
Score estimates
Easy-self
  M            15.1   15.9   15.8   16.2   16.3   16.7     13.3
  SD            3.84   4.41   4.56   3.98   4.21   4.03     4.66
Easy-other
  M            14.6   15.8   15.8   16.5   15.9   16.0     12.2
  SD            3.94   4.70   5.13   4.55   4.69   4.42     4.63
Hard-self
  M             8.78   9.08   8.82   9.02   8.94   8.92     9.36
  SD            4.32   4.68   4.66   4.37   4.29   4.59     4.93
Hard-other
  M             9.66   9.34   9.14   9.10   8.66   8.34     8.93
  SD            4.92   5.46   5.23   4.94   4.36   4.44     4.68

Note. The SCE index was calculated by subtracting likelihood estimates made for hard objects in a given round from likelihood estimates made for easy objects in the same round. Values in the “Post-comp” column were mean likelihood and score estimates made for a novel set of easy and hard objects after all 6 rounds of competition had finished.

3.1  Difficulty manipulation check

Analyses of actual scores verified that the objects we identified as easy from pilot testing were, in fact, easier. That is, they resulted in higher scores (M = 16.05; SD = 3.18) than did the hard objects (M = 7.79; SD = 3.89), F(1, 49) = 537.33, p < .01. This effect did not differ across rounds (F = 0.57, p = .72), and there was no main effect of round (F = 1.02, p = .40). (See Table 1.)

3.2  Likelihood estimates


Figure 1: Mean likelihood estimates as a function of difficulty and round.

Table 2 displays means and standard deviations for participants’ likelihood estimates (and other estimates) across rounds. An important initial question is whether there was any SCE at Round 1. Indeed, there was—likelihood estimates were higher for easy objects than hard objects—by 17.9%, t(49) = 5.35, p < .01, d = .76. Because each competition involved 2 competitors, exactly one of whom would win, the average of the likelihood estimates should be at 50% regardless of object difficulty. However, participants were significantly overoptimistic (compared to 50%) about a victory for easy objects (M = 62.3), t(49) = 5.09, p < .01, d = .72, and overpessimistic for hard objects (M = 44.4), t(49) = −1.99, p = .05, d = .28. Critically, the size of the SCE dipped significantly by Round 2, where it was 10.9%. That is, the interaction of a Difficulty (Easy or Hard) X Round (1 or 2) ANOVA was significant, F(1, 49) = 4.34, p < .05, ηp² = .08. This result differs from those of similar comparisons in Rose and Windschitl (2008), where Round 1 feedback did not have a significant impact on SCEs seen in Round 2 predictions.

For testing patterns incorporating data from all rounds, we conducted a Difficulty (Easy or Hard) X Round (1, 2, 3, 4, 5, or 6) ANOVA. A main effect of difficulty revealed a robust overall SCE, in which participants provided higher likelihood estimates for easy objects (M = 59.95; SD = 20.08) than hard objects (M = 49.87; SD = 15.24), F(1, 49) = 13.61, p < .01, ηp² = .22. The round main effect was not significant, F(5, 245) = 0.50, p > .70, ηp² = .01. Most importantly, the Difficulty X Round interaction was significant, F(5, 245) = 2.32, p < .05, ηp² = .05, indicating that the magnitude of the SCE shifted across rounds. Figure 1 visually represents the nature of this shift, with the SCE shrinking across rounds. Paired t-tests between likelihood estimates for easy vs. hard objects revealed significant SCEs in Rounds 1, 2, 3, and 4 (all ts > 2.21, ps < .05), but non-significant SCEs in Rounds 5 and 6 (both ts < 1.8, ps > .05). We submitted the SCE values to a regression analysis with round as a predictor. This linear trend analysis confirmed that the SCE significantly decreased across rounds, β = −.86, t(5) = −3.43, p < .03.¹
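The SCE index and the deviation-from-50% logic described above can be illustrated with a short sketch. The numbers below are hypothetical round means chosen only to resemble the pattern in Table 2, not the study's actual data:

```python
import numpy as np

# Hypothetical mean likelihood estimates (%) across six rounds; values are
# illustrative only, loosely patterned after Table 2.
easy = np.array([62.0, 60.0, 60.0, 58.0, 59.0, 60.0])
hard = np.array([44.0, 49.0, 50.0, 51.0, 52.0, 53.0])

# SCE index: easy-object estimate minus hard-object estimate, per round.
sce = easy - hard
print(sce)  # [18. 11. 10.  7.  7.  7.] -> shrinks after Round 1

# With two competitors and exactly one winner, unbiased estimates should
# average 50% regardless of difficulty; deviations index the bias.
print(easy[0] - 50.0, hard[0] - 50.0)  # Round 1: overoptimism vs. overpessimism
```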

3.3  Accounting for changes in SCEs


Table 3: Results for regression analyses with likelihood estimates as the criterion and score estimates as predictors.

             β (Self)   β (Competitor)   Differential     r
Round 1        .85**        −.35**            .50        .76
Round 2        .90**        −.51**            .39        .74
Round 3        .80**        −.53**            .27        .57
Round 4        .91**        −.69**            .22        .70
Round 5        .82**        −.68**            .14        .73
Round 6       1.05**        −.84**            .21        .82
Post-comp      .86**        −.45**            .41        .68

Note. Regression analyses were conducted between-participants with self and competitor score estimates as simultaneous predictors of likelihood estimates (*p < .05; **p < .01). The differential column conceptually reflects the extent to which the self and competitor score estimates have equivalent weight in predicting likelihood estimates. For example, a differential of “0” would reflect that the betas for self and competitor score estimates were equal but in opposite directions (+/−). The values in the r column reflect the mean correlation between self- and competitor-score estimates.

What accounts for SCEs and changes in SCEs across rounds? There are two main possibilities to explore—a differential regression account and an egocentric weighting account. The differential regression account would assume that the SCE and changes in it are a direct result of people’s score expectations for the self and their competitor (Chambers & Windschitl, 2004; Moore, 2007; Moore & Small, 2007). Namely, the account suggests that people initially exhibit a SCE because they expect to score low on the hard objects and score higher on the easy objects. However, their expectations about the scores of the competitor are relatively regressive—that is, closer to a mid-level than are their expectations about scores for the self. This regressiveness can be considered sensible because people know less about the competitor than the self (Moore & Small, 2007; Windschitl et al., 2003). If this regressiveness lessens across rounds (as participants learn more about their competitor’s performances with hard and easy objects), this would result in a decreased SCE across rounds (i.e., increased optimism about winning for hard objects and decreased optimism for easy objects).

The egocentric weighting account assumes that, irrespective of any differential regression effects, people’s attention can be egocentrically biased (Chambers & Windschitl, 2004; Rose & Windschitl, 2008; Windschitl et al., 2008). Even when there is relatively little difference in their score expectations of how the self and their competitor will do on a hard object, their likelihood estimates about winning are disproportionately influenced by thoughts about how hard it will be for them to aim the objects, not how hard it will be for their competitor (for additional detail, see Windschitl et al., 2008). Across rounds, the feedback might reduce this tendency because people will be exposed to the successes and failures of the competitor. It is important to note that the differential-regression and egocentric-weighting accounts are not mutually exclusive.

To assess support for a differential regression account, we created self-other (S-O) difference scores (i.e., a participant’s predicted score for the self minus his/her prediction for the competitor). We then submitted those scores to many of the same analyses (e.g., Difficulty X Round ANOVAs) conducted for the likelihood estimates. If differential regressiveness provides a fully adequate account for SCEs and changes in SCEs, then the pattern of results in these new analyses should match those found for likelihood estimates. Consistent with this account, S-O difference scores were higher in Round 1 for easy objects (M = .42; SD = 3.65) than for hard objects (M = −.88; SD = 3.41), t(49) = 2.10, p < .04. However, there was not a significant Difficulty (Easy or Hard) X Round (1 or 2) interaction like there was for likelihood estimates, F(1, 49) = 1.46, p = .23. There was also no significant main effect of difficulty (F = .67, p = .42), main effect of round (F = .91, p = .48), or interaction (F = 1.19, p = .32) in the Difficulty (Easy or Hard) X Round (1, 2, 3, 4, 5, or 6) ANOVA that included all six rounds. For all of these analyses, the patterns of data were in the right direction, but the strength of these patterns is not nearly enough for us to conclude that differential regression alone is adequate for explaining the patterns of SCEs.
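The arithmetic behind the differential-regression account can be made concrete with a toy example. All numbers below are hypothetical; the midpoint of 12 (of a maximum 24) and the regressiveness factor of .5 are assumptions chosen purely for illustration:

```python
# Hypothetical illustration of the differential-regression account:
# expectations for the competitor are pulled toward a mid-level score
# more strongly than expectations for the self.
mid = 12.0                    # assumed mid-level score (out of 24)
shrink = 0.5                  # assumed regressiveness toward the midpoint
self_easy, self_hard = 16.0, 8.0          # own expected scores

other_easy = mid + shrink * (self_easy - mid)   # regressive estimate: 14.0
other_hard = mid + shrink * (self_hard - mid)   # regressive estimate: 10.0

# Self-other (S-O) difference scores, per object:
so_easy = self_easy - other_easy   # positive -> relative optimism (easy)
so_hard = self_hard - other_hard   # negative -> relative pessimism (hard)
print(so_easy, so_hard)  # 2.0 -2.0

# As feedback reduces regressiveness (shrink -> 1.0), both S-O differences
# move toward 0, which on this account would shrink the SCE.
```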

We suspect that egocentric weighting (and the reduction of it) played an important role in the SCE and its reduction. To test for this, we regressed likelihood estimates onto self score estimates, competitor score estimates, and round (after centering; see also Chambers, Windschitl, & Suls, 2003; Rose & Windschitl, 2008). The first step included the main effects and the second step included all 2-way and 3-way interactions.

As expected, higher self score estimates predicted higher likelihood estimates, β = .86, t = 18.38, p < .01. Unlike in prior research, lower competitor score estimates also predicted higher likelihood estimates, β = −.59, t = −12.57, p < .01. The main effect of round was not significant, β = −.01, t = −.23, p > .10. Critically, the interactions model indicated a significant Self X Competitor X Round interaction, β = .14, t = 3.32, p < .01. To interpret this 3-way interaction, we regressed likelihood estimates onto self and competitor score estimates separately for each round. As can be seen in Table 3, egocentric weighting was relatively strong in Round 1, where self estimates were strong predictors of likelihood estimates (mean β = .85, t = 7.05, p < .01) and competitor estimates were somewhat weaker, although still significant, predictors (mean β = −.35, t = −2.87, p < .01). However, egocentric weighting appeared weaker by Round 6, where both self estimates (mean β = 1.05, t = 7.43, p < .01) and competitor estimates (mean β = −.84, t = −5.93, p < .01) were relatively strong predictors.² Critically, competitor betas tended to become more strongly negative across rounds, based on a regression treating round as the unit of analysis, β = −.97, t(5) = −8.07, p < .01. In contrast, self betas tended to remain consistent across rounds, based on a regression treating round as the unit of analysis, β = .52, t(5) = 1.20, p = .30.

Finally, as another way of examining changes in egocentric weighting across rounds, we created a difference score between beta weights to reflect the extent to which the self and competitor score estimates had equivalent statistical weight in predicting likelihood estimates (a differential of “0” would mean the betas for self and competitor score estimates were equal but in opposite directions). This analysis showed increasingly equivalent weighting of self and competitor estimates across rounds, based on a regression treating round as the unit of analysis, β = −.90, t(5) = −4.26, p < .02. Taken together, these analyses suggest that egocentric weighting was strong in early rounds and weaker in later rounds—providing an explanation for the reduction in SCEs.³
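The regression logic behind Table 3 can be sketched as follows. The data are simulated, with an egocentric weighting pattern built in by construction; `standardized_betas` is an illustrative helper, not the analysis code used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # matches the study's sample size, but the data here are simulated

# Simulated estimates with egocentric weighting built in: likelihood
# estimates lean heavily on self score estimates, weakly on competitor's.
self_est = rng.normal(15, 4, n)
comp_est = rng.normal(15, 4, n)
likelihood = (50 + 3.0 * (self_est - 15)
                 - 1.0 * (comp_est - 15)
                 + rng.normal(0, 5, n))

def standardized_betas(y, *xs):
    """Regress z-scored y on z-scored predictors; return standardized betas."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([z(x) for x in xs])
    betas, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return betas

b_self, b_comp = standardized_betas(likelihood, self_est, comp_est)
differential = abs(b_self) - abs(b_comp)  # 0 would mean equivalent weighting
print(round(b_self, 2), round(b_comp, 2), round(differential, 2))
```

With the asymmetric weights assumed above, the self beta comes out much larger in magnitude than the competitor beta, so the differential is clearly positive, mirroring the early-round pattern in Table 3.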

3.4  Post-competition estimates

After Round 6, participants made likelihood estimates about an additional round (Round 7) that was said to involve 4 novel objects (2 easy, 2 hard). Although participants were more optimistic about winning easy categories (M = 59.10, SD = 17.06) than hard ones (M = 50.73, SD = 18.10), t(49) = 4.41, p < .01, d = .63, the overall SCE did not rebound to the robust levels of Round 1 for these novel objects (See Table 2 and Figure 1). More specifically, the magnitude of the SCE for these novel objects (M = 8.37; SD = 13.39) was statistically similar to the magnitude observed in Round 6 of the main competition (M = 6.98; SD = 27.5), t(49) = −.32, p = .75, d = .05, but was significantly lower than the SCE demonstrated in Round 1 of the main competition (M = 17.95; SD = 23.73), t(49) = 2.77, p < .01, d = .41. Moreover, a regression analysis relating participants’ self- and competitor-score estimates (predictor variables) to their likelihood estimates (criterion variable) showed a moderate degree of differential weighting, where self scores (β = .86, t = 11.39, p < .01) and competitor scores (β = −.45, t = −6.01, p < .01) both predicted likelihood estimates (see Table 3).

4  Conclusions

Previous work investigating the impact of experience and feedback on egocentrism and SCEs in repeated-play competitions has revealed either no changes at all (Study 3 of Burson & Klayman, 2005; Study 1 of Moore & Cain, 2007; Studies 1-2 of Rose & Windschitl, 2008) or changes limited to a restrictive set of conditions (Studies 3-4 of Rose & Windschitl, 2008). In the present experiment, participants showed an immediate reduction in the SCE after only one round of feedback. When presented with 4 novel objects, they still exhibited a SCE, but at a level much lower than for the initial round. While confirming a degree of durability of SCEs, the results present a more optimistic view than past research about the potential for successfully reducing the bias. Additional results from this work suggest that the reduction in SCEs is likely due to a reduction in both differential regression and egocentric weighting.

Repeated or extended play situations involving shared circumstances are quite common in everyday life. For example, businesses compete across months and years, politicians spar in multiple debates, and athletes compete in tournaments. For these competitors, using feedback to go beyond an egocentric assumption about shared circumstances might be easier in some contexts than in others. We believe that, in the present study, the salience of the visual properties of the throwing objects (and of the effects of those properties) helped people to recognize, from early performance feedback, that both they and their competitor were being similarly influenced by the properties of the objects. Consequently, we suggest that in everyday competitions, when a shared circumstance is salient and performance is being obviously influenced by that circumstance, people may be able to use performance and outcome feedback to avoid being wildly overpessimistic (when shared circumstances are unfavorable) or overoptimistic (when they are favorable). For example, when a tennis player, playing in windy conditions, can see instances of the wind affecting shots on both sides of the court, he or she might not be unduly pessimistic about how the next set or match will go under the same weather conditions.

References

Burson, K. A., & Klayman, J. (2005). Judgments of performance: The relative, the absolute, and the in between. Ross School of Business Paper No. 1015. http://ssrn.com/abstract=894129.

Burson, K. A., Larrick, R.P., & Klayman, J. (2006). Skilled or unskilled, but still unaware of it: How perceptions of difficulty drive miscalibrations in relative comparisons. Journal of Personality and Social Psychology, 90, 60–77.

Chambers, J. R., & Windschitl, P. D. (2004). Biases in social comparative judgments: The role of nonmotivated factors in above-average and comparative-optimism effects. Psychological Bulletin, 130, 813–838.

Chambers, J., Windschitl, P. D., & Suls, J. (2003). Egocentrism, event frequency, and comparative optimism: When what happens frequently is “more likely to happen to me”. Personality and Social Psychology Bulletin, 29, 1343–1356.

Deegan, J. (1978). On the occurrence of standardized regression coefficients greater than one. Educational and Psychological Measurement, 38, 873–888.

Moore, D. A. (2007). When good=better than average. Judgment and Decision Making, 2, 277–291.

Moore, D. A., & Cain, D. M. (2007). Overconfidence and underconfidence: When and why people underestimate (and overestimate) the competition. Organizational Behavior & Human Decision Processes, 103, 197–213.

Moore, D. A., & Kim, T. G. (2003). Myopic social prediction and the solo comparison effect. Journal of Personality and Social Psychology, 85, 1121–1135.

Moore, D. A., & Small, D. A. (2007). Error and bias in comparative social judgment: On being both better and worse than we think we are. Journal of Personality and Social Psychology, 92, 972–989.

Pahl, S. (2012). Would I bet on beating you? Increasing other-focus helps overcome egocentrism. Experimental Psychology, 59, 74–81.

Rose, J. P., Jasper, J. D., & Corser, R. (2012). Interhemispheric interaction and egocentrism: The role of handedness in social comparative judgement. British Journal of Social Psychology, 51, 111–129.

Rose, J. P., & Windschitl, P. D. (2008). How egocentrism and optimism change in response to feedback in repeated competitions. Organizational Behavior and Human Decision Processes, 105, 201–220.

Windschitl, P. D., Kruger, J., & Simms, E. N. (2003). The influence of egocentrism and focalism on people’s optimism in competitions: When what affects us equally affects me more. Journal of Personality and Social Psychology, 85, 389–408.

Windschitl, P. D., Rose, J. P., Stalkfleet, M., & Smith, A. R. (2008). Are people excessive or judicious in their egocentrism? A modeling approach to understanding bias and accuracy in people’s optimism. Journal of Personality and Social Psychology, 95, 253–273.


*
Department of Psychology, University of Toledo, Mail Stop #948, 2801 Bancroft St., Toledo, OH 43606–3390. Email: Jason.Rose4@utoledo.edu.
#
University of Iowa.
%
Appalachian State University.
1
We also examined the main and interactive impact of our counterbalancing factors (throwing order and object order) on our main results involving likelihood estimates. Some of the interactions were significant, but none seemed meaningfully interpretable except one—a Round X Difficulty X Throwing Order interaction (F = 3.70, p < .05). The interaction pattern revealed that participants showed a greater reduction in SCEs across rounds in which their co-participant’s performance came last on the preceding round (recall that throwing order alternated across rounds). This is not surprising given that recently watching the other person’s throws would tend to discourage a participant from being egocentric (i.e., from neglecting performance on those throws) and would consequently reduce SCEs.
2
Although one of these coefficients exceeds 1.0, these are, in fact, standardized (as opposed to unstandardized) regression coefficients. Contrary to a widespread assumption, there are occasions in which standardized regression coefficients greater than 1.0 can legitimately occur (Deegan, 1978).
3
The usefulness of comparing regression weights for self and other estimates might be questioned if people’s estimates for others were highly regressive and showed little variability. However, as discussed, differential regressiveness was relatively small and did not change as a function of round or difficulty. Regression analyses also standardize all predictors, thus equalizing variances and eliminating the attribution that one predictor is more influential purely because it has greater variance. Moreover, our main focus is primarily on how the weights for self and other estimates (and differences between those weights) changed over rounds.

We also conducted a different type of analysis examining egocentric weighting and differential regressiveness. Specifically, we regressed likelihood estimates onto S-O difference scores in Step 1, with self scores entered in Step 2. If self scores are significant predictors above and beyond S-O scores, then this is evidence for egocentric weighting. First, the S-O scores predicted likelihood estimates overall, β = .53, t = 15.43, p < .01, such that participants who felt they would outscore their competitor had higher likelihood estimates. Importantly, self scores accounted for unique variance in likelihood estimates beyond S-O scores, β = .30, t = 8.56, p < .01. Further, if the contribution of self scores shrinks across rounds, this would indicate a reduction in egocentric weighting. Indeed, a linear trend analysis confirmed that the R²Δ contribution of self scores significantly dropped across rounds, β = −.92, t(5) = −4.54, p < .05.
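The two-step (hierarchical) regression described in this footnote can be sketched with simulated data; the R² increment for self scores beyond S-O difference scores is what indexes egocentric weighting here. All values are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50  # simulated participants

self_est = rng.normal(15, 4, n)
other_est = rng.normal(15, 4, n)
# Egocentric weighting by construction: self scores matter far more than
# competitor scores in driving likelihood estimates.
likelihood = (50 + 2.5 * (self_est - 15)
                 - 0.5 * (other_est - 15)
                 + rng.normal(0, 5, n))

def r_squared(y, X):
    """R^2 from an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

so = self_est - other_est                          # Step 1: S-O scores
r2_step1 = r_squared(likelihood, so[:, None])
r2_step2 = r_squared(likelihood,                   # Step 2: add self scores
                     np.column_stack([so, self_est]))
r2_delta = r2_step2 - r2_step1  # unique variance for self -> egocentric weighting
print(round(r2_delta, 3))
```

Because the simulated weights are asymmetric, self scores carry unique variance beyond the symmetric S-O difference, so the R² increment is clearly positive; tracking this increment across rounds is the footnote's test for a reduction in egocentric weighting.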

