Replies: 6 comments 13 replies
-
Can you please confirm whether this is an issue in version 4.0.0? I think a bug was introduced since then and want to see if that's related. See #155
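For anyone comparing versions, a quick way to check what's installed, using only the standard library (this assumes nothing about openskill's internals):

```python
# Print the installed openskill package version (Python 3.8+).
from importlib.metadata import version

print(version("openskill"))
```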
-
Hi! Now that you mention it, I think I'm also experiencing anomalous behavior with ties: a player gains less conservative skill when tying for first place than when finishing second alone, which doesn't make much sense. Here's an example:
Tied for 1st place:
I'm attaching my OpenSkill version for reference:
Name: openskill
Let me know what you think!
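A minimal sketch of that comparison, assuming the openskill.py 5.x model API (`PlackettLuce`, `model.rating`, `model.rate(..., ranks=...)`, `rating.ordinal()`); the three default players here are made up, not your actual data:

```python
# Compare the tracked player's conservative-skill (ordinal) change when
# tied for 1st versus finishing 2nd alone, using three default players.
from openskill.models import PlackettLuce

model = PlackettLuce()

def delta_ordinal(ranks):
    # Fresh default ratings each time; "p" is the player we track.
    p, a, b = (model.rating(name=n) for n in ("p", "a", "b"))
    before = p.ordinal()
    [[p], [a], [b]] = model.rate([[p], [a], [b]], ranks=ranks)
    return p.ordinal() - before

print("p tied for 1st:", delta_ordinal([1, 1, 2]))
print("p 2nd alone:   ", delta_ordinal([2, 1, 3]))
```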
-
Hi! Have you identified this behavior as a bug? Is the solution for now to use a model other than Plackett-Luce? Thanks
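In case it's useful while waiting for an answer: under the openskill.py 5.x API, trying a different model is a one-line change. The alternative class below is one of the models the library ships; whether any of them sidesteps this behavior is exactly the open question, so treat this as a sketch, not a recommendation:

```python
# Same rating flow with a different model; only the class name changes.
from openskill.models import BradleyTerryFull  # or ThurstoneMostellerFull, etc.

model = BradleyTerryFull()
teams = [[model.rating(name=str(i))] for i in range(4)]
teams = model.rate(teams, ranks=[1, 2, 3, 4])
print([t[0].ordinal() for t in teams])
```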
-
**"Hi! I've been testing different beta values, but I’m not convinced by the results. After rereading the conversation, I think there may have been a misunderstanding. As I mentioned, my matches involve individual players, not teams. Let’s imagine a Battle Royale game like Fortnite. Is it easier to win a match with 20 players or with 100? (Assuming equal skill levels.) What’s happening is that the system is penalizing the winner in the match with 100 players. Here are some examples: Match with 8 players (various skill levels): Skill changes after the match: Mixo (Rank 1): ΔMu = 0.613, ΔSigma = -0.000, ΔSkill = 0.613 Match with 4 players: Cris: Win probability = 16.05% Skill changes after the match: Issue: Any insights into this behavior? Best regards!"** |
-
Hi! I'm attaching a test script with current player data from my database.
With the following output:
=== Simulation: 2 players ===
Rating changes after the match:
=== Simulation: 4 players ===
Rating changes after the match:
=== Simulation: 6 players ===
Rating changes after the match:
=== Simulation: 8 players ===
Rating changes after the match:
We can observe that in a 2-player match, Spainer, with a 44% chance of winning, gains 0.238 skill points. In a 4-player match, Spainer's win probability drops to 24%, but he gains fewer points (0.222). In an 8-player match, his win probability drops further to 14%, yet he gains only 0.176 points. Meanwhile, Spawn, who initially had a higher probability of winning in the 2-player match, starts gaining more points than Spainer in the 8-player match.
If I keep adding more players, Spainer (or any player in 1st place) would progressively lose points, making placement in a Battle Royale-type game irrelevant: finishing 1st or 10th wouldn't matter much. If this is not a bug, the algorithm favors smaller matches because points are distributed among all participants; in larger matches, there's no significant advantage for the winner. I honestly don't know how to fix this, as I'm not a mathematician.
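Since the script contents didn't come through above, here is a stripped-down sketch of the same experiment, again assuming the openskill.py 5.x API; it uses fresh default ratings rather than the real database values, so the absolute numbers will differ, but the winner's shrinking ordinal gain should be visible:

```python
# Winner's conservative-skill (ordinal) gain as lobby size grows,
# with all players starting from identical default ratings.
from openskill.models import PlackettLuce

model = PlackettLuce()

for n in (2, 4, 6, 8):
    teams = [[model.rating(name=str(i))] for i in range(n)]
    before = teams[0][0].ordinal()
    rated = model.rate(teams, ranks=list(range(1, n + 1)))
    print(f"=== Simulation: {n} players ===")
    print(f"1st place ordinal delta: {rated[0][0].ordinal() - before:+.3f}")
```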
-
Hello again! One thing I didn't take into account when modifying the compute method is that, while it's true that the first-place winner gains more points when there are more players, the last-place player loses significantly more points in larger matches. This also seems counterintuitive, because with more players winning becomes harder, so the penalty for losing should arguably be lower. I'm not a mathematician by any means, and I'm doing everything through trial and error, tweaking things here and there. I don't know if I'm doing something wrong or... Vivekjoshy, have you implemented this system in free-for-all, battle royale, or similar formats? It seems a bit incompatible with these, especially when the number of players varies (e.g., 3-player vs. 6-player matches) and when skill levels differ. Best regards.
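For the record, the last-place side can be measured the same way with the stock model (not the modified compute method), under the same 5.x API assumptions as the earlier sketches:

```python
# Last-place player's ordinal change as the match grows, stock model.
from openskill.models import PlackettLuce

model = PlackettLuce()

for n in (3, 6, 10):
    teams = [[model.rating(name=str(i))] for i in range(n)]
    before = teams[-1][0].ordinal()
    rated = model.rate(teams, ranks=list(range(1, n + 1)))
    delta = rated[-1][0].ordinal() - before
    print(f"{n} players, last place ordinal delta: {delta:+.3f}")
```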
-
"I've created a system for ranking my board game matches using the Plackett-Luce model. I want to know if the behavior I'm observing is expected. As I understand it, the algorithm assigns the win percentage and conservative skill gain based on the expected result and the level of surprise.
In my ranking, there are players with different numbers of matches and skill levels. I use a match simulator, which, given the current ranking, simulates a match. However, adding more players to the match reduces the conservative skill gain for the first-place player. Here are some examples:
Match with 4 players
Spainer (Rank 1): ΔMu = 0.203, ΔSigma = 0.001, ΔSkill = 0.200
Cris (Rank 2): ΔMu = 0.408, ΔSigma = -0.007, ΔSkill = 0.429
Rubén (Rank 3): ΔMu = -0.189, ΔSigma = -0.003, ΔSkill = -0.178
Maca (Rank 4): ΔMu = -0.760, ΔSigma = -0.027, ΔSkill = -0.679
Match with 3 players
Spainer (Rank 1): ΔMu = 0.215, ΔSigma = 0.000, ΔSkill = 0.215
Cris (Rank 2): ΔMu = 0.425, ΔSigma = -0.014, ΔSkill = 0.466
Rubén (Rank 3): ΔMu = -0.509, ΔSigma = -0.004, ΔSkill = -0.497
Observation:
Spainer earns more conservative skill points in the 3-player match than in the 4-player match, despite having a higher win percentage in the 3-player game (41%) than in the 4-player game (33%).
If I increase the number of players to 9, the win probability drops to 13%, but the conservative skill gain decreases even further to 0.150.
Question:
Is this the expected behavior of the algorithm? Why does adding more players result in a lower skill gain for the top-ranked player?"
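For readers following the ΔSkill numbers above: the "conservative skill" is what openskill.py calls the ordinal, by default mu minus three sigma, which is why ΔSkill = ΔMu - 3·ΔSigma holds in every line of the examples. A minimal illustration, assuming the 5.x API:

```python
# Conservative skill ("ordinal") is mu - 3 * sigma by default.
from openskill.models import PlackettLuce

model = PlackettLuce()
r = model.rating(mu=25.0, sigma=25.0 / 3.0)
print(r.ordinal())             # 25 - 3 * (25/3) = 0.0
print(r.mu - 3 * r.sigma)      # identical, computed by hand
```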