Thursday, February 27, 2020

Reading 5.

For my readings this week, I've done further research into both of my topics and created subtopics relating to managing complexity and avoiding dominant strategies: what exactly game complexity is, the different ways of measuring it, examples of dominant strategies, and how dominant-strategy solutions compare to Nash equilibrium solutions. Here are two of the subtopics I've written out, along with a couple more in a Word document:

Measures of game complexity.

↪ The four measures of game complexity are as follows: state-space complexity, game tree size, decision complexity, and computational complexity. What are they, you may ask? Well, the state-space complexity is the number of game positions reachable from the initial position of the game. When this is too difficult to calculate exactly, one can also count some illegal positions, meaning game positions that can never arise in the course of a game. The game tree size is the total number of possible games that can be played. The game tree size is usually bigger than the state space because the same position can occur in many different games, by playing the moves in different orders. The decision tree is a subtree of the game tree in which each position is labeled "player A wins", "player B wins", or "drawn", and that label can be proven correct by examining the other positions in the tree. Finally, the computational complexity of a game describes how the difficulty of a game grows as it gets particularly large. This concept doesn't really apply to any single game, but rather to games that have been generalized so they can be made bigger.
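To make the gap between state-space complexity and game tree size concrete, here's a small Python sketch of my own (not from the readings) that exhaustively enumerates tic-tac-toe. It counts every distinct game (the game tree size) and every distinct position reachable in play (the state space), with a game ending as soon as someone wins:

```python
# Enumerate tic-tac-toe to compare game tree size (distinct games)
# with state-space size (distinct reachable positions).
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board, player, seen):
    """Count complete games from this position; record every position seen."""
    seen.add(board)
    if winner(board) or '.' not in board:
        return 1  # terminal position: exactly one finished game ends here
    nxt = 'O' if player == 'X' else 'X'
    total = 0
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + (player,) + board[i + 1:]
            total += count_games(child, nxt, seen)
    return total

seen = set()
games = count_games(('.',) * 9, 'X', seen)
print(games, len(seen))  # game tree size vs. state-space size
```

Even in a game this tiny, the game tree is vastly larger than the state space, because the recursion revisits the same boards over and over through different move orders while the `seen` set only keeps each position once.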

Comparing a dominant strategy solution to the Nash equilibrium solution.

↪ A dominant strategy is a strategy that remains a player's best choice no matter how all the other players act. On the other hand, there also exists the Nash equilibrium, which doesn't describe a particular strategy but rather a mutual state in which each player knows the other players' optimal strategies and takes them into consideration when optimizing their own. In a dominant-strategy solution, each player's best strategy is unaffected by the actions of the other players, which removes the Nash equilibrium's assumption that each player knows the strategies of the others. The Nash equilibrium was introduced in a one-page 1950 article by John Forbes Nash, Jr., where he described a stable equilibrium in a multi-player situation in which no participant gains anything by changing their strategy as long as the other participants keep theirs unchanged.
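The classic illustration of both ideas is the Prisoner's Dilemma, so here's a short Python sketch of my own (the payoff numbers are the textbook sentence lengths, written as negatives so higher is better). It brute-forces the 2×2 game to show that defecting (strategy 1) is dominant, and that mutual defection is the unique Nash equilibrium:

```python
# Prisoner's Dilemma payoffs: strategy 0 = cooperate, 1 = defect.
# Entries are (player 0's payoff, player 1's payoff); negatives = years served.
payoffs = {
    (0, 0): (-1, -1),
    (0, 1): (-3,  0),
    (1, 0): ( 0, -3),
    (1, 1): (-2, -2),
}

def is_dominant(player, strategy):
    """True if `strategy` is best for `player` against every opponent choice."""
    for opp in (0, 1):
        for alt in (0, 1):
            profile = (strategy, opp) if player == 0 else (opp, strategy)
            alt_profile = (alt, opp) if player == 0 else (opp, alt)
            if payoffs[profile][player] < payoffs[alt_profile][player]:
                return False
    return True

def nash_equilibria():
    """A profile is a Nash equilibrium if no player gains by deviating alone."""
    eq = []
    for a in (0, 1):
        for b in (0, 1):
            best_a = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in (0, 1))
            best_b = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in (0, 1))
            if best_a and best_b:
                eq.append((a, b))
    return eq

print(is_dominant(0, 1), nash_equilibria())
```

Notice that `is_dominant` never needs to know what the opponent will actually do, while `nash_equilibria` checks each player's choice against the other's, which is exactly the difference between the two solution concepts described above.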


