A game can end one of three ways: win, lose, or draw. Winning will increase your rating, losing will decrease it, and drawing can do either (depending on your opponent's rating). This raises a common issue: how do we rank millions of players?

Arpad Elo, a chess master who won the Wisconsin State Championship 7 times, presented the Elo system in 1960 to provide a better way to rank players. The system uses some complicated algebra to calculate your new rating after every game. Players usually start at 1200 and can go up or down from there. At first, your rating is very volatile, since the number of games played is an important factor in determining your rank. As you play more and more games, it becomes harder to move up or down. The diminishing returns make sense: as you play more games, you should start to level out at your true ranking. Here is a rough breakdown of USCF (United States Chess Federation) rating classes:
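The algebra behind an Elo update is short enough to sketch. Here is a minimal version of the standard formula; the K-factor of 32 is just one common choice (federations often shrink it as players gain experience), and the function names are my own:

```javascript
// expectedScore: the probability that playerA beats playerB under the Elo model.
const expectedScore = (ratingA, ratingB) =>
  1 / (1 + Math.pow(10, (ratingB - ratingA) / 400));

// newRating: rating after one game. score is 1 (win), 0.5 (draw), or 0 (loss).
// K controls volatility; a higher K means bigger swings per game.
const newRating = (rating, opponentRating, score, K = 32) =>
  rating + K * (score - expectedScore(rating, opponentRating));

// An underdog win pays out more than K/2 points:
console.log(newRating(1200, 1400, 1).toFixed(1)); // → 1224.3
```

Notice how a draw against a higher-rated opponent (score 0.5, expected score below 0.5) still gains points, matching the "drawing can do either" behavior above.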

**0-1000**: Beginner - players fall for simple traps, and are missing tactical and positional understanding.

**1000-1400**: Novice - players are learning better tactics, and likely learning some of the common openings.

**1400-1800**: Advanced - the strength range of players is definitely widening, and the differences in rating mean much more. The gap in skill between 1800 and 1400 is much larger than between 1400 and 1000. This follows my earlier comment on the diminishing returns of chess rankings. Many strong club players fall into this range.

**1800-2200**: Expert - the competition at this level is fierce, sitting right below the master level. Knowledge of openings, tactics, and positioning is really cemented here.

**2200-2500**: Master - players with these ratings have been playing for a long time, and can read multiple moves ahead. There are multiple different mastery titles, including CM (Candidate Master), FM (FIDE Master), and IM (International Master).

**2500+**: Grandmasters - these players reign supreme, usually with the prestigious title of Grandmaster. There are an estimated 800+ million chess players in the world, and only 1500 Grandmasters. That is a very, very small fraction of players. The best player in the world, Magnus Carlsen, achieved a peak rating of 2882 in 2014.

How can machines play chess? Well, as you may have guessed, lots of complicated math and computer science. The first algorithm built to play chess was constructed in 1950 by the father of computer science, Alan Turing. Turing's algorithm never actually ran on a computer. The first program that could play a full-length game, beat a human, and run on a computer was Mac Hack VI, built by MIT student Richard Greenblatt in 1967. It was monumental: the system achieved a rating of 1529, while the USCF average was around 1500. Mac Hack VI became the world's first real chess engine.

Chess engines use many complex algorithms to find their desired move, but the general idea is that they try to maximize their own score while minimizing their opponent's. This is done by searching through the possible moves and then playing the move with the best score.

Imagine a chess game as a path. Each move takes you down a different path, and each board state represents the potential for billions of different paths. If you're a bit confused, picture a tree structure. It is commonly used in computer science algorithms and can help us visualize what many chess engines do.

Here, each node could represent a move, and the value of the node is the score of the move. This is known as a tree data structure, and it has proven extremely valuable for computers. One example is the file system. Each folder leads to other folders and files, and then those folders lead to more files and folders, etc. Trees for chess games are insanely large, but through different tricks, we can reduce the size of the tree which allows the computer to run through more moves.
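The file-system analogy above translates directly into code. Here is a minimal tree node sketch (the names and helper are my own, just for illustration); walking it recursively is exactly how an engine walks through candidate moves:

```javascript
// A minimal tree node: a name plus a list of child nodes.
const node = (name, children = []) => ({ name, children });

// A tiny "file system": one root folder with a subfolder and some files.
const root = node("home", [
  node("photos", [node("cat.png"), node("dog.png")]),
  node("notes.txt"),
]);

// countNodes visits every node once, the same recursive walk a chess
// engine performs over board positions.
const countNodes = (n) =>
  1 + n.children.reduce((sum, child) => sum + countNodes(child), 0);

console.log(countNodes(root)); // → 5
```

For chess, the branching factor is roughly 30+ legal moves per position, so the node count explodes exponentially with depth; that is why the pruning "tricks" mentioned above matter so much.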

Computing power and sophisticated algorithms proved to be the bottlenecks for the evolution of chess AI. As they improved, chess engines' capability and rating increased in step. Then, 30 years after the first machine beat a human, Deep Blue defeated the world champion, Garry Kasparov, in 1997. The landmark event changed chess. Humans were no longer the best at it, and the strongest known chess player today is an engine: Stockfish.

By controlling a chess engine's parameters, we can change the strength of its play. Now, when you play against different versions of Beth Harmon, you can appreciate the chess engine behind it.

Thank you for the read!

Leetcode is a must for us engineers these days. Will you encounter it in EVERY software engineering interview? Probably not. Will you encounter DS&A (Data Structures and Algorithms) to some degree in most? Probably.

Here, I will give you two quick explanations (with code) for solving a medium problem -- Find The Duplicate Number.

The problem asks for O(1) space, but that solution is far more complicated. Our job is not to memorize complicated algorithms like Floyd's tortoise and hare, but to solve problems that need to be solved. Most of the time, space is not an issue... so let's go ahead and solve this!

Sets are simple, yet powerful. A Set is a collection of UNIQUE items. Sets are their own object in JavaScript and can be created with `new Set()`. They are also backed by hash tables, which provide constant-time lookups, so they don't add to the time complexity of the problem.
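A quick demo of those two properties, uniqueness and fast membership checks (the values here are arbitrary):

```javascript
// Duplicates passed to the constructor are silently dropped.
const mySet = new Set([1, 2, 2, 3]);
console.log(mySet.size);    // → 3, the duplicate 2 was ignored
console.log(mySet.has(2));  // → true, a constant-time membership check
console.log(mySet.has(42)); // → false
```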

```
const findDuplicate = function (nums) {
  const mySet = new Set();
  for (let i = 0; i < nums.length; i++) {
    if (mySet.has(nums[i])) {
      return nums[i]; // seen before: this is the duplicate
    }
    mySet.add(nums[i]);
  }
};
```

So first off, we create a new Set that will hold all our unique numbers. Then, a for loop goes through and checks each value against the Set. Lucky for us, JavaScript has the built-in method `.has()`, which checks whether the Set contains a value and returns a boolean. If the Set has the value, we have encountered it before and found our duplicate! Our solution takes O(n) time because of the for loop, and the Set requires O(n) space.

Sorting algorithms are an entire branch of study on their own. There are many different types, and JavaScript arrays have their own built-in sort method. The best general-purpose sorting algorithms can do is O(n * logn) time, which is already worse than our previous method. However, sorting in place takes roughly constant space, O(1), so it's a tradeoff... but you're still probably better off using the Set method, as time is nearly always more important than space.
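One JavaScript gotcha before we use that built-in sort: by default, `Array.prototype.sort()` compares elements as strings, which scrambles numbers. Passing a numeric comparator avoids the surprise:

```javascript
// Default sort is lexicographic: "10" sorts before "2".
console.log([10, 2, 1].sort());                // → [1, 10, 2]

// A comparator restores numeric order.
console.log([10, 2, 1].sort((a, b) => a - b)); // → [1, 2, 10]
```

For duplicate-finding specifically, even a lexicographic sort still lands equal values next to each other, but the comparator keeps the code honest.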

```
const findDuplicate = function (nums) {
  nums.sort((a, b) => a - b); // numeric sort: O(n * log n)
  for (let i = 0; i < nums.length; i++) {
    if (nums[i] === nums[i - 1]) {
      return nums[i]; // equal neighbors: found the duplicate
    }
  }
};
```

Let's take an example array: [3, 1, 2, 3]. After sorting, we have [1, 2, 3, 3]. Now, we know that if we have a duplicate, they'll be right next to each other! Hence, the threes are now adjacent to each other at the end of the array. Now, when we run our for loop, we compare our value to the previous value to see if there is a match.

On the first run, you have 1 checked against undefined. Then 2 compared to 1. Then 3 compared to 2. And finally, we find our duplicate value on the last iteration when 3 is compared to 3! Mwahaha. It's a fun, cheeky solution.

Thanks for reading!

NOTE: You may be tempted to say the last algorithm's time complexity is O(n * n * logn), multiplying the O(n * logn) sort by the O(n) loop. However, the sort and the loop run one after the other rather than nested, so the total is O((n * logn) + n), which simplifies to O(n * logn) because the lower-order O(n) term is dropped in Big O notation.
