So, as we've talked about hashing, let's talk about some big-picture items. One thing is, we might want to determine which strategy is better to use. Do we want to do separate chaining, where we have a linked list of nodes? Or do we want to actually put the data inside the table itself? We find that depending on the application you're using a hash table for, the right answer is different. If you have large pieces of data that are going to take a lot of time to copy inside the array itself, you absolutely do not want to store them in the array. You'd rather use a pointer in a linked list to store that data somewhere else. So, if we're thinking about having big records, we're going to want to use something like separate chaining to handle those big records.

Using separate chaining for big records means the other solution is probably better in some other way. In fact, double hashing is great for raw structure speed. Remember, array operations are memory-optimized because the elements are sequentially located in memory. So, if we care only about raw structure speed and we know our data itself is going to be fairly small, we can use a closed hashing technique such as linear probing or double hashing to get a really efficient data structure. For structure speed, we're going to want to do something like double hashing.

So, what structure does the hash table replace? Maybe that's the next question we ask. That structure is a dictionary. An AVL tree also provides a dictionary implementation. As we discussed for AVL trees, AVLs do a great job with range finding and nearest neighbor. A hash table does an absolutely terrible job with that. You can't ask what value is near 42, because 42 hashes to one particular spot in the array, and the values around 42 may have nothing to do with where 42 lands in the actual final array, because 41 and 43 may hash to entirely different spots in the array.
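To make the two strategies concrete, here's a minimal sketch (not code from the lecture) of both collision-handling approaches. The class names and sizes are illustrative choices, and neither table resizes, so the probing table assumes it never completely fills up.

```python
class ChainedTable:
    """Separate chaining: each slot holds a chain (list) of key/value pairs.
    Good for big records -- the table stores references, not copies."""
    def __init__(self, size=8):
        self.slots = [[] for _ in range(size)]

    def insert(self, key, value):
        chain = self.slots[hash(key) % len(self.slots)]
        for i, (k, _) in enumerate(chain):
            if k == key:            # key already present: update in place
                chain[i] = (key, value)
                return
        chain.append((key, value))  # otherwise append to this slot's chain

    def find(self, key):
        for k, v in self.slots[hash(key) % len(self.slots)]:
            if k == key:
                return v
        return None


class ProbedTable:
    """Closed hashing with linear probing: the data lives in the array
    itself, so lookups walk sequential memory -- fast for small records.
    (No resizing here; the table is assumed to never be full.)"""
    def __init__(self, size=8):
        self.keys = [None] * size
        self.vals = [None] * size

    def insert(self, key, value):
        i = hash(key) % len(self.keys)
        while self.keys[i] is not None and self.keys[i] != key:
            i = (i + 1) % len(self.keys)   # collision: probe the next slot
        self.keys[i], self.vals[i] = key, value

    def find(self, key):
        i = hash(key) % len(self.keys)
        while self.keys[i] is not None:
            if self.keys[i] == key:
                return self.vals[i]
            i = (i + 1) % len(self.keys)   # keep probing past other keys
        return None
```

Swapping the probe step `i + 1` for a second hash function would turn the linear-probing sketch into double hashing; the array-based layout, and its cache friendliness, stays the same.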
So, it's absolutely essential that we have the right algorithm for the right application, because if we need to do nearest neighbor on a hash table, we're going to see ourselves running in O of n time. So, which structure does a hash table replace? It replaces a dictionary. What constraints exist on a hash table that don't exist on BSTs? As we just went over, the constraint that really exists is that a binary search tree has a great nearest-neighbor approach, while we have no ability to do nearest neighbor in a hash table. So, when we think about which algorithm to use, we want to use a tree-like structure if we ever need range finding or nearest neighbor. We want to use a hash table if we're always going to have the exact key. If we're always doing lookups, hash tables are phenomenal. A hash table is going to run in O of 1 lookup time, while an AVL tree is going to run in O of log n lookup time. But when we want nearby neighbors, it's O of log n in an AVL tree and O of n in your hash table.

The very last bit is: why do we talk about BSTs at all? We talk about BSTs both as an introduction to the dictionary structure, and because some of the most interesting problems that we're going to solve are not going to be solvable by a hash table. A hash table is a fantastic general-purpose data structure, while an AVL tree solves particular problems really well. If all you care about is lookup, the hash table is the structure for you. We're going to start using hash tables to build even more complex algorithms as we discuss more complex algorithms next week. So, I hope you guys have enjoyed learning about hash tables. It's one of my personal favorite data structures, and I'll see you next week for a number of new videos on an entirely new data structure. I'll see you then.
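That tradeoff can be sketched in a few lines. Python has no built-in AVL tree, so this hypothetical example uses the `bisect` module on a sorted list to stand in for the tree's ordered, logarithmic lookups, and a plain `dict` to stand in for the hash table; the function names are my own, not from the lecture.

```python
import bisect

def nearest_in_ordered(sorted_keys, target):
    """Nearest neighbor in an ordered structure (stand-in for an AVL tree):
    one O(log n) binary search, then compare the two bracketing keys."""
    i = bisect.bisect_left(sorted_keys, target)
    candidates = sorted_keys[max(0, i - 1): i + 1]
    return min(candidates, key=lambda k: abs(k - target))

def nearest_in_hash(table, target):
    """A hash table gives no ordering, so nearest neighbor degrades to an
    O(n) scan over every key."""
    return min(table, key=lambda k: abs(k - target))

keys = [3, 17, 41, 43, 88]
table = {k: None for k in keys}
# Both find a nearest neighbor of 42 (either 41 or 43), but only the
# ordered structure does it without touching every key.
print(nearest_in_ordered(keys, 42))
print(nearest_in_hash(table, 42))
```

Exact-key lookup flips the comparison: `42 in table` is an O(1) hash probe, while the sorted structure still pays O(log n).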