Today our topic is permutations, another basic combinatorial structure that we study because it has numerous applications in the analysis of algorithms. Again, continuing the orientation that we introduced in the last lecture: we're in the second part of the class, where we survey fundamental combinatorial classes and apply the mathematical techniques that we covered in the first half of the class, primarily analytic combinatorics, to study properties of these classes. Last time we studied trees, mostly as unlabeled structures, which we analyzed with ordinary generating functions. This time we're going to switch to labeled structures, namely permutations, and exponential generating functions. Again, there are many more examples in the book than we can possibly cover in lecture. What we'll try to do in lecture is cover some of the interesting ones, and then you can read more and further extend your knowledge by studying from the book.

So let's look at the basic properties of permutations. We talked about permutations before, as an example when introducing labeled structures and analytic combinatorics. Here's the colorful metaphor that we used to describe them. You have a group of N students who go to a party, and they become so inebriated that when the party's over, each winds up in a random room. The students have numbers 1 through 16 and the rooms have numbers 1 through 16, so if you arrange the room numbers in order by student, what you have is a random ordering of the numbers 1 through 16, that is, a permutation. From the standpoint of analytic combinatorics, we looked at permutations as sequences of labeled atoms where each possible ordering is different. So there are six permutations of three elements, 24 permutations of four elements, and so forth: there are N factorial permutations of N elements. From the point of view of generating functions, the counting sequence is N factorial, and the exponential generating function has a normalizing factor of N factorial, so those cancel out, and the exponential generating function for permutations is the sum over N of z^N, which is 1/(1 - z). That's just a quick review of what we talked about in the analytic combinatorics lecture.

Now, many, many interesting properties of permutations have been studied. One thing that's often of interest is what's called the inverse of a permutation. Another way to think of a permutation is as a mapping of the set of numbers from 1 through N to itself. So our student-to-room table is a mapping that sends 1 to 9, 2 to 12, and so forth, where all the numbers from 1 through N appear in the mapping. The inverse of a permutation is just the inverse of that mapping. One way to get it is to rearrange the permutation table so that it's in order by the rooms, and then flip it. The permutation maps students to rooms; the inverse of that permutation maps rooms to students. So it says that room 1 has student 7 in it, room 2 has student 13 in it, and so forth, whereas the permutation told us which room each student was in. And there are lots of direct applications of the inverse.

How do you compute the inverse? It's a very simple process, and here's the code for it (a sketch appears below, after the cipher example). The code is only slightly complicated because nowadays arrays in Java and C and other languages are zero-based.
Arrays put the first item at index 0, while we've been writing permutations with the first item at position 1. But let's look at an example and then we'll come back to the code. Suppose we have the permutation shown on the right, where 1 maps to 8, 2 maps to 1, and so forth, and we want to compute its inverse. The process is very simple. We start out with an empty array. The first entry says that 1 goes to 8, so in the inverse, 8 is going to have to go to 1, and we simply put a 1 in position 8 of the inverse. Then we just move from left to right: we put a 2 in position 1, 3 in position 3, 4 in position 7, 5 in position 6, 6 in position 2, 7 in position 9, 8 in position 4, and 9 in position 5. Since each value appears exactly once, there are no collisions in this process. In a single pass through the array, as shown in the for loop in the code at left, we can fill in the inverse, in this case the array b. Since the arrays are zero-based, we have to subtract 1 from the permutation value: when 2 goes into position 1, that's the first position in the array, which is index 0, so we subtract one. And since the index i goes from 0 to N-1 but we want to stick with the 1-through-N convention that we've been using, we add 1 to i. So that's an easy computation: in one pass we can compute the inverse of a permutation.

Here's a sample application. One of the simplest cipher mechanisms, called a substitution cipher, is to first generate a random permutation of the letters A through Z (in this case, we use a minus sign for a blank; we'll talk about generating random permutations in a minute) and then use that mapping to encrypt a message. If the message, which is called the plaintext, is "attack at dawn", then the random permutation tells us that A should map to W, T to P, T to P again, A to W again, C to L, and so forth. That gives us the ciphertext, and it's encrypted: we can send that ciphertext, and an eavesdropper couldn't figure out what the plaintext is without knowing the random permutation. So that's a simple cipher system. But the receiver of the message, in order to understand what the message says, has to have the inverse of that permutation. The permutation is a key that's transmitted, or agreed upon in some other way, but in order to decrypt, we need its inverse. The inverse tells us that W is supposed to go to A, P is supposed to go to T, and so forth, and that's just computing the inverse as on the previous slide. That gives a mechanism for converting the ciphertext back to the plaintext, so that's a very simple application of the inverse of a permutation. Actually, this type of cipher system is not often used nowadays, because it always maps each character to the same character, so an eavesdropper can figure out, from the frequency of occurrence of the letters, which letter codes to which letter. It's actually not too difficult to break a cipher system built this way using that frequency analysis. But it's useful as, maybe, a piece of a larger cipher system.
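The slides with the code aren't reproduced in this transcript, but a minimal Java sketch of the one-pass inverse computation just described might look like the following, as it might appear on a slide (the method and array names are just for illustration):

    // a[] holds a permutation of 1..N in a zero-based array: a[i] is where i+1 maps to.
    public static int[] inverse(int[] a) {
        int N = a.length;
        int[] b = new int[N];
        for (int i = 0; i < N; i++)
            b[a[i] - 1] = i + 1;   // subtract 1 for the zero-based index, add 1 to recover 1..N values
        return b;
    }

For the example just traced (1 to 8, 2 to 1, 3 to 3, 4 to 7, 5 to 6, 6 to 2, 7 to 9, 8 to 4, 9 to 5), passing a = {8, 1, 3, 7, 6, 2, 9, 4, 5} returns b = {2, 6, 3, 8, 9, 5, 4, 1, 7}, the same inverse we filled in by hand.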
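To connect this to the cipher application, here is a hypothetical sketch of how a substitution cipher might use such a permutation. The 27-character alphabet (A through Z plus '-' for blank) and the method names are assumptions for illustration, not the code from the lecture:

    // Map each character of the message through a permutation perm[] of 1..27,
    // where positions 1..26 stand for A..Z and position 27 stands for '-' (blank).
    static final String ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ-";

    public static String encrypt(String plaintext, int[] perm) {
        StringBuilder cipher = new StringBuilder();
        for (char c : plaintext.toCharArray())
            cipher.append(ALPHABET.charAt(perm[ALPHABET.indexOf(c)] - 1));
        return cipher.toString();
    }

    // Decryption is the same operation applied with the inverse permutation:
    //   encrypt(ciphertext, inverse(perm)) recovers the plaintext.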
Sometimes we work with what's called the lattice representation of a permutation. We simply make an N-by-N grid, with the permutation written along the bottom, say our example permutation. All we do is, for each entry of the permutation, put a block in the corresponding row of that column. The first column corresponds to 9, so we put a block in row 9; the second column is 12, so we put a block in row 12; then 11, 10, 5, and so forth. So we have N blocks marked in that lattice, and it's in direct correspondence with the permutation. What's interesting, and it doesn't take too much thought to convince yourself that this works, is that if you go through the rows one by one and read off which column is marked in each, the first row's mark is in column 7, the second row's is in column 13, and so forth, and what you read off is the inverse of the permutation. In the permutation, 1 maps to 9, and in the inverse, 9 maps to 1, so each block can be interpreted both ways. In fact, if you take the transpose of the lattice representation of a permutation, you get the representation of its inverse. So that's sometimes a useful or interesting way to look at permutations.

And remember, when we introduced analytic combinatorics, we talked about the cycle representation of a permutation. If student 4 winds up in room 10, then when he goes to room 10 he's going to find student 6 there, and student 6 is going to go to room 15, and so forth; eventually, following that chain, student 4 will find his room. Doing that for every position in the permutation, we saw that any given permutation is equivalent to a set of cycles. With that set-of-cycles representation, we were able to analyze interesting properties of permutations. I mention that because we're going to extend some of that analysis later on.

The main tool that we used to study permutations when we introduced analytic combinatorics, and the starting point today, is the symbolic method for labeled classes. We have a number of combinatorial constructions that are natural ways to define sets of labeled objects, including permutations and permutations with restrictions on cycle lengths and other properties. The symbolic method is a set of transfer theorems that define a correspondence between a construction and an operation on a generating function: when we build constructions, we get generating function equations. For example, here's one way to count permutations using the symbolic method. We define the class P of all permutations and its exponential generating function, which is the sum over each permutation of z to the size divided by size factorial, equivalently the sum over N of the number of permutations of size N times z^N over N factorial. The combinatorial construction says that a permutation is either empty or it's the star product of an atom and a permutation. That transfers immediately to the EGF equation P(z) = 1 + z P(z), which has the solution 1/(1 - z), and then N factorial times the coefficient of z^N in that is N factorial.
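For reference, here is that derivation written out in LaTeX, using the notation from the earlier lectures (E for the empty class, Z for an atom, and the star for the labeled star product):

    \mathcal{P} = \mathcal{E} + (\mathcal{Z} \star \mathcal{P})
    \quad\Longrightarrow\quad
    P(z) = \sum_{p \in \mathcal{P}} \frac{z^{|p|}}{|p|!}
         = \sum_{N \ge 0} P_N \frac{z^N}{N!}
         = 1 + z\,P(z)
    \quad\Longrightarrow\quad
    P(z) = \frac{1}{1-z},
    \qquad
    P_N = N!\,[z^N]\,\frac{1}{1-z} = N!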
A fine application of the study of permutations is sorting algorithms. Chapter 2 of our Algorithms book has numerous classic sorting algorithms, and these things are very efficient, well studied, widely used, and extremely useful. One reason that we've been able to develop them to the point where they're so efficient is that we have mathematical models, based on permutations, that help us understand them. We saw examples of that in the very first and second lectures, when we talked about the analysis of quicksort and mergesort. The key concept, as we saw, is that we need a model for the input to a sorting algorithm, and the one to start with is to say that the inputs are randomly ordered: they represent a random permutation.

The question is, is that a realistic model? And the answer is that it's absolutely realistic if we just apply a random permutation to the input before the sort. The input might not be in random order; in this case, it's in reverse order. But if we randomly permute it, then we absolutely have a situation where we're sorting a randomly ordered set of items, so the model is exact. If we study properties of random permutations, then we get properties of our sorting algorithms, and that's what we're going to be doing. That's what we did for quicksort, and it's what we'll do for several other sorting algorithms today.

In order to do this, though, you need to be able to generate a random permutation properly. At the beginning, people would get this wrong and generate things that looked like they were randomly permuted but did not actually generate each permutation with equal likelihood. Nowadays, we use a method articulated by Knuth, and probably known earlier: we just go from left to right and exchange each entry with a random entry to its right. It's only a few lines of code to generate a random permutation (a sketch appears at the end of this section). The index i goes from 0 to N-1; we generate an index r that is somewhere between i and N-1, that is, between the current position and the end of the array, and then exchange the element at position i with the element at position r. So again, suppose we start with this input, which happens to be in reverse order. The first step generates an index that points to N, and we exchange T and N. Next, we exchange S with L, then T with R, then R with P, and so forth. Each time, the element that gets picked is chosen at random from the ones that have not been chosen yet, and continuing that process, we get a random permutation of the input array. If we just want to generate a random permutation, we start with 1 through N as the input, and we get out a random permutation.

This process generates all permutations with equal likelihood, and that's easy to see. The first entry is equally likely to be any one of the N entries: we're picking any position from 0 to N-1 at random, so we could get any one of them, and there are N possibilities for the first entry. Similarly, there are N-1 possibilities for the second entry, and so forth. So there's a total of N factorial different permutations that can be generated, and they're all equally likely.

Those are the basic properties of permutations. Next we'll go on to analyzing some of them.
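As promised, here is a minimal Java sketch of the shuffling procedure just described. It is a sketch under the assumption that java.util.Random is an acceptable source of randomness; the class and method names are just for illustration:

    import java.util.Random;

    public class Shuffle {
        // Knuth shuffle: exchange each entry with a randomly chosen entry to its right,
        // so that a[] ends up as a uniformly random permutation of its original entries.
        public static void shuffle(int[] a) {
            Random random = new Random();
            int N = a.length;
            for (int i = 0; i < N; i++) {
                int r = i + random.nextInt(N - i);    // uniform in [i, N-1]
                int t = a[i]; a[i] = a[r]; a[r] = t;  // exchange a[i] and a[r]
            }
        }
    }

To generate a random permutation of 1 through N, initialize a[] to 1, 2, ..., N and call shuffle(a).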