This video is about how to see mechanism design as an optimization problem. In particular, we can understand the task of mechanism design as an optimization that asks: find me the best mechanism, according to whatever definition of "best" I'm interested in, given various constraints on how the mechanism has to behave and what kinds of things it's allowed to do. So in this lecture, I'm going to tell you about some of the common constraints that we impose in mechanism design, and some of the different notions of "best" that we try to optimize under those constraints.

Let me begin with something we've already spoken about a bit: truthfulness. But now I'll define it in the context of transferable utility mechanisms. A transferable utility mechanism is said to be truthful if it is a direct mechanism, and if for every agent and for every valuation that agent might have, agent i's equilibrium strategy is to disclose truthfully, that is, to make his declaration equal to his actual valuation.

Next, I'll say that a mechanism is strictly Pareto efficient, or for short just efficient, if in equilibrium it selects a choice x with the following property: for all joint valuation vectors the agents might have (notice that I'm not writing a v hat here; I mean the actual valuations the agents hold), and for all other choices x prime that I might make, the sum of the agents' valuations for x is at least as big as the sum of the agents' valuations for x prime. In other words, a mechanism is efficient if it selects the choice that maximizes the sum of the agents' actual valuations, disregarding the payments they'll have to make.

Now, you might notice that in game theory there's a notion of Pareto efficiency that seems different from this. It doesn't talk about summing up valuations; instead it says something about not being Pareto dominated by any other cell in the game matrix, and that seems pretty different. Typically, when one defines the game-theoretic notion of Pareto efficiency, we make a big deal of the fact that we're not just summing up utilities, because utilities are incomparable across agents. Here it sounds like I'm saying Pareto efficiency allows me just to sum things up, so how am I getting away with that? The answer is that we can think of the mechanism itself, the party collecting the payments, as one of the agents. If we include it when we reason about efficiency, then the payments don't matter, because they always sum to zero across all of the agents. So what really matters for aggregate utility is the choice that gets made. If I ever pick an outcome that doesn't maximize the sum of the agents' valuations, there is always a different outcome in which that sum is larger, so there's more value to go around, and I can then shuffle the payments around in such a way that every agent is at least as happy as before. Intuitively, that's possible because there's more money on the table. If all of that didn't make sense to you, don't worry about it.
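To make those two definitions concrete, here is one way to write them down, using \(\hat{v}_i\) for agent i's declaration, \(s_i\) for his equilibrium strategy, and \(X\) for the set of choices; this particular notation is my reconstruction, but the ideas are exactly the ones just described. Truthfulness says the mechanism is direct and

\[ \forall i,\ \forall v_i:\qquad s_i(v_i) = v_i, \]

and strict Pareto efficiency says the choice x selected in equilibrium satisfies

\[ \forall v,\ \forall x' \in X:\qquad \sum_i v_i(x) \;\geq\; \sum_i v_i(x'). \]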
What you really need to know for the purposes of mechanism design is that it does make sense to think of Pareto efficiency as maximizing the sum of everyone's values, not their payments, just their values. Let me also note that we call this "efficiency," but there are other notions of efficiency we might care about, such as computational efficiency, how long a mechanism actually takes to run. In contexts like that, we call this notion of strict Pareto efficiency "economic efficiency" to distinguish it from the others. We also call it "social welfare maximization," for the obvious reason that we're picking the outcome that maximizes the sum of the agents' valuations. And I've said this before, but just to draw your attention to it one last time: here I'm talking about the agents' actual valuations.

Here's another constraint we might care about: budget balance. I'll say that a transferable utility mechanism is budget balanced when, for all joint valuations the agents might have, and where s is the equilibrium strategy profile they follow, their equilibrium strategies given their valuations yield payments that sum to 0. In other words, regardless of what the agents' types are, in equilibrium the mechanism collects and disburses the same amount of money; it neither makes nor loses money as long as the agents follow their equilibrium strategies.

Now, you might wonder why I would want the mechanism not to make any money. The answer is that I might want the money only to serve to get around impossibility theorems, when all I really care about is that the mechanism makes good decisions. On the other hand, I might be running a mechanism and be perfectly happy to make money, but be concerned about ever losing money. In that case I would want a relaxed version of this condition called weak budget balance. The only difference is that I replace the equality with an inequality: in equilibrium it's never the case that I lose money, but I'm quite happy to make a profit. I can make other variations too. For example, it might be too strong to require that no matter what the agents' types are I never lose (or never gain) money; I might prefer to speak ex ante, in expectation over the agents' valuations. That is, if this mechanism is going to be run many times, I don't really care what happens on any one run, but I care about my long-run profit or loss. So I'd say that, in expectation over the agents' valuations, the mechanism is strictly budget balanced, or if I prefer, weakly budget balanced.
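In the same notation, with \(s(v)\) the equilibrium strategy profile and \(p_i\) the payment collected from agent i (again, a reconstruction in my notation rather than a transcription of the slide), the three flavors of budget balance just described are

\[ \forall v:\quad \sum_i p_i(s(v)) = 0 \qquad \text{(strict)}, \]
\[ \forall v:\quad \sum_i p_i(s(v)) \;\geq\; 0 \qquad \text{(weak)}, \]
\[ \mathbb{E}_v\left[\sum_i p_i(s(v))\right] = 0 \qquad \text{(ex ante)}. \]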
Here's yet another constraint we often care about: individual rationality. The idea is to encode the fact that agents might have a choice about whether to participate in the mechanism. Formally, they don't have a choice: they have to choose some action in the mechanism, payments get imposed on them, and they have to pay whatever is imposed. But in practice, we might want a condition on the mechanism which says that agents would actually have liked to choose to be part of it. In particular, we want to say that your expected utility for participating in the mechanism is no worse than your expected utility from staying home; in other words, your expected utility is weakly positive.

For example, I might care about this ex interim. Recall that ex interim means that I know my own value, but I don't know anybody else's. Taking that idea, that I'd be happy to participate in this mechanism knowing my own value but not knowing everyone else's, here's how I would formalize it. For all agents, and for all valuations that agent might have, I look at the expectation over everybody else's valuations, conditioned on my own. That is, I know the distribution over valuations, so once I observe my value I can update my beliefs about everyone else's values, taking my own value into account; for example, our values might be correlated, and so I might believe different things about your value once I know mine. Knowing all of that, I require that agent i's value for the choice that gets made in equilibrium, given both his own equilibrium strategy and everybody else's, minus the payment he makes, is greater than or equal to 0. And again, notice this is all in equilibrium. I'm not saying there's no way to lose in this mechanism; maybe if everyone else does some terrible thing, I could end up losing on average. But if everyone follows the equilibrium strategy, then no matter what my type is, on average across everybody else's types, I come out ahead or at least break even. That's what ex interim individual rationality says.

Of course, I might want something stronger. Ex post individual rationality says I don't want to average over everybody else's types, either because I don't believe I actually know the distribution over their types, or because I just want a stronger condition that says I never want to be in a situation where, in equilibrium, I would lose. So I get rid of the expectation over types, and I change the quantification over my own value into a quantification over everybody's values at once: for all agents and for all joint valuations they might have, pointwise on that set of valuations, my value for the choice that gets made, minus my payment, is always greater than or equal to 0. This is a stronger condition: if a mechanism is ex post individually rational, it is always ex interim individually rational, but the reverse is not necessarily true.
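Written out in the same reconstructed notation, the two individual rationality conditions are

\[ \text{ex interim:}\quad \forall i,\ \forall v_i:\qquad \mathbb{E}_{v_{-i} \mid v_i}\Big[ v_i\big(x(s_i(v_i), s_{-i}(v_{-i}))\big) - p_i\big(s_i(v_i), s_{-i}(v_{-i})\big) \Big] \;\geq\; 0, \]
\[ \text{ex post:}\quad \forall i,\ \forall v:\qquad v_i\big(x(s(v))\big) - p_i\big(s(v)\big) \;\geq\; 0. \]

The implication runs one way: ex post requires the inequality to hold at every realization of the other agents' values, which forces the conditional expectation to be nonnegative, so ex post implies ex interim but not conversely.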
Here's one last constraint to talk about: tractability. So far we've discussed economic conditions on the nature of the choice the mechanism makes or the payments it imposes. Here, instead, I want to say that the actual functions, the x function that selects the choice and the p function that sets the payments, are each computable in polynomial time. It turns out that some of the mechanisms we're going to be interested in don't have this property: the computational problem of figuring out what choice to make and how much to charge everybody can be hard. What this means is that for large inputs we might simply be unable to run the mechanism; we wouldn't have enough time on any reasonable computational device to find the answer. And if that's true, then there's something wrong with the mechanism. So we might like a restriction that guarantees we can actually evaluate these functions tractably, and a common definition of tractability in computer science is the guarantee that something can be done in polynomial time.

Okay, so much for constraints we want to impose; now let's think about objective functions we might give our mechanism. Here's one: I might actually like revenue. Maybe I'm selling something and I want a mechanism that does the best it can subject to various constraints: perhaps it ensures individual rationality so people will want to participate, perhaps it's weakly budget balanced so I know I never lose money, perhaps it's truthful. Once those constraints are satisfied, there may still be a space of different mechanisms satisfying all of them, and within that space I might want to pick the mechanism that gives me as much revenue as possible. I'll call that mechanism revenue maximizing. A mechanism is revenue maximizing when, among all functions x and p that satisfy whatever other constraints we care about, it selects those that maximize the expectation, over the agents' types, of the sum of the payments under the agents' equilibrium strategy profile. Notice that here I'm using the theta notation for types instead of values, but the sentiment is the same.

The next objective function we might care about is actually the opposite: to minimize the amount of revenue the mechanism collects. This makes sense if I don't actually want the mechanism to make money; I just want it to make a good decision, and the only reason I have payments at all is that, as you've already seen from impossibility theorems, quasilinear mechanism design is easier than mechanism design without payments. I might most prefer strict budget balance, but there are cases where strict budget balance is impossible to satisfy. So I might impose weak budget balance as a constraint, saying at least I never want to lose money, and then aim to collect as little as possible. I can formalize this as follows; note that, just for fun, I'll do this as a worst case over valuations rather than an expectation like on the previous slide. A transferable utility mechanism is revenue minimizing when, again among all the functions x and p that satisfy whatever other constraints we have, it selects those that minimize the maximum, over the agents' valuations, of the sum of the payments under their equilibrium strategy profile. In other words, in the worst case, worst from the point of view of the minimization problem, meaning the valuations that cause me to collect the most revenue, I want to make that number as small as possible. And of course I could do this with an average as well.
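In symbols (again my reconstruction, matching the narration), the two revenue objectives choose, among the x and p satisfying the other constraints, those that

\[ \text{maximize}\quad \mathbb{E}_\theta\left[\sum_i p_i(s(\theta))\right] \qquad \text{(revenue maximization)}, \]
\[ \text{minimize}\quad \max_v\ \sum_i p_i(s(v)) \qquad \text{(revenue minimization)}. \]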
Okay, another thing I might care about is fairness: I might want my mechanism to be as fair as possible. That might be something I want to say, but it turns out to be pretty hard to formalize, so let me give you an example. Consider these two cases and ask yourself which of them you think is fair. First, what about a case where I charge all of the agents $100 and make a choice that everybody hates? Second, consider a case where I charge all agents $0 and make a choice that some of them hate and some of them like. Well, there's a sense in which the first one is perfectly fair, because it's the same for everybody, but we still have the sense that the second one is better. We wouldn't want a notion of fairness that talks only about equality, because doing something terrible to everybody doesn't seem like a good way to design a mechanism. So I'd like some definition of fairness that nevertheless says the second case is better than the first. There are a variety of definitions, but here's one that I like: maxmin fairness. It says: look across all the agents at how they do in the mechanism, find whoever is least happy, and make that person as happy as possible. So I look at the person in society who gets the worst deal in this mechanism, and have things become as good as possible for that person. I recognize there's going to be some inequality in my mechanism, but I focus on the worst outcome and make it as good as I can. Formally, a mechanism is maxmin fair when, among the functions x and p that satisfy whatever other constraints I have, it selects those that maximize the expectation, over the agents' valuations, of the minimum across agents of the utility each agent obtains. So I want to maximize the expected value of the worst case.

Here's the last thing I want to talk about: price-of-anarchy minimization. Before, when I talked about revenue minimization, I was in effect relaxing strict budget balance: in a case where I couldn't have strict budget balance, I took the idea of collecting little revenue and moved it into the objective function. I can do the same thing with efficiency. It might be that strict efficiency is impossible, or too costly in terms of other things I care about in the mechanism, so I put it into the objective function and ask for a mechanism that is as close to efficient as possible. In particular, I may want to minimize the worst-case ratio between the optimal social welfare that could be achieved and the social welfare the mechanism actually achieves. This ratio is called the price of anarchy: it's the ratio between the very best thing that could happen and the thing that actually happens when agents act strategically in equilibrium. Formally, a transferable utility mechanism minimizes the price of anarchy when, among all the functions x and p that satisfy the other constraints, it selects those that minimize this ratio in the worst case. The numerator is the maximum, over all the choices that could be made, of the sum of the agents' valuations; that's the best social welfare I could have. I divide that by the sum, over all the agents, of the values they actually obtain in equilibrium for the choice that actually gets made. And I look at the worst case over all valuations, the set of valuations that makes this ratio as bad as it can be, and I try to minimize it by choosing the x and p that make that worst-case price of anarchy as small as it can be.
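These last two objectives can be written the same way. Maxmin fairness selects, among the feasible x and p, those that

\[ \text{maximize}\quad \mathbb{E}_v\left[\ \min_i\ \Big( v_i\big(x(s(v))\big) - p_i\big(s(v)\big) \Big) \right], \]

and price-of-anarchy minimization selects those that

\[ \text{minimize}\quad \max_v\ \frac{\max_{x' \in X}\ \sum_i v_i(x')}{\sum_i v_i\big(x(s(v))\big)}, \]

where the numerator is the best achievable social welfare and the denominator is the welfare actually realized in equilibrium.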
So those are all of the constraints and objective functions that I'm going to define for you today. Overall, you can imagine just listing them all and essentially having an optimization problem that says: find the x and the p that maximize or minimize whatever objective function I care about, subject to various constraints on how x and p behave, in terms of things like budget balance, individual rationality, truthfulness, and so on.
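Schematically, then, the whole lecture amounts to one optimization problem of the following form; this is a sketch of the framing, not a formal program from the slides:

\[
\begin{aligned}
\max_{x,\,p}\quad & \text{objective}(x, p) \qquad \text{(revenue, fairness, price of anarchy, \ldots)} \\
\text{subject to}\quad & \text{truthfulness, efficiency, budget balance,} \\
& \text{individual rationality, tractability, \ldots}
\end{aligned}
\]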