Hi all, it's Matt again. Let's take a look at an example that will illustrate some of our transferable utility mechanisms. In particular, let's think of a fairly canonical example where we've got a set of citizens who are trying to decide between undertaking a project or not. This might be building a library, building a new school, or, if it's a club, deciding whether to make some sort of investment. So there's a set of people, and N is just our set of individuals. The outcomes we'll keep track of as either 0 or 1. So 0 means the status quo, you don't do anything, and 1 means you undertake the project, make the investment, and so forth.

The idea now is that the types represent somebody's private value, in this particular example, for the two possible options. So this is a valuation function: each person has a utility for doing nothing and a utility for what happens if the project is undertaken, which may involve taxes and other costs, and possibly benefits. For instance, one possibility is that somebody has a utility of 0 for doing nothing; we can normalize it to 0. And if the project were undertaken, this is a person who likes the project, they would get 4 utils. So given that vi(0) is 0 and vi(1) is 4, we can just represent that person's type by the vector (0, 4). Another possibility is somebody who doesn't like the project; that type could be represented by (0, -2). So a person's type is just represented by these valuations: how much they value not doing the project and how much they value doing it. These valuations can also be thought of as transferable, in the sense that we can pay people to compensate them. So if you wanted to compensate the (0, -2) person for the pain they would endure from building this thing, you would have to pay them 2 utils to make them happy if you undertake it. And the (0, 4) person may be willing to pay up to 4 utils to have the project built.

Okay, so in this setting we can think of a direct mechanism: people are going to announce their types. To keep things simple, suppose people either like the project and get 4 utils from it, or don't like it and get -2, so there are two possible type vectors, the (0, 4) type and the (0, -2) type. In general you could have lots of types, but for this example, let's just keep it simple. One possible mechanism would be to say: okay, look, if a majority of people announce (0, 4), if a majority of people like the project, then we'll build it. If a majority of people don't like it, we won't. So we'll just take a majority vote, and then we'll build it or not. In terms of our notation for a mechanism: if a majority of people announced the (0, 4) type, then x, the outcome, is going to be 1, you undertake the project, and if we're not doing any compensation, then everybody gets a payment pi of 0. So basically, we just go ahead, we undertake the project, and there are no transfers or subsidies back and forth between individuals.
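To make that concrete, here's a minimal sketch in Python of this majority-vote direct mechanism. The lecture doesn't specify any implementation, so the function name, the tuple encoding of types, and the sign convention for payments (positive means paying in) are all illustrative choices.

```python
# Minimal sketch of the majority-vote mechanism: build the project iff a
# strict majority of announced types are (0, 4); nobody pays anything.
# All names here are illustrative, not from the lecture.

def majority_mechanism(announcements):
    """Return (x, payments): outcome x in {0, 1} and a zero payment
    for every agent (this mechanism has no transfers)."""
    n = len(announcements)
    m = sum(1 for v in announcements if v == (0, 4))  # count of supporters
    x = 1 if m > n / 2 else 0                         # strict majority rule
    payments = [0] * n                                # no compensation
    return x, payments

# Example: 3 supporters, 2 opponents -> project built, no transfers,
# so the two (0, -2) types end up with utility -2 each.
reports = [(0, 4), (0, 4), (0, 4), (0, -2), (0, -2)]
print(majority_mechanism(reports))  # (1, [0, 0, 0, 0, 0])
```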
In contrast, if the majority of people announce (0, -2), then we don't undertake the project and everybody ends up, again, with a payment of 0. So this is a simple example: just a straight majority vote over the project. What's interesting about this mechanism? Well, it's actually a dominant strategy in this particular example to announce your type truthfully: I'm better off voting against the project if I don't like it, and voting for it if I do. So truth is an equilibrium.

But what are the downfalls? Sometimes some of the people get negative utility: the project is built, and there's a bunch of people stuck at -2. And also the choice doesn't maximize the total utility in this society, right? This mechanism isn't accounting for the fact that the (0, 4) types like the project twice as much as the (0, -2) types dislike it.

So one possibility would be to say: okay, look, let's try a different mechanism. Instead of requiring a majority, if a third of the population likes it, then we'll go ahead and build it. Now we're taking into account that the supporters like it twice as much as the opponents dislike it, so we'll weight their votes twice as much, and we only need a third of the population to announce the (0, 4) type in order to have the project undertaken. Indeed, if m is the number of (0, 4) announcements, building the project raises total utility exactly when 4m - 2(n - m) > 0, which holds exactly when m > n/3.

So we could change the mechanism, right? Again, people announce their types. Now, letting m be the number of people who voted for it, the (0, 4) types: if m is bigger than a third of the population, then we go ahead and undertake the project. But now we're also going to compensate the people who don't like it, since there's a bunch of people suffering from it. So each of the people who announced the positive type, the people voting for the project, is going to have to make a payment of 2(n - m)/m. That's exactly enough to cover the payments that we make to the n - m other people who voted against the project, each of whom receives a payment of 2. So this is a mechanism where we're allowing the people who are for the project to have a stronger vote, but then they compensate the people who have pain inflicted on them. And otherwise, if we get too few votes for it, then we don't build the project and we don't make any payments, okay?

So this mechanism makes the choice that actually is best for society in terms of the overall maximization of total utility, and it compensates the people who are bearing the pain. What's the difficulty with this particular mechanism? Well, it does maximize total utility, and you never get a negative utility, but now we've lost the fact that truth is an equilibrium, right? Why have we lost it? Well, suppose, just for an example, that we're in a really, really large society, that you're a (0, 4) type, and that you think half the people might like the project and half might not. If the society is really large and everybody else is honest, then the chance that my announcement makes a difference to whether the project is undertaken is really vanishing.
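Here's a sketch of this second mechanism, continuing the illustrative Python from above. The build threshold m > n/3, the supporter payment of 2(n - m)/m, and the opponent compensation of 2 are all from the lecture; the function name and encoding are again just my choices.

```python
# Sketch of the one-third-threshold mechanism with compensating transfers:
# build iff more than a third of announcements are (0, 4); each supporter
# pays 2(n - m)/m and each opponent receives 2, so the budget balances:
# m * 2(n - m)/m == 2(n - m), exactly what the n - m opponents receive.

def one_third_mechanism(announcements):
    """Return (x, payments); positive payment = pay in, negative = receive."""
    n = len(announcements)
    m = sum(1 for v in announcements if v == (0, 4))  # supporters
    if m > n / 3:
        x = 1
        payments = [2 * (n - m) / m if v == (0, 4) else -2
                    for v in announcements]
    else:
        x = 0
        payments = [0] * n
    return x, payments

# Example: 2 supporters, 3 opponents (n = 5, m = 2 > 5/3), so we build.
# Each supporter pays 2*3/2 = 3 < 4 and each opponent nets -2 + 2 = 0,
# so nobody ends up with negative utility, and total utility 8 - 6 = 2 > 0.
x, p = one_third_mechanism([(0, 4), (0, 4), (0, -2), (0, -2), (0, -2)])
print(x, p)  # 1 [3.0, 3.0, -2, -2, -2]
```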
So the larger the society is, the tinier the chance that I have an impact on the vote, for instance if people's types are drawn independently. And then all that happens is that my announcement affects my payment, right? It affects whether I'm making a payment into the system or getting a payment out. And we're going to lose truth as an equilibrium, because I realize my vote's not going to make a difference. So if I'm a (0, 4) type, maybe I pretend to be a type who's receiving pain from the project, so that people make a payment to me instead of me having to make a payment in.

So you can begin to see that different mechanisms in these kinds of settings are going to have very different properties. Some might be good in terms of truth-telling. Some might be good in terms of maximizing overall utility. Some might be good in terms of compensating people, or balancing so that payments are balanced across a society. There's going to be a whole series of different conditions we're interested in. And the question is: can we find mechanisms that have nice incentive properties, nice efficiency properties, nice balance properties, and so forth? Can we get all of those at once? We'll take a look and see that some mechanisms come pretty close to doing that. It's not always possible to get everything, but in a lot of contexts, we'll have mechanisms that do pretty well.
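To see the incentive failure numerically, here's a small check that continues the sketch above (it reuses the illustrative one_third_mechanism function). It fixes everyone else's announcements so that one agent is not pivotal, which is the situation the lecture describes in a large society.

```python
# Why truth fails under the one-third mechanism in a large society:
# compare a non-pivotal (0, 4) type's payoff when truthful vs. lying.

def utility(true_type, own_report, others):
    """Net utility = true value of the outcome minus payment made,
    under the one_third_mechanism sketched above."""
    x, payments = one_third_mechanism([own_report] + others)
    return true_type[x] - payments[0]

# 1000 agents, roughly half supporters: the project is built whether or
# not this one agent misreports, so only the payment changes.
others = [(0, 4)] * 500 + [(0, -2)] * 499
print(utility((0, 4), (0, 4), others))   # truthful: 4 - 2*499/501 ~= 2.01
print(utility((0, 4), (0, -2), others))  # lying:    4 + 2 = 6 -> lying pays
```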