JUNE 14, 2011
INET NEW YORK
15:20 Panel: New Privacy Models

>> I'd like to introduce Markus Kummer, VP of public policy at the Internet Society. He joined the Internet Society in February, and advances key policy positions on issues including privacy, cyber security and network neutrality. Previously he was the executive coordinator of the Secretariat supporting the U.N.'s Internet Governance Forum, and he has extensive experience with Internet policy at the global, regional and national levels.

>> MARKUS KUMMER: Thank you for the introduction. Good afternoon. It's my pleasure to chair this panel of distinguished experts. The title is rather long and complex: new privacy models in an age of mobility, social networking and cloud computing; how can we encourage new technology approaches that balance on-line privacy, security and reliability while also fostering innovation? Is this possible? There are more questions following that, so I will not go into them. Luckily I have experts who will be able to explain what it means.

Let me offer a few thoughts in introducing this discussion. First of all, what is privacy? I would like to remind the audience that we don't have a universally agreed definition of privacy. There are big cultural, social and, I think, generational differences regarding privacy. I come from Europe where, on the whole, the view is held that European standards of privacy are much higher than American standards. So I'll be interested in hearing from the panelists how they see the situation. Nevertheless, the right to privacy is recognized in the Universal Declaration of Human Rights, which means all the nations of the earth, all members of the United Nations, have subscribed to it, and Article 12 specifically recognizes the right to privacy. "No one shall be subjected to arbitrary interference with his privacy," the article begins. However, we heard this afternoon, I think Vint said it, privacy is hard to come by nowadays; (inaudible) is dead, get over it. There are (inaudible) tensions in this field.

But without further ado, I would like to introduce our panelists. To my right is the Internet Society's Lucy Lynch, director of trust and identity initiatives. She is our foremost expert on these issues. To my left we have Rebecca Wright, professor in the department of computer science at Rutgers University. Then to her left, Jonathan Cannon, director of information security policy and strategy at UPS information services. And to his left, Stephen Hughes, chief information security officer of Citibank North America. And to the right of Lucy we have Leonard Gordon, director of the Northeast Regional Office of the Federal Trade Commission. We thought of starting with a more theoretical approach, first with Lucy, then going to the regulatory framework, and at the end hearing from the practitioners where they see the challenges and where they would like the privacy discussion to head. Please, Lucy.

>> LUCY LYNCH: I'll be brief. I am director of trust and identity initiatives. That covers technically implemented trust at the network layer and above, and identity in its broadest context, meaning socially managed, user-managed identity, particularly on-line. Privacy has been a driving issue for us this last year, as it has been for many people.
And I was glad to hear somebody use the phrase "wicked problem" this morning, because privacy really is a wicked problem in the classic sense of the nine or ten elements of a wicked problem. Markus has mentioned that there is no standing definition of privacy. But we have been using a shorthand definition of on-line privacy derived from the OECD privacy guidelines that were recently revised. The definition that we are using for on-line privacy is: sharing data in an explicit context with an expectation of scope. That seems very, very brief. But it's really about a transaction of data, not a one-way flow. It's about an expectation, about a specific context; it's about sharing that data with some parameters around it and for a specific use, and it's about expectations for use. So that is where you get into things like consent, agreements, revocation. There are a lot of complex issues related to privacy that you can pack into that tiny short definition.

As Markus mentioned, I think privacy lives in a (inaudible) with security and with reliability. If you think about the network and the way the network works, reachability depends on identifiers. Vint did a great job distinguishing between what identifiers and identities are, but identifiers are really important in the current context of privacy, because data correlation is possible in part because people surrender both identifiers and personally identifying information when they are on-line. All of those problems get wrapped up in what I think of as the convenience problem, which is ease of use. Most of you know that users will surrender quite a bit for convenience. How many of you are iTunes users? How many of you read all 41 pages of the recent click-through license? (Chuckles). How many of you just clicked through?

Issues that I think are important: as I said, data correlation, because it brings identity and identifiers together; data protection, which would classically be thought of as a security issue, but if I'm interested in my privacy and my data rests somewhere else, I'm highly interested in the security of the facility where it rests; transparency of terms, this is your iTunes click-through issue; and consent and agreements, what you agree to on-line. There can be tacit agreement: you use a free service and you accept a lot of other things that come with "free." There can be active agreement, where terms are explicit and you accept the terms. And there is a growing movement around negotiated agreement, things like Project VRM, where people actually can say, I will offer these attributes or these pieces of my data in exchange for these escalated services.

I think it's important to remember, in the context of the conversation that we were having earlier today, that privacy is not the same as secrecy or anonymity or pseudonymity. Those can be tools of privacy, but privacy may require a high degree of proof: if you want to have a private transaction with your doctor, you want assurance on both ends that you are the patient and he is the doctor and the data is secure. I think wrapping all those items into one package and making it understandable to the end user results in the wicked problem we call privacy.

>> MARKUS KUMMER: Thank you for that. Straight over to you, Rebecca.

>> REBECCA WRIGHT: Okay. I have some slides, which maybe are up there. Maybe not. Yeah, I'm actually going to agree with and repeat some of the things Lucy said from a slightly different perspective.
Absolutely, I think one of the things that makes privacy such a hard problem is that we don't even know what we mean. We all mean different things, and each one of us doesn't know what we mean. It means different things depending on the culture, depending on the context, and it's much easier for people to see that their privacy has been violated in some way than to tell you in advance what would make them feel that that is the case. At its heart, it's about appropriate uses of data, but there are questions of what is appropriate, who gets to decide what is appropriate, and when different stakeholders in some information disagree, which one of them gets to be the one that decides.

I want to highlight, and I could have given a whole one-hour talk on this or more, that simple approaches to anonymization, the idea that you can take data and strip out the parts about people, don't work. They particularly don't work in today's world, where many different data sources can be brought together and correlated with each other. You can use the clues and what you do know to find out who someone is, and once you have that, you can use it to learn all kinds of other interesting things about them. So it's tempting to just say, strip off a few fields, obvious identifiers, but that unfortunately is not going to produce a good match between the claim of providing privacy and the actual expectations users have, and the feeling that those expectations have not been violated.

But nonetheless, there are good definitions for specific notions of privacy. I'm going to highlight a couple of them that come from my world, as a computer science researcher who works on fairly mathematical aspects of computer science. When systems and people meet, mathematical definitions are not straightforward. I can define something mathematically, but if it doesn't capture that feeling of what people mean when they say they care about privacy, it is not a useful definition. But if you can find a definition that does have a good match to some of what people mean when they talk about privacy, it can be useful for capturing those particular aspects and for driving solutions, because the nice thing about a mathematical definition is that, if you are clever enough in how you design and analyze your system, you can hope to say whether it is met or not, and to know whether you have done that.

One approach that I find very interesting uses cryptographic methods to hide information while allowing certain computations to happen. When you are dealing with data analysis, the usual paradigm is to bring multiple data sources together, take them all into one place, combine the data, do your analysis, and then you have your results, whatever kind of knowledge you were seeking. Instead, you can use something called secure multiparty computation, and I'm not going to give you the scientific references, but the term is enough that you should be able to find a rich body of literature on the web. This replaces the need to bring the data together with a secure distributed protocol, something that uses encryption but lets me send you data that you can manipulate in a certain way to combine with your data, so that we can still produce the same results that we want. This is not a panacea that is going to solve all privacy concerns, but it is useful if the concern at hand is about that step of combining data in a centralized location.
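A minimal sketch of the flavor of technique Rebecca describes is additive secret sharing, where parties jointly compute a sum while each keeps its own input hidden. The three-party setup, the sum query, and the modulus below are invented for illustration; real protocols support richer computations and stronger threat models.

```python
import secrets

P = 2**61 - 1  # public prime modulus; all arithmetic is done mod P

def share(value, n):
    """Split a private value into n random additive shares that sum to it mod P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

# Three parties, each holding a private input (say, a local count).
private_inputs = [42, 17, 99]
n = len(private_inputs)

# Each party splits its input and sends one share to every party.
all_shares = [share(v, n) for v in private_inputs]

# Party i adds up the shares it received; any single share is uniformly
# random on its own and reveals nothing about any individual input.
partials = [sum(all_shares[j][i] for j in range(n)) % P for i in range(n)]

# Publishing the partial sums reveals only the total, never the inputs.
print(sum(partials) % P)  # 158 == 42 + 17 + 99
```

The point is the one Rebecca makes: the analysis result is computed without the raw data ever sitting in one place.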
That lets you get the effect without actually combining the data. Another approach, differential privacy, has been put forth in the last five to ten years. It is very mathematical in flavor, but intuitively it provides a guarantee that says that when someone interacts with a database, they are going to get essentially the same results whether that database contains one particular person's data or not. Essentially what this is saying, and it is something that applies to every individual, is that any one individual can have a good sense of privacy because their data is not making a significant contribution to the output. This is not useful everywhere, but it is very useful in cases where the claim someone is making about privacy is that the output is not highly dependent on any individual. People often say, I'm only reporting aggregate results, so you don't have to worry about privacy. It turns out, because of data correlation, that that is not strictly speaking true. It may be a higher bar, but someone can potentially disaggregate by using what they do know and combining different data sources. Here you get strong protection for that one individual. If the kinds of results you want to produce are ones that don't depend on single individuals, like truly aggregate results or synthetic data, there is a body of results here that can give you good methodology for providing that.
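To make that guarantee concrete, here is a minimal sketch of the standard Laplace mechanism for a count query. The records, the query, and the epsilon value are invented for illustration; a real deployment would also track a privacy budget across repeated queries.

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponentials is Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon):
    """Epsilon-differentially private count. Adding or removing one
    person changes a count by at most 1 (the sensitivity), so Laplace
    noise with scale 1/epsilon masks any individual's presence."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical medical records; release how many have the condition.
records = [{"age": 34, "sick": True}, {"age": 51, "sick": False},
           {"age": 29, "sick": True}]
print(dp_count(records, lambda r: r["sick"], epsilon=0.5))
```

Each answer is close to the true count in expectation, but depends only slightly on any one record, which is the "same results with or without my data" property described above.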
Turning in a little different direction, one of the things that I and others have been looking at recently is the extent to which accountability can be used to enhance or replace some of the need for certain mechanisms, particularly around privacy. We hear about the need for systems that provide accountability, in the real-world, human context of how people use a system, and you can look at building that in inherently. Exact definitions are elusive, but you want to make sure that when people don't follow the rules, they suffer consequences. Ideally the fact that the consequences are there will act as a deterrent; if not, they will actually be applied. This does not inherently require, or rule out, identification and tracking at all times. It's most interesting when it does not require them: you can still hope to build systems that provide some kind of accountability, so that when someone does violate the norms or rules as stated, there is some mechanism for holding them accountable. This may be a useful paradigm shift from privacy approaches based solely on prevention, because prevention is difficult, especially when there is potential ambiguity around what the expectations and context are.

That is a bit of optimism; but pessimistically, why are these not deployed? There are a number of real and/or perceived barriers that currently prevent them from being widespread in use. They can be too complicated and difficult to use, from the perspective of the technology developers, of the technology maintainers and managers, and of the end users. All of those need to be addressed and worked with. There are concerns about efficiency, especially around secure multiparty computation, where you are adding sophisticated encryption on top of things. Readiness for deployment is an issue for all of these; more research, and especially the transition from research to development, is needed to get them deployed. Fundamentally, one of the underlying issues is a potential misalignment of incentives: the people who care about privacy in a given setting are not necessarily the same people who are making the decisions. Certainly, policies and laws can bring those incentives together, but that is not always the case.

Back to optimism, for moving towards the future. There is no single best balance between privacy, security, reliability and usability that applies to both the Internet infrastructure and Internet applications. I think the idea that there can be one balance is in some ways a fiction; it changes over time and across contexts. But thinking about all of these goals, and trying to achieve them together, is certainly important. There was a question in the charge to our panel about innovation that implied innovation might be hampered by privacy. I want to say that I don't think they should be viewed that way. Rather than limiting innovation, privacy and security should be seen as requiring more innovation; we should see this as a call to provide better and more innovative solutions. It's harder maybe in the infrastructure, but certainly among Internet applications you can use privacy or security as a differentiator. There is room out there in the marketplace for different kinds of solutions that provide different properties to different communities that wish to use them. Thinking about users and uses and usability is crucial; the users are not just end users but also those who have to manage the technology, and that certainly matters. Finally, I think critically that technology, policy and education need to work together; moving one front forward alone is not going to make a difference.

>> MARKUS KUMMER: Thank you for that. I would like to throw in that the word balance makes me a little nervous, because it implies a (inaudible); all these elements should be mutually reinforcing in an ideal world. Let's turn now to Leonard. It says in your bio that you supervise the investigation and litigation of both consumer protection and antitrust matters. You have a lot to tell us, I suppose.

>> LEONARD GORDON: Sure. Thank you for inviting me here today. This is an interesting and exciting event. I also have to disclaim that what I say today represents my views and not necessarily those of the commission or its commissioners, especially if we get interesting Q and A later on. Listening to the discussion about privacy, I was struck by a comment Justice Potter Stewart made trying to define what obscenity was; he threw up his hands in the opinion and said, you know it when you see it. That is what the discussion is like with privacy. It is very difficult to reach a societal consensus, and that makes what we do in law enforcement challenging. I'll touch briefly on some of the law enforcement efforts that the commission has undertaken recently, and also talk about the privacy report that the agency issued at the end of last year. Quick numbers: the agency (inaudible) 64 do-not-call cases, 83 Fair Credit Reporting Act cases, (inaudible) 15 spyware cases and 15 children's (inaudible). We have been active, but there is more to do. We had one against Twitter. This has nothing to do with Anthony Weiner, though you will see similarities. (Inaudible) privately disseminated.
Twitter promised it would use (inaudible) to protect these private tweets. (Inaudible) passwords, and tweets people thought were private became public. A little symmetry there with Congressman Weiner, but I'll stop there. This is typical of the action we take: when companies make promises to consumers about privacy (inaudible) deceptive practice, we try to remedy that.

We recently (inaudible) what looked like a great product: it allowed parents to monitor their children's on-line activity, block and filter Websites, record IM conversations, all kinds of wonderful things. I have two 11-year-olds and a 6-year-old; this is the kind of thing I'd be interested in, perhaps. EchoMetrix didn't disclose to parents that the information it was gathering through the filtering mechanism was going to be sold to third-party marketers. We thought that was wrong. The laughter that I hear throughout the room suggests that you agree.

The next company, not the banana people, but close, was an on-line advertising agency, and they sought to differentiate themselves by promising consumers they could opt out of being tracked through the Internet. We thought that was great. And if you followed their software, it might have worked, but it only worked for ten days. They didn't tell people that. We found that to be a deceptive practice.

Google Buzz: when Google launched Buzz, it gave Gmail users two options, check out Buzz, or no, I want to go to my in-box. We allege Google failed to disclose that clicking on "Sweet" meant all the information was going to get sucked out of their E-mail; it didn't adequately disclose that. It also didn't adequately disclose that even if you clicked no, certain information from your Gmail account might be accessed by Google and used to populate Buzz. Interestingly, we also had issues with their promises that they were treating information in accordance with the safe harbor privacy framework from Europe; we found that was not so. The consent decree there requires them not to gather information for one purpose and then use it for another. That is consistent with how the agency has looked at privacy in the past, but with a company like Google that is constantly evolving, that will be an interesting order provision for them to keep. They have to submit to monitoring and revamp their privacy program in a fairly significant way.

Let me segue to the privacy report, and to get there I have to explain how we got there. The commission, in thinking about privacy, has historically used two frameworks. One is notice and choice: the consumer should be notified about the kind of information that is being collected about them, and should have some choice. But as Lucy mentioned, that framework is illusory at best. Apple's user license is 60 pages; no one reads them, no one can understand them. They resemble mortgage closing documents, and no one reads the mortgage closing documents; we have seen where that ended us up in the financial crisis. Similarly, people do not read, they don't understand; consumers are not exercising meaningful choice, and they don't understand the kinds of data being collected about them. If notice and choice is the framework we are applying, and consumers don't have notice and don't have choice, that is not a good framework.

If you look at the cases the commission brought, the one that was the tipping point was the case we brought against Sears. Sears rolled out a program called (inaudible) experience, something like that. They paid consumers $10 to enroll in the program.
The program wasn't something to enhance their on-line shopping experience at Sears. It was spyware. It recorded almost everything a consumer did on their computer; it could access prescription information, banking information (inaudible). Sears didn't know what it was going to do with this data, but it seemed like a good idea at the time. If you read the license agreement, in the 70th paragraph, if you read it carefully and parsed the words, you would understand that that is what you were agreeing to for your ten bucks. But we felt that was not the way a typical consumer was going to react to this software while they were looking for hardware on the Sears Website and the offer was presented to them. There was never any allegation by us that the information was (inaudible); we got to this fairly early. But it was disclosed. There was no consumer harm, but we felt this was the wrong thing to do.

The other framework the commission has used over time is harm. There, typically, the harm has been financial: credit cards have been compromised from a retailer or financial institution, and there is a real risk that consumers are going to be out money, or inconvenienced, trying to deal with the consequences of identity theft. The notion is that that is too narrow a definition of harm; is there some kind of dignity interest, or ick factor if you will, in having your on-line behavior tracked and aggregated without understanding that that is happening? That was one of the motivating concerns for the privacy report.

The privacy report grew out of public round tables and forums. We issued the report in December and asked for comments. We got 450 or so; the agency is trying to work its way through those and figure out how the recommendations in the report should be altered. The report has three main components. The first is privacy by design: businesses need to incorporate privacy into their business processes. It's a flexible standard, depending on the kinds of data you collect and the kinds of information you have about consumers. But it needs to be part of your business process; it can't be an afterthought. It has to be engineered in from the beginning. Companies need to think about the data they collect and how they maintain it. Electronic storage is wonderful, and because storage keeps getting cheaper, it's easier to hold on to stuff than to get rid of it. So companies hold on to more data. They don't have a business use for it anymore, but it's sitting there, and an ingenious hacker will figure out a way to get to it. Second, more meaningful consumer choice about their data: where it's collected, what uses are made of it. Third, greater transparency: consumers should have a better sense of what is collected about them. Here, a theme throughout, there needs to be better education. Consumers need to understand better that part of the bargain for the free content of the Internet is that they give up certain information about themselves. They can decide whether they want that or not, but there needs to be more education about that. With that I'll stop for now.

>> MARKUS KUMMER: Thank you very much. Can we turn to the very left, to Stephen, to tell us about your banking experience?

>> STEPHEN HUGHES: Thank you. Being with a financial institution, one of the largest in the world, for us there are two threads here, quite a bit of which has been touched on already. One is the area of privacy rights. Because we are global, we have to worry about over 100 countries. And it's challenging enough in the U.S.
There is this constant escalation, not of the concept of privacy, I think, but of the definition: what constitutes a piece of data that is or is not private. A good example of this is that people in the U.S. are casual about sharing their E-mail addresses, but in some European jurisdictions you can't capture that in your system because it's considered a violation of privacy. In some countries you can't capture IP addresses, things like that. We struggle with that, partly because so much of it is interpretive.

Along with that is the fact that our customers are moving more and more to the on-line space. The convenience problem comes up, right? Besides the fiduciary responsibility, we are very focused on our reputation, and absolutely key to that is our customers' trust. So we don't want customers to think we are not doing everything we can to keep them safe, and then you have people who don't like the other measures that we put in place. A good example: if you typically log on from your home or your office, and then you happen to be on vacation in Mexico and we present some additional challenge questions, people don't like it.

Related to that, frankly, is the whole issue of what makes up your identity and how you protect it from the people who are trying to get your money. Because of our size and our visibility, we are constantly under attack; Citibank on-line, or egg.com, are right up there at the top of the sites being attacked. And we do a lot of things in the background to try to address the fact that even things people think are private are not necessarily so. We had a case a few months ago where we found that some fraudsters set themselves up as a collection agency so they could pull credit reports, because if they have your credit report, they have your mother's maiden name and that sort of thing.

We do other things that are about anomalous behavior, but then that comes right up against the do-not-track type of legislation, because you are trying to build something, some data, that tells you whether this is unusual behavior. For example, if I always go to the ATM in my neighborhood or certain places when I travel, and then one time they see me taking out $1,000 in a city I've never been to at 3 a.m., that is something that I want the system to alert people to. But again, this comes up against the growing trend on the regulatory side to protect people's privacy. We are definitely going in two different directions.

Another quote, from Mark Zuckerberg, that I felt was somewhat amusing: he said the default now is sharing. I certainly don't think that is universal, and certainly not about your financial transactions, and not just banking, right? eCommerce. Citi is in a partnership on the idea of the wallet on your phone; great idea, people love it. But I don't think anyone wants that information shared. I think that ultimately, what would make this all much easier for us is if your identity didn't have anything to do with your personally identifiable information, if we had some sort of secure portable identity. I leave that to the technical geniuses to come up with, because we certainly don't have it now. But the idea that I have to worry about who knows my social and who knows my mother's maiden name, it's primitive stuff when you think about it, and it's not difficult for someone who is dedicated to get.
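A toy sketch of the kind of anomaly rule Stephen describes, the 3 a.m. withdrawal in an unfamiliar city; the fields, thresholds, and two-signal rule below are invented for illustration and are not any bank's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Withdrawal:
    city: str
    amount: float
    hour: int  # 0-23, local time

def is_anomalous(tx, history, amount_factor=3.0):
    """Flag a withdrawal that breaks with the customer's own history."""
    familiar_cities = {t.city for t in history}
    largest_usual = max(t.amount for t in history)
    signals = []
    if tx.city not in familiar_cities:
        signals.append("unfamiliar city")
    if tx.amount > amount_factor * largest_usual:
        signals.append("unusually large amount")
    if tx.hour < 6:
        signals.append("odd hour")
    # Require two independent signals before alerting, so that ordinary
    # travel or a single large purchase does not trip a false alarm.
    return len(signals) >= 2, signals

history = [Withdrawal("Brooklyn", 100.0, 12), Withdrawal("Brooklyn", 60.0, 18)]
print(is_anomalous(Withdrawal("Phoenix", 1000.0, 3), history))
# (True, ['unfamiliar city', 'unusually large amount', 'odd hour'])
```

The tension Stephen points to is that even this toy rule requires retaining a history of where and when a customer transacts, exactly the kind of tracking that do-not-track rules aim to limit.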
If you have enough in your account to make you a good target, and believe me, they know who has large balances, they can tell through other factors. As Rebecca was saying, fraudsters are doing a lot of data aggregation now. They will get a little bit from here, a little bit from there, and they build up a profile, and that makes them want to attempt to take over your account, if you are a high net worth individual in particular. As I said with the example of the collection agency, they can get enough information to spoof your identity.

I just read an interesting article about Russia, where at the ATM now one of the banks is installing software that takes a voiceprint; not only does it recognize your voice, it can tell if you are lying. I don't think we could get away with that in this country. There are a lot of these innovative ideas, and ultimately we are going to need something. But right now we have these dueling factors: the public wanting to be safe, yet sharing information and not understanding how much risk it can put them at, combined with the regulatory side, not that I disagree with the regulatory drive, but it is extremely complex trying to deal with all of the regulatory bodies. I was just reading the NSTIC document. I think that is a great direction, but that's from this country. I don't see us getting to a really viable solution near-term that is going to protect people when they do eCommerce in every situation. Thank you.

>> MARKUS KUMMER: Next, Jonathan from UPS.

>> JONATHAN CANNON: Much like was brought up, we have a foot in both worlds. We have a foot in the Internet, cyber world, where for purposes of convenience and cost reduction it's become very easy to move goods around the world. You may be moving them within the United States, you may be moving them cross-border, it may be occurring in Europe. The balancing act is what I'll call it; I liken it to the seal trying to balance a ball, and it changes daily. Whether it's complying with a new state directive that's become more specific about data protection, or a new interpretation of an existing European privacy directive, ultimately those decisions, which take a while to play out, also take a while to manifest themselves from a practical standpoint in how all of us consume services.

The distinction of user versus consumer seemed to be a hot button today. I look at it as I have both. I have people who are direct customers: they are in fact people who tendered a package, and I have certain obligations, both contractual and legal, to service them. And then I have users: people who may be interested because I told you I'm sending you something, or hey, look for this, I'm giving you information which you are going to use to check the status of a package somewhere. So what obligations might I have when you come to my site, and I'm collecting certain information about you to, number one, give you the experience you expect, but, number two, potentially sell you additional services or let you know about particular offerings?

So I think the area of privacy is highly subjective, as we have all discussed today. The challenge going forward, in the context of what I want, is clear direction. In many of the other sessions that we have had today, there has been a more defined understanding of what it is we are talking about.
And as we opened up for this panel, there is no standing definition of privacy, and to Markus's point, what it means to a citizen in Europe and what it means to somebody in New York is going to be completely different. I take it a step further. The Mark Zuckerberg comment, the default is sharing: well, okay, generationally speaking, contextually speaking, that may be the truth. But unless I know something about you, I cannot determine any of that information. Right? You are just an anonymous person coming to a site to ask for information, to use a service, etcetera. Until there is something known about you, until you are passing some sort of authenticator, it is not even known what I may be able to present to you. Right? I think that is part of our challenge: how do I ask you a question that doesn't violate your privacy rights, but allows me to then service you in the way you actually expect? I think that from a technology perspective, from an implementation perspective, that is what developers are struggling with when they read business requirements for a new product or service offering: okay, but I'm supposed to deliver all this functionality without actually knowing anything about the customer. It's really a tough nut to crack. I think the practicality of a new model really has to embrace some amount of information that is coming to you from an authenticator. And as Vint said earlier, at some point during the transaction, when the sensitivity of the information or the use of the information is going to cross some predefined threshold, you gain the user's consent in order to conduct the rest of the transaction.

>> LEONARD GORDON: Could I follow up on that? One of the things mentioned in the privacy report is that for things like first-party marketing, order fulfillment, and transaction authentication, you shouldn't have to seek consumers' consent. Consumers need to be educated, or we need to come to a societal consensus, that by doing that, you are consenting to give your information; that is the only way those kinds of transactions can work in a safe and secure environment. What do-not-track is trying to get at is gathering information from your on-line web activity, aggregating that information, and selling it to advertisers who have a profile of you based on your on-line activity from various sites: not just when you are going to the UPS site to check your package, or your Citibank account to find out the balance on your credit card, but all your web browsing. You leave breadcrumbs, is the word they are using. These are aggregated, the analytics get more powerful, and those profiles are used by advertisers. One of our real concerns is that not only are advertisers ultimately going to have access to that information, but at some point, given the robustness of that data and the power of the analytical tools, others, whether they are bad guys or insurance companies or employers, anybody who wants to know something about you, will have it too. There is a real fear that the collection of that data will be available to others, and when you go to apply for health insurance, they figure out whether you have been looking at fatty recipes on the Internet. They do not (inaudible) aggregation of all the places you visit on the Internet, and the profile that is accumulated by all those interactions and used by advertisers to target you. In many ways it is not the targeting of ads to you that is of concern.
It is other uses of that profile that has been created of you that might be of concern.

>> To follow up on that, I think the other confusion out there regarding the privacy report, and where we get on divergent paths, is, I read it in somebody's blog the other day, the discussion of privacy versus security. Personal privacy means nobody knows anything about what you are doing except the other party that you have authorized; national security is full transparency. The world we live in today is pushing on that: how do we, from a commerce perspective or from a national protection perspective, know whether you are a bad guy, or whether your intentions are in fact not just for commercial transactions? I think that is where privacy is finding its first challenges, in the practical implementation of fraud detection, fraud prevention, screening, things of that nature.

>> LEONARD GORDON: No, I mean, there is clearly a tension between privacy and security. One of the hopes of the privacy report was to spur discussion about this; the commerce department report has done the same. As a society we need to discuss these issues. They are hard issues, not easy ones to resolve. First, the technology is moving at light speed; it changes constantly. It is not easy to have black and white rules for whether you have done what you are supposed to do, when what you need to do changes daily for an industry. Then there is the lack of consensus on what is private: 16-year-old girls have a different view of what is private than 45-year-old men, either way. It's such a personalized issue that trying to come to consensus is difficult. I certainly wouldn't want to suggest this is an easy issue.

>> LUCY LYNCH: I'll point out one more complicating factor in all of this, which is that as identity technologies advance, one of the things users are looking for, an experience that they want, is the ability to delegate: to authenticate once and then delegate to second and third-party sites. You cross a fine line when a user wants to share with Flickr, say, but they want a set of conditions around what is shared on Flickr, and when you cross that boundary from one first party to a second first party with delegated identity, the user isn't always clear that they have now made Flickr a first party. There is a very complicated dance going on behind the scenes to make your experience seamless and usable, but it creates a set of parties acting on your behalf that you may not know you are dancing with.

>> STEPHEN HUGHES: I think that is a very good point, because something we are struggling with in banking is the fact that, I read the other day, upwards of 40 percent of on-line transactions are actually performed by aggregators; for example, mint.com, things like that. And I think from a regulatory perspective, this isn't really clear. A banking customer goes to mint.com or any of the others -- there are quite a few -- and they create their account, and they then input their credentials so that these sites can then -- effectively, they can't transact; all the site is doing is screen scraping and bringing back their statement information. Nevertheless, I think there is confusion there, whether or not that aggregator is in effect acting as a first party. From my perspective, this is a challenge for some of the regulators that we talk with, because they say, that's the Citibank site. No, it's not. We have no contractual relationship with them. We don't give them any information.
Yet I understand that it is very confusing, particularly if one of those sites has a significant breach, because it's us, it's B of A, probably every on-line bank whose data is being scraped and brought onto those sites. It's a great convenience to consumers. I have two grown children; they both use them. I won't, but people do, and 40 percent of all on-line banking transactions is a huge number.

>> MARKUS KUMMER: Looking to the audience, we hope to have a session that is as interactive as possible. Are there questions coming from the audience now? Mic at the back of the room.

>> I submitted this on the back channel also, but I'm a big fan of the paper, The Tragedy of the Data Commons, in which she argues that claims about the inadequacy of reasonable anonymization are overblown: we can effectively anonymize data with, yes, some leaks, but it's all balance and trade-off. Overall, with reasonable anonymization standards, we can open up an incredibly valuable and necessary set of data for understanding social problems and so on. I don't know if you are familiar with the paper. If not, I highly recommend it. I think it's a well-balanced approach.

>> REBECCA WRIGHT: I think it's a thorny issue, and there are some really important words in what you said. I think the threat of deanonymization is real. All those cases where data has been deanonymized, it is because it wasn't really anonymized: something was done to it and people called it anonymization, but the things that make that data interesting are things about people, and there are ways, like the differential privacy I mentioned, that can perhaps take some of that out. But I think the policy question, the social question, the question to discuss, is when it is a good idea to say the value that can be unlocked by using this information is so high that we are going to take some risks. It's important to be clear that that is a decision and that there are risks, especially with things like health information, where there is such benefit to be gained, but also a lot of really personal value and potential harm. I think downplaying the idea that it is a risk is the wrong approach, especially because the more data becomes available in the future, the more possible deanonymization becomes. Even if I could account for all of today's external data sources and what they might allow to be linked to a particular person, that standard can only erode in the future. So I think it's really important to make explicit decisions, to talk about the benefits and risks, and then you still get back to all the same problems about who gets to decide, which values are more important, and how you balance those competing concerns. But I do think the risk of deanonymization is real.

>> MARKUS KUMMER: Other questions?

>> I'm reiterating a question from earlier that wasn't really addressed, about your thoughts on this so-called anonymous currency system that exists. Also, a curious question about a possible antithesis to Bitcoin: a completely transparent system that would be governed by users, somewhat on the same principle as Wikipedia, protecting each other's information through a distributed network, maybe for small transactions. What are your thoughts on Bitcoin as the anonymous currency, and then possibly a completely transparent currency?

>> MARKUS KUMMER: That was a question for the banking side.

>> STEPHEN HUGHES: We couldn't use it. But I do think it's a fascinating concept, especially for small transactions.
Certainly regulators will flip out at the idea of not being able to track the movement of currency. Obviously, it is another thorny issue. I don't have a strong professional opinion because, as Vint says, we can't touch it. But I think it is a very interesting concept. I don't think we would be allowed to use it. I could be wrong, but at this point we certainly wouldn't be allowed to.

>> Jay Sulzberger, corresponding secretary of LXNY, which is a little organization pushing free software in New York City. Your answer is absolutely fascinating. May I ask you, sir, which bank you work for?

>> STEPHEN HUGHES: Citibank.

>> I know nothing about Citibank. However, there is something that even gets into the newspapers, the financial pages, where no information collected about the banking industry and Wall Street ever gets printed; even they now admit, since the blowup and subsequent fraud, that there is something called a shadow banking system. There are huge anonymous transactions which directly affect many ordinary working people in the world. I don't think the objection of the official banks to Bitcoin quite lies along the axis of "we are not for secret transactions." Would you like to show me the ledger of your bank and which transactions it has performed?

>> STEPHEN HUGHES: I think you are confusing banking with trading. I have had no involvement with trading for at least six years. The shadow banking that you are talking about is in the trading areas. Me personally, no. Certainly, from what we have all lived through in the last few years, I think it's a concern to everyone. But I want to clarify that it's primarily a trading issue. Certainly, hedge funds, things like that, not to point the finger at anyone, but there are enormous amounts of money that move around without anyone looking at them. We don't have that luxury. Could be. I'm sure there is a lot of tax avoidance.

>> Could we, before we move on to the next question, say one more thing about Bitcoin? I think the most interesting thing about Bitcoin is not the anonymity factor. We have seen several representations now of Internet coinage, and they have all been captive: you can put money in, but you can't take it out. You can put money into Facebook and Zynga, but you can't take it out. You can put money into the hub and exchange it for services, but you can't take money back out. The interesting thing about Bitcoin, to me, is that it is monetized in two directions.

>> My name is Kaliya Hamlin. My handle is Identity Woman, for those of you wondering. I was going to ask Lucy to comment on how NSTIC relates to privacy, and if it does, because I like her definition the best. And we have somebody from NTIA here to talk about what they are doing. There's this privacy green paper from Commerce, and do-not-track, and then there's NSTIC, and I'm concerned that NSTIC is putting forward strong ideals about what these systems could be, while the market has no incentive to move in that direction, because everybody wants identifiers that are linkable, to track everybody so we can sell them more stuff without their awareness.

>> LUCY LYNCH: I'm happy to respond, but Vint also wants to jump in, and I would let him. Does everybody here know that NSTIC is the National Strategy for Trusted Identities in Cyberspace? So with that, do you want to go first?

>> VINTON CERF: Okay, so I want to be able to face people in this response. Earlier on, when I talked about distinguishing identifiers from identity, I was thinking of NSTIC.
My interpretation of the NSTIC proposition is that it creates a competitive environment for high quality authentication of identifiers, not identity but identifiers. It is unfortunate that the term identity showed up in the NSTIC name, but I talked at length with Howard Schmidt and others who worked on it, and it is my understanding and belief that they were trying to foster a competitive industry in the strong authentication of identifiers. I would not for a moment argue that we therefore have no identity issues, because we plainly can bind identifiers to identity. But I think that is something that we as users would like to have at least some control over. Now, the problem is that once you have established a strong identifier, others may accumulate information about your transactions that is associated with that identifier, about which you may not know anything. That, of course, argues for transparency. But I urge you not to condemn NSTIC in the belief that it's a national identity system or anything like that, because I don't believe that is the intent and I don't believe that is the substance.

>> LUCY LYNCH: I would agree that in its pure form, that's the high-level goal of NSTIC. I think the devil is in some of the details. There are a lot of things that have converged in NSTIC that I think are problematic. It's being billed as a private/public partnership to drive commodity identity solutions for end users, and I think it assumes a degree of maturity on the identity provision side that doesn't exist yet. There are very interesting technologies: the combination of the next generation of OpenID and OAuth is interesting, and existing SAML technologies are interesting. Microsoft had a user-centric technology called Information Card, which is now feature complete, at least for the short term. But none of these technologies yet converge in an architecture and system where you will be able to use the individual pieces as and when you, as a user, need them, in the context that I talked about earlier.

One of the things that Vint mentioned is what I think of as context drift for the user. I use what I think is a low value identifier because I want to post persistently to a blog, and it happens to be a nice handle. Then I begin to use it in slightly higher value transactions. I use it to comment on a government site; that's in the civic public space, and the government tracks that. I then use it for something more: I register with the National Institutes of Health and contribute to a database, and that is a very highly proofed identity. I as a user don't always have the context to know when I've slid up the context ladder in terms of tracking and proofing. I don't think this is a problem singular to NSTIC; anybody who is looking at providing the end user with a way to manage their identities across contexts in a simple way runs up against these problems. NSTIC is more likely to fail around trying to drive immature business cases on a short time frame than around any insidious plans for collecting all your data. I actually think the design problems are at a deeper level.

>> MARKUS KUMMER: Another question from the floor. We are beginning to (overlapping speakers).

>> Eric (inaudible) chapter member. I have questions about opportunities for class bias to creep into processes for verifying or authenticating an identity.
In other words, the old joke, the caption, "on the Internet, nobody knows you are a dog," implies a certain kind of neutrality. In cases where people are debating ideas, there is a meritocracy: it doesn't matter if you are rich or poor, or whether you have a PhD or you are a high school dropout. You sink or swim on the merits of what you have to say. I'm wondering, depending on what kind of information you need to establish an identity that is satisfactory for your purposes, what are the criteria for a satisfactory third-party verifier? Do you need the person whose identity you are trying to verify to have a bank account? Do you need him to have a diploma, a driver's license? In other words, what are the consequences for somebody who didn't have an opportunity to go to school, and (static) doesn't have a bank account, but somehow has a smart phone and is getting on-line? Is he on a level playing field with somebody who has all that stuff? (Pause)

>> I've been involved in privacy policy for a number of years. Sometimes it seems like we get really complex in how we look at these issues, and sometimes we ought to step back a little bit. I had a presentation made to me by the topper association on privacy. A key driver to shape policy in this space: whatever you do in getting data from the consumer, you shouldn't surprise them. That is a good metric, because if you understand that going in, gathering the data, it colors how you ask them for permission and what you are going to do with the data. It does that well. I wonder what your reaction is to that, because we oversimplify things (inaudible).

>> A question for one of the panelists, on a possible privacy model where the amount of information that is pulled from a user would actually depend on the amount they voluntarily share on the Internet, like a Klout score, where they can start to measure how much influence you have, and (inaudible).

>> LEONARD GORDON: The class bias issue is fascinating. We spend time in consumer protection dealing with scams that target people who are desperate. They are on the brink of foreclosure, falling victim to mortgage foreclosure rescue scams. They are hopelessly in debt and falling victim to debt reduction and credit repair schemes. Last week I was at a forum with the immigration service and law enforcement, dealing with immigration scams, where people who have immigration problems get scammed by people they think are lawyers or the government, because they don't have the tools; they are desperate. As more and more of the world is done through eCommerce and the Internet, lack of education and some of these things become an even bigger problem. A big push in the privacy report is that we need greater consumer education. There is a wonderful publication that the agency has put out, available for free, called Net Cetera; it is aimed at tweens essentially. It is a great tool for parents, teachers and community organizations to use with their kids. We teach hygiene in schools; given the amount of life that now happens on the Internet, we need to teach our children Internet hygiene so they know how to protect themselves out there. The no-surprise comment is exactly where the Federal Trade Commission is. We want there to be transparency, consumers to understand what data is being collected and what use is going to be made of it, and the consumer to have some control. The last question, I didn't have a....
>> LUCY LYNCH: Well, one of the things that is fascinating to me about the class question gets back to one of Vint's comments earlier about the continuing need for anonymity on the network. If I look at myself, and at the transactions that I do on-line every day, 75 percent of what I do is identifiable only by the system information that I give: it knows my browser, it knows my IP address, it has that identifying information, but I surrender no personal data at all. Another 15 percent or so is stuff where I want a persistent persona, so I comment on things or I log in to a site for a personalized experience or whatever. Very little of it requires a really high degree of proofing. I think in terms of the open Internet, when we talk about these privacy issues, we are really talking about things at the (inaudible): the need for anonymity because there is a political consequence, and the need for high proofing for something that has high transaction value. We need to remember that a lot of our experience is in the middle, and that middle experience doesn't need to be as subject to the kinds of discrimination you are talking about. On the no-surprise comments, I think that is exactly right. The one thing I would say is that past that surprise there should be a place for the consumer to change their mind. The ability to revoke or to change terms over time is, I think, a more and more interesting question, particularly as people delegate, then age out of uses and want to change the terms of how they interact.

>> REBECCA WRIGHT: So let's see. I think the class issue is a really important one. I see it as coming under this umbrella of remembering that privacy means different things to different people in different contexts, and understanding that their perception may be colored by their own experiences, abilities and exposure. I think that is really important to remember. As far as no surprises, that is useful, and mismatch of expectations as a definition of what constitutes a privacy violation works the same way. There is one danger there to remember, which is that one way to not surprise people or violate their expectations is to lower expectations, and have it be very clear all the time that you should expect nothing, no privacy. You need to make sure that that is not where the solutions get driven, any more than they are already being driven there by other factors.

>> JONATHAN CANNON: On the point of the class issue, I agree with what Lucy is saying. I think that ultimately, servicing somebody's transaction based on their being able to answer out-of-wallet type questions, those artifacts that we accumulate through our lives that provide some credibility to an identity, in some way has to be separated into what those artifacts really are, which is identifiers. Until you reach a point, transactionally speaking, where you have crossed some kind of financial or safety threshold, where you are retrieving previously disclosed personally identifiable, personal health, or financial information, that is where the line has to be drawn. I think that does turn this a little bit into the haves and have-nots at that point. On the no-surprise comment, I agree with Rebecca. I think setting the expectation right up front is the way to go, because what may not have surprised you today may surprise you later: as we accumulate more and more information, as technology gets better and becomes realtime, as facial recognition comes into play, all of a sudden you will be surprised what can be done with the same information a year from now, two years from now.
That is certainly a challenge that we all have to take into consideration as we interact with sites ourselves and with our user base. And to the last question, if I understood it correctly, what you are really asking is whether there is potential in using a reputational trust score to protect, or at least identify, me as an individual moving through the Internet. It has been tried; there are different models that have met with different levels of success. Ultimately, it gets us back into the federation game: who is in fact the authoritative source saying they trust you, and do you in fact trust them?

>> STEPHEN HUGHES: On the class issue, I think this is probably one of the biggest challenges that my industry has, certainly, because there is a huge part of the world that is what we refer to as unbanked. Good examples are in East Africa, where in a very short time there has been this enormous distribution of cell phones; it's staggering and impressive and transformative. I think our industries together are going to have to come up with a way to deal with the fact that these people cannot provide the sort of credentials that we in the West take for granted, the things you are talking about, government-provided credentials. We have to come up with a solution. I certainly don't know what it is, but there is a huge need, and as we know, if there is a huge need and it has a commercial impact, people are going to find a solution for it.

On no surprises, and the comment about this Net Cetera: I sit on a task force that is looking at account takeover, which is a big issue in all financial services. A big part of that is an effort to educate people, and it's not just young people. When people find out what I do, the questions that I get asked just flabbergast me; the average person is uninformed about the threat landscape that is out there. They don't think of it. It's frankly staggering. A lot of education needs to happen for people in general. We talked before about the convenience problem, and this is part of it. We push our business to put more information up on these Websites to make people more aware, and they are reluctant to do it, because a lot of people don't have a positive reaction when you tell them that if you do such and such, it's risky behavior and you may wind up with your identity being stolen. Identity theft has been, a couple of years in a row now, the largest reported consumer problem, and it's not diminishing. Those of us in the room are probably very aware of these issues, but I don't think the average consumer or user is.

I think the consumer versus user distinction is great, because we have the same sort of thing: you have a customer who is trying to make a payment to someone else, and the person on the other end didn't get their payment, that kind of thing. So it's an interesting segregation in how you handle them; they both have very valid needs for information, and yet you may not have any way of authenticating the user as opposed to your customers.

>> MARKUS KUMMER: Thank you. This is, I think, my main lesson from this panel: that much needs to be done to educate the user, the consumer. And the way to move forward, I think, is that business alone can't do it and regulators alone can't do it; we have to work together in a multistakeholder approach. (Inaudible) way forward.
May I ask you to join me in thanking the panelists. (Applause.)

>> I'd like to thank Markus again, and remind you, please, to fill out the evaluation forms; there is a box outside to collect them. It helps us do bigger, better, cooler things. Okay. (Pause.)