Abstract: Public Key Infrastructure (PKI) has now passed its first decade of activity on the Internet, but has yet to break out of pathetically small revenues. Reasons and factors contributing to the failure are many and varied. Is it that the PKI is a solution looking for a problem? That it doesn't solve the problems that it claims? Or that it is too expensive? Perhaps the military objectives failed to cross-over to the commercial sector?
This review attempts to list all of the issues that are unresolved, contentious, or questionable, at least as they are known to this author.
This paper serves as an attempt to broadly but briefly catalogue the list of serious issues that are unresolved with the concept of Public Key Infrastructure. The catalogue was started in 2000, and has grown as new issues and new references to those issues have come to light. Over time, it has expanded into a fuller survey of criticisms of PKI.
As a working draft, it may one day see light as a published document, but for the moment it remains a living document. Still to be added:
SPKI is another system of note, but I am unfamiliar with it. It attempts to resolve many of the flaws of PKI.
It is commonly assumed by PKI proponents that any criticism of PKI is simply a prelude to unwinding and removing it. This is not the case with this present document, nor necessarily in general.
Although there are many, including this author, who point to other methods as more efficacious in protecting user interests, there is a wider mission here: security.
In order to protect users, we deploy the tools that we have to hand. We do not build our tools from scratch because such an approach is simply too expensive. Hence, the need to appreciate and work with tools that are not ideal is the norm; and to pretend that these tools are ideal is simply unscientific. In an academic setting, pretence of ideality is unprofessional; in a security setting, it may well result in losses and thus be negligent.
In order to seriously deliver security to users, we have to seriously understand the weaknesses of every tool. We have to strengthen the weak areas and balance the costs with the strengths, not paper over the cracks and sell on strengths that are irrelevant.
PKI is no exception, and as we are now faced with a security situation of grave concern on the Internet (phishing, viruses, malware and identity data loss) the security community can no longer afford the luxury of blindly supporting a technology just because it sells.
For these motives, the ensuing discussion is overly aggressive, and deliberately so. By aggressively focussing on the weaknesses of PKI, I intend to surface those weaknesses so that we can all see them clearly. I leave it to others to defend and balance these weaknesses, and to present the strengths of PKI.
This present paper does not propose removing or unwinding any PKI or any aspect of PKI. What it attempts is to lay the groundwork for future work, hopefully on a more scientific and security-oriented basis. If the reader chooses to repair their PKI, so be it. Likewise, enhancement is a possibility, and many good suggestions have come out of the current debate on phishing.
It is far easier to repair and enhance a tool when the flaws are laid bare, and that is what this paper is about.
Where possible and known and acceptable the original author of the issues is listed. No serious research into these references has been done to date, and any errors remain those of this present author.
The following have by one means or another contributed to the work in this paper: Nicholas Bohm, Stefan Brands, Ian Brown, Roger Clarke, Don Davis, Carl Ellison, Dan Geer, Simson Garfinkel, Brian Gladman, Mark Granovetter, Philipp Gühring, Peter Gutmann, Frank Hecker, Robert Hettinga, Gervase Markham, Gary Howland, J. Marchesini, Eric Rescorla, Ron Rivest, Bruce Schneier, Adi Shamir, S.W. Smith, Nick Szabo, Anne & Lynn Wheeler, Bryce Wilcox, Peter Williams, Jane Kaufman Winn, M. Zhao, and some sources who preferred not to be named.
A commercial CA protects you from anyone whose money it refuses to take. Matt Blaze, RSA 2000 Conference hallway discussion.
The contracts underlying the business of certificate authorities are aligned towards selling certs, not towards the nominal purpose of providing something to users.
Further, the economic model for business adoption of PKIs is skewed to very high costs up front, for claimed benefits in the future. This advantages the seller of PKI technology, at the expense of high risks of project failure for the buyer. As few PKI projects have demonstrated return on investment (ROI), this risk remains serious.
In the retail side of PKI for secure browsing (SSL), Certificate Authorities happily re-certify the same key every year, yet over time the risk of key exposure results in decreased security. Further, they will happily sell a certificate for multiple years, at multiples of the one year price. This might indicate that all their costs are based on revocation management, or it might indicate that they are more oriented to generating revenue than delivering security.
If rollover of keys were a service on offer, then a short-term key rollover service would make some sense. Instead, relying parties are nominally supposed to update from certificate revocation lists (CRLs), yet this is rarely done.
Alternatively, if security were a priority, issuers could issue subordinate authority certificates to server operators, and these local root keys could sign new operational certificates on a regular basis. Yet, this would result in companies that have hundreds of servers only needing one certificate, ruining the revenue model.
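The alternative described above can be sketched in miniature. The following toy Python model is illustrative only, not real X.509: the key, the cert fields, and the function names are all invented for this sketch. It shows a company-held subordinate key issuing short-lived operational certs, so that frequent rollover never requires a fresh purchase from the external CA:

```python
import hmac, hashlib, json
from datetime import datetime, timedelta, timezone

# Hypothetical company-held subordinate signing key (toy stand-in for a
# subordinate authority certificate's private key).
SUB_KEY = b"subordinate-signing-key"

def issue_operational_cert(host: str, days: int = 30) -> dict:
    """The local subordinate authority signs a fresh short-lived cert."""
    now = datetime.now(timezone.utc)
    body = {"host": host,
            "not_before": now.isoformat(),
            "not_after": (now + timedelta(days=days)).isoformat()}
    sig = hmac.new(SUB_KEY, json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify(cert: dict, at: datetime) -> bool:
    """Check the signature and the validity window."""
    expect = hmac.new(SUB_KEY, json.dumps(cert["body"], sort_keys=True).encode(),
                      hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expect, cert["sig"]):
        return False
    body = cert["body"]
    return (datetime.fromisoformat(body["not_before"]) <= at
            <= datetime.fromisoformat(body["not_after"]))

cert = issue_operational_cert("www.example.com")
print(verify(cert, datetime.now(timezone.utc)))                       # True
print(verify(cert, datetime.now(timezone.utc) + timedelta(days=90)))  # False
```

In this arrangement the short validity window does the work that revocation lists are supposed to do: a compromised operational key simply ages out within days.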
In summary, it is difficult to reconcile the business practice of the Certificate Authorities with security needs. It is far easier to reconcile with commercial needs. Modelling the CA as a seller of certificates is compelling; modelling it as a security provider raises far too many questions.
In contrast to the above, much of the infrastructure of SSL and PKI comes from open source projects such as the OpenSSL and Mozilla projects. In these projects, volunteers do the bulk of the coding.
Although one might expect that this would result in work done to the public benefit, in an open fashion, the reverse is more true. Most of the work done on PKI is financed directly by companies that have a large vested interest in the process. Often, these are direct players in the PKI business, and in this way, the open source projects have their security areas 'captured' by these elements.
This shows up in security groups that claim to be open but reject 'foreign influences' and in mission statements that contradict security practices.
"You can't punish a key. What would you propose doing? Lop a bit off?" Steve Kent.
What are we relying on anyway?
Robert Hettinga makes the point that the only value in a certificate is the value that you get when it goes wrong; yet it is not clear that a certificate authority would or ever has paid out in a fraud, and even the CAs agree, when pressed, that the warranty programs are more emotional reassurance than substantial backing. Consider this session in which Gervase Markham grills goDaddy, a well-known CA:
Gerv: "So under what circumstances might you pay out?"
goDaddy: "Well... you are covered if it's through our negligence. So, for example, if the encryption failed for some reason."
Gerv: "The encryption failed?"
Gerv: "But if that happened, then everyone's encryption would fail, the entire Internet would be insecure, and you've got a massive world crisis. Are there any less apocalyptic scenarios where you might pay out?"
goDaddy: "Well, not really, no."
Gerv: "Have you ever paid out under the warranty program?"
goDaddy: "No. It's really there just to reassure you that it's a true 128-bit certificate, and to make you feel better about purchasing it."
The CA is correct, of course, and is obviously aware of Adi Shamir's 3rd law: cryptography is typically bypassed, not penetrated. From this we can suggest that the system is better modelled as having no warranty at all, as there is no intention to actually pay out and no plausible scenario where a payout is likely. Indeed, the amounts on offer in a certificate's guarantee are not high enough to make a difference to modern-day fraud figures, and are often inappropriate for reliance by users.
ArticSoft states it more forcefully:
Relying party liability. Get real. If we are stupid enough to put this one in front of our finance people they will shred us faster than you can say Sarbanes-Oxley. The only people we can rely upon is us, unless we have it in writing. So what we need is a system where we can switch on and off who we are willing to do business with whenever we choose. We certainly don't want to be left letting them tell us if they can do business, which is what the whole relying party/revocation approach is all about;
See below for the fallacy of the assumption that One CA Model Implies One Risk Model, and the discussion of Outsourcing Trust versus Outsourcing Risk in Section 3.
The United States General Accounting Office (GAO) provides some basic costs from Federal PKI experience:
As of October 1999, GSA made awards to three prime contractors to provide a range of services to any agency wishing to implement PKI technology. At the most basic level, the contractors can provide digital signature certificates to agencies without their having to develop their own PKIs. For each certificate, agencies will be charged an issuance fee - which varies depending on which ACES contractor is issuing the certificate and that currently could be as high as $18.00 - and a transaction fee ranging from 40c to $1.20 each time the certificate is used. Agencies will have to determine which applications are best suited to use ACES certificates. For example, GSA officials have stated that it would probably not be cost-effective to use ACES for less sensitive, high-volume applications, such as electronic mail.
A New Zealand government report also warns that:
"Based upon overseas and New Zealand experiences, it is obvious that a PKI implementation project must be approached with caution. Implementers should ensure their risk analysis truly shows PKI is the most appropriate security mechanism and wherever possible consider alternative methods."
PKI has an extraordinarily long history, as computer systems engineering goes. It traces its roots from the seminal article of Diffie and Hellman that introduced public key cryptography, and thence through the Bachelor's thesis of Loren Kohnfelder. PKI could be said to be the brainchild of those early writings.
Every engineering system of any great size is based on a set of core, foundational assumptions, and for PKI, these are found from the above theoretical writings to be:
Great projects are so complex that such pervasive assumptions are generally internalised by the vast majority of the people within them, which means that these incumbents are not reliable guides as to their veracity or efficacy. The wise architect knows this, and works with it by documenting, surfacing, and testing them. Clearly, PKI was pre-Internet, leading to the question: how did the assumptions survive the rise of the net?
Unfortunately, every one of them is shown to be not only flawed, but completely broken; in each case the opposite is more true, as we will show.
The assumption that the user needed to authenticate the source of her counter-party can be tested empirically. History has failed to confirm this assumption, consistently and persuasively.
Firstly, for normal user practice, inside and outside business, with and without requirements for reliance, indeed in practically all significant areas, unauthenticated traffic has been and continues to be the norm. Or, perhaps better said, the benefit of any authentication over the costs of implementation did not drive significant adoption, and companies preferred to use other methods than certificate-based authentication (banks insist on signed faxes, lawyers add disclaimers, etc).
Secondly, in spite of the claim of authentication being best provided by digital certificates, e-commerce grows and grows without it. The vast majority of protective and authentication methods are based around a password and a user-supplied name or a server-supplied number.
Thirdly, spam is clear evidence of a need for authenticated email. Yet the successful response to the invasion of the user's email box has been the development of filters, and efforts to use authentication have failed. It appears that authenticating oneself has little to do with the content of an email, and commercial spammers adapted to the notion of adding special features far more readily than did the consumers that were meant to be protected.
Fourthly, if there is anything that cries out for the requirement of authentication, it is phishing. Indeed, phishing seems to be a near-perfect match, both for authenticated email and for authenticated browsing. For the first time, an Internet-only fraud faced the user. For the first time, direct losses faced the user. And these losses were directly related to the user's ability to authenticate email and web sites.
Yet, again as with spam, and to the eternal shame of the supporters of PKI, the industry failed to marshal, prepare and promote any defences to stem user losses, let alone admit the existence of the threat. Indeed, it must be noted that the engineering in place is poorly equipped to deal with phishing.
Clear needs for authentication exist on paper, but none of them has driven significant adoption. We may still say that authentication could be useful, but the assumption that authentication must be provided is clearly wrong. Therefore, this assumption can only be met on a needs basis, and on a user-choice basis. It follows that it can be neither a mandated nor an exclusive solution.
The evidence mentioned above against the assumption that everyone must be authenticated also attacks the assumption of a centralised, independent and trusted party. If you do not need to authenticate, then you do not need the TTP either. Yet there are more reasons why I would not want the TTP, and these are addressed here.
The TTP is not the only possibility, nor the ideal party. Two-party authentication works well enough for most circumstances, and it works better than a TTP for most of them. Further, some will only trust two-party authentication. For example, a bank will only use its own methods to authenticate its customer, and will extend those methods to the net, rather than bring in a TTP such as a CA. See Patterns of Commerce.
The introduction of the trusted third party (TTP) is an a priori weakness, because he adds an additional party to the protocol (complexity) and an additional point of attack [Trent]. Indeed, as he has total power over the issuance of certificates, he is now a major source of weakness, placing great stress on the word 'trust'.
The expansion of complexity is not just in the numerical sense of from two parties to three. As well as being a source of technical weakness, the existence of the TTP requires sophisticated techniques of governance - standards, best practices, auditing - to be brought into the security model. There are very few observers or critics capable of isolating dependencies between the governance side and the technical side, and vice versa, and then closing the loop on those requirements. In more specific terms, one would not look to an audit firm for advice on cryptography, and one would not look to cryptographers for advice on auditing. The result is that the practical extent of the PKI system is strained beyond the plausible limits of comprehension.
These factors result in lowered security, and need to be balanced against any benefit in security gained by the presence of the TTP.
This alternate logic has been expressed in the term Centralised Vulnerability Party, or CVP. Instead of the marketing hope that you trust the TTP, you, the user, are forced to trust: you have no choice, and are vulnerable to this central, all-powerful party.
Within the PKI standards, there is an inbuilt and deep assumption that the root or trust anchor does not operate on itself. In practice this means that even though a root might be delivered as a self-signed certificate, that signature is not checked by the signature validations on a subsidiary cert.
From a logical point of view this makes sense, as a signature on one's own key has little merit in cryptography or security at a higher level. However, there are dangers here; the self-signature has many engineering ramifications. For example, in the PGP world, it was required to eliminate a potential attack. In the PKI world, self-signed certificates are indicated any time the infrastructure and software are to be used where trust should not be outsourced.
One of these engineering ramifications is the single point of failure. As the root cannot operate on itself, it cannot revoke itself. There is then one simple attack that cannot be dealt with: compromise the root. Rather than deal with this by simply permitting an engineering solution of revoking the root, and thus addressing the single point of failure, PKI takes the logical path and states that this is not possible.
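The asymmetry can be made concrete with a toy path-validation sketch. This is illustrative Python only, not the real algorithm of RFC 5280; all names, fields and certificates are invented. Every subordinate certificate is checked for revocation, but the root is accepted axiomatically, because no revocation list governs it:

```python
# Toy model of certificate path validation (leaf-first chain of dicts).
def validate_path(chain, trust_anchors, crl):
    """Each toy cert is {name, issuer, serial}; crl is a set of serials."""
    for i, cert in enumerate(chain):
        issuer = chain[i + 1] if i + 1 < len(chain) else None
        if issuer is None:
            # The root operates on itself: its self-signature is never
            # verified, and no CRL governs it -- it cannot revoke itself.
            return cert["name"] in trust_anchors
        if cert["serial"] in crl:
            return False            # a revoked subordinate is caught here
        if cert["issuer"] != issuer["name"]:
            return False
    return False

root = {"name": "Root CA",     "issuer": "Root CA", "serial": 1}
sub  = {"name": "Sub CA",      "issuer": "Root CA", "serial": 2}
leaf = {"name": "example.com", "issuer": "Sub CA",  "serial": 3}
anchors = {"Root CA"}

print(validate_path([leaf, sub, root], anchors, crl={2}))  # False: Sub CA revoked
print(validate_path([leaf, sub, root], anchors, crl={1}))  # True: "revoking" the root has no effect
```

The second call shows the single point of failure: a compromised root passes validation regardless, which is why the root must instead be protected at all costs.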
This has led, on the face of it, to a very strong claim that the root must be protected at all costs. Offline protection, secure hardware, trusted parties and the full weight of governance designs are found in the makeup of the Certificate Authority, reflecting the need to deal with the full ramifications of the single point of failure.
This raises costs significantly; stating that the root must never be compromised immediately creates a very expensive requirement, and feeds directly into barriers to entry (c.f., Porter), which practically guarantee that the market for certificates becomes a stagnant, protected market, even before the first player has got off the ground and become profitable enough to think defensively.
There is one more effect that is significant. A single point of failure has important ramifications in finance, military and government sectors. Large, slow sectors that face intensive external scrutiny do not in general accept single points of failure. In a sense, any sector that thinks about disaster recovery would be a poor fit for PKI. This unfortunately results in a clash of revenue models, because such sectors are often the only ones that can afford the highly expensive protections needed by the single point of failure.
PKI's supporters were inspired by a store and forward model of electronic mail. In this model the user connects to her single mail server, downloads all her incoming mail, sends out her (previously written) outgoing mail, and then disconnects. While reading the mail, she has problems authenticating the source of the mail, and must use an offline method unless she wishes to incur the expense of connecting again. Certificates seemed to be ideal for that purpose.
This core assumption of offline mail was promoted by various telecoms and postal committees examining the potential to offer store-and-forward electronic mail systems. In technological terms, this was UUCP; in business terms, a single agency would sell a gateway service to the user as her sole access to communications. Charging could be on the basis of the email, if only they could keep control at the packet level. Diffie and Hellman's description of a single directory to identify the world's people and services meshed nicely with that centralised world of monopoly postal and telephone providers.
The Internet broke that dream. The assumption of offline email disappeared with the success of the internetworking protocol family known as TCP/IP. These protocols assumed an always online mode, and explicitly sought to deprecate the store-and-forward model. The offline model was promoted throughout the 1980s and into the 1990s (UUCP, ACSnet, Janet), but the challenge was defeated by the Internet, especially when cheap high-bandwidth telco innovations such as DSL and cable finally arrived.
Further, although the world did migrate to an ISP model of a single gateway for the user, TCP/IP broke the charging paradigm by several innovations: (a) being an ISP was an open business, (b) once online, a user could connect to any other place for any purpose, and (c) the user's activities were obfuscated in byte-wise transmission modes. These bypasses of visibility and control killed any ability to discriminate in marketing terms.
The world went online, yet the certificate model was designed around an offline assumption. As above, recall that the business case was built on selling reliance (whatever that means), and real reliance is also realtime, or, in other words, online. The only thing that was offline about the PKI business was the engineering model, and it was totally at odds with the real requirements that PKI needed to meet.
The need to authenticate everyone created the need for Reliance and this created the need to define a limit to risks and liabilities. The obvious question then was how long could a user rely on a certificate, which led to attention on the expiry and the revocation of that certificate.
Expiry and key revocation are an unsolved, perhaps unsolvable, problem. For example, ArticSoft points to the difficulty of trusting an external party to revoke users internal to a business, and to be verified in doing so.
And in the technical domain, Ron Rivest pointed out the difficulties of staleness. If the certificate has an attribute of staleness, and a relying party really does need to rely on it, then it would need to be brought up to date before the cert could be considered safe. (Imagine here a long polemic on just how fresh a cert needs to be, which is an unanswerable question.)
Ron Rivest's closing remarks in a panel at FC99 were to the effect that, given all these problems, you may as well do online verification, and dispose of certificates altogether. Bohm, et al, seem to concur:
Their widespread use would depend ... on achieving practical solutions to many unsolved problems connected with expiry and revocation of certificates.
The problem, then, with Reliance is that it is easily solved, by simply breaking the offline assumption. Yet, if that is done, certificates themselves become optional. From an engineering viewpoint, this is refreshing, but from a business viewpoint, it is considered a deal breaker. So a compromise is made in engineering: be as reliable as online, but use offline technology.
The contradiction between the need for reliance and the engineered offline assumption is admitted in the Online Certificate Status Protocol (OCSP). The ability of OCSP to check the certificate's status, online and in realtime, breaks the assumption of offline authentication. In OCSP, the user contacts the server to check the certificate when she needs to rely upon it, so it can be done on an as-needed basis.
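The OCSP idea reduces, in caricature, to a realtime keyed lookup against the issuer's live records. This toy sketch is illustrative Python only: the responder table is invented, and the status strings merely echo the good / revoked / unknown statuses that the protocol (RFC 6960) defines:

```python
# Toy stand-in for the CA's live revocation records, keyed by
# (issuer name, certificate serial number).
responder_db = {("Example CA", 1001): "good",
                ("Example CA", 1002): "revoked"}

def ocsp_status(issuer: str, serial: int) -> str:
    """As-needed, realtime status check, replacing a stale offline CRL."""
    return responder_db.get((issuer, serial), "unknown")

print(ocsp_status("Example CA", 1001))  # good
print(ocsp_status("Example CA", 1002))  # revoked
print(ocsp_status("Example CA", 9999))  # unknown
```

Notably, once such an online query is made, the lookup key does most of the work; the certificate itself contributes little to the transaction.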
What remains unclear, then, is why one would continue with the use of a certificate at all. Kohnfelder's original claim was that the server would face performance issues, and the creation of the certificate was an explicit promotion of the offline assumption.
Once reliance pokes up its ugly head and insists on realtime checking, no certificate is required. A signed key without a name, or even a unique index number, is sufficient for many purposes.
Presumably it is possible to consider the certificate as a gate-keeping operation, a chargeable event to enter a user into the system, but the actual cryptographic effect of the certificate's signature from a certificate authority now lacks purpose.
Key validation - done properly - is too inefficient to work. Don Davis views the complexity of validation as a "compliance defect," whereby the rules for managing one's own keys and validating others' keys are so complex that they are unlikely to be met sufficiently. This criticism was borne out in the infamous Microsoft Internet Explorer bug where the full certificate chain was not being validated.
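The Internet Explorer bug can be caricatured in a few lines. In this toy sketch (illustrative Python; the dict certs and the is_ca flag are crude simplifications of X.509's basicConstraints extension), a validator that follows issuer names but never checks the CA flag will accept a chain in which an ordinary leaf certificate "issues" a forgery:

```python
def naive_validate(chain):
    """Checks only that each cert names the next cert as its issuer."""
    return all(chain[i]["issuer"] == chain[i + 1]["name"]
               for i in range(len(chain) - 1))

def strict_validate(chain):
    """Additionally requires every issuing cert to be flagged as a CA."""
    return naive_validate(chain) and all(c["is_ca"] for c in chain[1:])

root   = {"name": "Root CA", "is_ca": True}
victim = {"name": "bank.example", "issuer": "Root CA", "is_ca": False}
# The attacker uses a legitimately issued *leaf* cert to "sign" a forgery:
forged = {"name": "login.bank.example", "issuer": "bank.example", "is_ca": False}

bad_chain = [forged, victim, root]
print(naive_validate(bad_chain))   # True  -- the forgery is accepted
print(strict_validate(bad_chain))  # False -- the forgery is rejected
```

The compliance defect is visible even here: the difference between the broken and the correct validator is one easily forgotten check, buried among many.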
Ian Grigg suggests that SSL was designed to use PKI based on the wrong threat model.
The "Internet Threat Model" as described by Eric Rescorla is one of the wire being unsafe and the end-nodes being safe. Grigg sees this as the reverse of the reality of the Internet, with minuscule or non-existent reports of threats and losses on the wire, and massive threats and losses on the end-nodes (e.g., phishing, trojans, insider attacks and compromised servers a la Choicepoint).
The "Internet Threat Model" may trace back to military traditions where aggressive radio operations of listening and interfering are routine, thus suggesting the preeminence of the MITM threat. More formally, Peter Williams suggests the model derives from three influences:
Voydock and Kent's influential 1983 paper on secure protocols:
The model assumes that both ends of the association terminate in secure areas, but that the remainder of the association may be subject to physical attack. A terminal that forms one end of the association may, at different times, be used by various individuals at different authorization levels. The hosts on which the communicating protocol entities reside may provide services to a diverse user community, not all of whose members employ communication security measures. An intruder is represented by a computer under hostile control, situated in the communication path between the ends of the association. Thus all PDUs transmitted on the association must pass through the intruder. The association model is depicted in Figure 3.
SP4, a classified standard from the NSA.
NCSC Red Book, Part II's per-layer-threat analysis.
Perhaps disagreeing with this, Clark and Wilson trace the "Historical Threat Model" back to the US intelligence community where disclosure of confidential material was of preeminent importance, and more specifically the security of nuclear arming modules.
Whatever the historical pedigree of the model, Grigg concludes that the choice of the MITM as the primary threat to guard against on the Internet lacks any foundation. The lack of validity in the threat model is becoming more and more of an issue as browser manufacturers refuse to compromise on their unfounded model of protocol MITM protection, in the face of rising spoofing attacks from phishing, in itself a variant of MITM.
PKI, in the general forms that have been touted, has normally been based on X.509. Unfortunately, X.509 turns out to be more of a glue-and-string approach than a real solution. There is good reason for this: PKI was invented before it was needed (Diffie and Hellman, 1976, and Kohnfelder, 1978). As it was invented before the Internet, and as no demanding application set its requirements, the design of PKI drifted until picked up by telecoms and OSI committees.
This section is a bottom-up analysis that looks at how the X.509 structure came to be used. See also Ian Grigg's top-down analysis, looking at the client's needs, elsewhere. The two analyses are in accord.
On X.509's capability to support the notions of a PKI, Peter Gutmann states:
Anything can be a PKI (PGP keys hand-carried on floppies are a PKI), what X.509 lacks is a match to any pressing real-world problem. For example when I make an online purchase, I don't need a PKI, I need an online credit/debit authorisation mechanism. PKI just gets in the way (the closest anyone ever got was SET, which wasn't a PKI but an online CC system dressed up to look like a PKI - I'm sure Anne/Lynn Wheeler would have much more to say about this).
Gutmann also states that:
.... [x.509 was] originally designed solely for use for user authentication to the worldwide X.500 directory (something which is very obvious in the structure of an X.509v1 cert), a problem that never eventuated. It is quite literally a solution in search of a problem. The difficulty in applying it to any pressing real-world problem arises directly from its X.500 origins.
Peter Gutmann goes on to outline how it can't be changed:
Unfortunately any attempts to fix this by switching to practical, widely-used technology (e.g. dump X.500 DNs as identifiers, use online whitelist checking instead of offline blacklists, move them around using HTTP instead of X.500/LDAP, etc etc) so you can actually do something useful with the things, is met with extraordinary resistance by the people writing the standards. As the quote on my home page says: "[PKI designs are based on] digital ancestor- worship of closed-door-generated standards going back to at least the mid 80's. [...] The result seems to be protocols so convoluted and obtuse that vendor implementation is difficult/impossible and costly".
Efforts at reengineering PKI have not succeeded in gaining traction. SPKI is one such effort, and the reasons for it having failed to oust the incumbent are worth studying. Another effort is the OpenPGP web of trust model which likewise has achieved some localised successes (notably in email) but has not done more than irritate the PKI school.
Documenting use cases is an unnecessary distraction from doing actual work. You'll note that our charter does not say "enumerate applications that want to use TLS". anon, written on the TLS group.
In many commerce spaces, PKI does not mirror actual commerce patterns, so cannot help. That is, if you take any existing commerce pattern and model it (for example, by drawing out a graph of the interrelationships), it looks completely different to the PKI model.
In fact, it is relatively rare to find any pattern of commerce that maps easily to the PKI model. This practically means that there is little chance of it being used, as to switch from one pattern to another is an expensive exercise, and is only done over time, and for great savings in costs or increases in benefits.
Some observers have commented on the apparent nexus with military needs, and the similarity with military models of control. Yet even there the comparison is only superficial; although the military works to a theoretical hierarchical control model, in actuality modern armies strive to push decision making as far down as possible. Specifically, there are many use cases where commands are overridden at a local level, something that could not be emulated in PKI.
This section presents a top-down analysis, looking at the client's needs. See also Peter Gutmann's bottom-up analysis, which looks at how the X.509 structure came to be used. The two analyses are in accord.
Trust is one particular aspect of the patterns of commerce that stands out as being totally at odds with PKI. Trust is something that business simply does not outsource. PKI's notion of outsourcing trust places an inescapable conflict in front of the managers.
This observation came from the finance markets. There, one would think, trust derives from one's presence within a regulated system, based on central banks or securities regulators as the ultimate authorities.
Not so! The authorities are only trusted, and transitively trusted, at the level of lip-service. In practice, trust in financial markets derives from a network of personal contacts.
This can be seen when a new player arrives on the scene. She can present all her creds or quals, but will still not enter in, until she has established her personal trustworthiness with the first other player. Then, that other player can authenticate her trustworthiness to others.
In general, there is no single point of trust in the finance markets. Trust is distributed, and is not outsourced. Indeed, it is embedded at the level of individuals, more than it is held at corporate levels.
It is relatively easy to then map this arrangement to other businesses, and see that the general rule applies: trust is not outsourced. The particular trust networks of the finance markets do not so easily map, and trust is often abrogated from the individual level to the corporate level in other businesses.
Applied to PKI, we can see that there is no real business case for an external certificate authority; if a PKI is needed at all, an internal certificate authority is almost a given. Whether divisions could share a CA across internal borders depends on how well the internal trust and other commerce patterns map within the company.
What then happens when two companies wish to use each other's certs? They will simply exchange root certs, and for the most part still authenticate relationships along local trust lines, not along PKI lines.
What then happens when trust is "outsourced" and a PKI is used to intermediate this trust? In practice, what has happened is a shifting of the burden, where the user has simply replaced her trust in the second party with trust in the TTP. Szabo writes that:
Trust, like taste, is a subjective judgment. Making such judgments requires mental effort. A third party with a good reputation, and that is actually trustworthy, can save its customers from having to do so much research or bear other costs associated with making these judgments. However, entities that claim to be trusted but end up not being trustworthy impose costs not only of a direct nature, when they breach the trust, but increase the general cost of trying to choose between trustworthy and treacherous trusted third parties.
Are the costs of choosing between good and bad TTPs commensurate with the benefits of not having to choose among second parties? That is an open question which Szabo does not address. However, it is important to point out that even if it were addressed, for many PKIs such as the public Internet ones, the option simply isn't there. See the sections below on One or Many CAs?
Risk, which Lynn Wheeler describes as the inverse of trust, is indeed outsourced and on a massive scale. The entire derivatives, securitization and insurance markets are simply outsourcing of risks. Yet to outsource risk, the relying party has to be able to measure the risk, measure the cost of that risk, and make an economic decision to buy coverage.
Quite the reverse of outsourcing trust! The SPKI theory RFC describes it thus:
5.7.2 Rivest's Reversal of the CRL Logic Ron Rivest has written a paper [R98] suggesting that the whole validity condition model is flawed because it assumes that the issuer (or some entity to which it delegates this responsibility) decides the conditions under which a certificate is valid. That traditional model is consistent with a military key management model, in which there is some central authority responsible for key release and for determining key validity.
However, in the commercial space, it is the verifier and not the issuer who is taking a risk by accepting a certificate. It should therefore be the verifier who decides what level of assurance he needs before accepting a credential. That verifier needs information from the issuer, and the more recent that information the better, but at the end of the day, the decision remains with the verifier.
Lynn Wheeler suggests the following [W05.6]:
One of the issues in the CRL push model is that it is the relying party which is judging the risk (sort of the inverse of trust), and they alone know the basis of their dynamic risk parameters. One issue that they are aware of is that as the value of the transaction goes up, the risk goes up. Another is that the longer the time interval, the bigger the risk.
The problem then was that since it is the relying party that is taking the risk, and understands their own situation, it should be they that decide the parameters of their risk operation. I.e., as the value of the transaction goes up they may want to reduce risk in other ways, which might include things like trust time windows.
In normal traditional business scenario, the relying party is the one deciding how often they might contact 3rd party trust agencies (i.e. credit bureaus).
PKI/certificate operations have frequently totally inverted standard business trust processes. Instead of the relying party being able to make contractual agreements and make business decisions supporting their risk & trust decisions, the key owner has the contractual agreement with any 3rd party trust operation (i.e., the key owner buys a certificate from the CA).
Rather than being a bug in a validity model to be rectified, this flaw is one of systemic proportions.
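Wheeler's point, that the relying party should set validity windows as a function of transaction value, can be sketched in a few lines. This is a hypothetical sketch, with invented function names and thresholds, not any deployed API:

```python
from datetime import timedelta

# Hypothetical sketch (names and thresholds invented): the relying
# party, not the issuer, chooses how fresh the certificate-status
# information must be, shrinking the tolerable staleness window as
# the value of the transaction rises.
def max_staleness(transaction_value):
    """Longest tolerable age of revocation/status information."""
    if transaction_value < 100:
        return timedelta(days=7)    # low value: a weekly CRL will do
    if transaction_value < 10_000:
        return timedelta(hours=1)   # mid value: recent status wanted
    return timedelta(0)             # high value: demand a live check

def verifier_accepts(transaction_value, status_age):
    """The verifier's own decision, under the verifier's own risk model."""
    return status_age <= max_staleness(transaction_value)
```

The design point is who owns the thresholds: here the verifier does, whereas in the CRL push model the issuer fixes one window for every relying party.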
As described above, in classical business, a party or user conducts the needed due diligence on the other parties. In the PKI view, the audit of a CA generally terminates at the point of showing that a CA does what the Certificate Practice Statement ("CPS") says. That is, the audit cannot say much about the fitness for a given purpose or user, because it does not know in advance what that purpose or user might be. Hence, the PKI process of necessity requires the user to read the CPS and judge for herself, and in this sense is aligned with the general business process of caveat emptor.
Consider Prof. Winn's reading of the VeriSign CPS from 1998:
The VeriSign CPS defines the procedures VeriSign will follow before issuing a Digital ID. Individual Digital IDs are currently offered in three classes.(212) Class 1 Digital IDs "are issued to individuals only," and are issued after VeriSign determines that there are no existing entries in VeriSign's database of subscribers with the same name and e-mail address.(213) The CPS notes that these certificates are not suitable for commercial use where proof of identity is required.(214) Class 2 Digital IDs are "currently issued to individuals only" after VeriSign checks not only its own database of subscribers, but also performs an automated check of the applicant's information against "well-recognized consumer databases."(215) The CPS emphasizes that Class 2 certificates, "[A]lthough . . . an advanced automated method of authenticating a certificate applicant's signature," are issued without requiring the applicant's personal appearance before a trusted party such as a notary; therefore, relying parties should take this into account before accepting a Class 2 certificate as identification of the subscriber.(216)
(My emphasis added.) Yet if we look at popular implementations of PKI, we often find the very reverse of this clear requirement to be familiar with the policies of each CA. Consider the policies of popular distributors such as browser manufacturers: the inclusion of a CA has (at least historically) been justified solely on the basis of the audit of the CA, even though the audit process would generally stress that the user herself should not accept an audit on face value. Not only do manufacturers add the CA's root keys without a policy of reading and accepting the CPS themselves, they offer no access to the CPS to users, and even go to some significant extent to hide the CA's name and brand (offering various motives for this such as screen real estate costs and user confusion).
Browser manufacturers especially are potentially exposed to litigation for their negligence, if there are any SSL-based value breaches such as may arise from phishing. Audit practices and CPSs expressly limit their efficacy by stating that relying parties need to judge for themselves whether the results are suitable; browsers actively seek to remove that information, judgement and choice from users, and do not themselves take on the role in any serious sense.
As a further quirk or twist of fate in user protection, the CAs are generally complicit in this reversal of PKI practices and general business. Their practices, and their statements of same, are deliberately crafted to transfer risk to the user, while their marketing promises to reduce risk. CPSs are written to be unintelligible, as well as so restrictive of benefits to the relying party as to be practically useless. Professor Winn continues:
As a risk allocation system, the VeriSign CPS is moving in the opposite direction of most other electronic commerce systems, and resembles the system established by credit card issuers prior to federal consumer protection regulations.(226) No significant pooling of risks exists for consumer subscribers because, although insurance is now offered, the insurance mimics the standard of care the subscriber is required to maintain by the CPS and, thus, is unlikely to offer any relief to a consumer who cannot prove how a copy of his or her private key was obtained. The CPS allocates fraud or error losses to the consumer who is likely to be much less sophisticated than VeriSign, and is completely incapable of deploying the kind of technology used by credit card companies to reduce fraud.(227) The problem of information asymmetry is acute in consumer dealings with VeriSign because no plain language disclosure of the risk allocation system exists outside the CPS, which is over 100 pages of single spaced text written in dense legal prose.
Browser manufacturers might argue that their policies are justified on this basis, but this then exposes them to anti-trust considerations - why are they requiring audits of CAs if they know the CPS to be worthless or neutered practice statements?
The above assumes that commerce is the context, as does the PKI industry. It is pretty much accepted that the purpose of PKI is ecommerce, and issues like privacy or trust are not addressed except where they help commerce, or, more cynically, where they relate to sales of certificates and PKIs.
PKI has it backwards. Commerce is simply an example of interaction, and the patterns of behaviour applied to commerce are taken from general patterns of behaviour. Specifically, where trust is distributed, it is delivered transitively, from person to person. That is, Alice gives Bob some information on Carol that allows Bob to trust Carol, to an extent derived from his trust in Alice.
This relates to Mark Granovetter's theory on the strength of weak ties. The theory predicts that many weak ties are more efficacious than few strong ones. Casting this across the assumptions in PKI, we can see that PKI attempts to create strong ties from CAs to users, and the emphasis on CAs being totally trusted necessarily results in there being few of them. In contrast, competing approaches such as OpenPGP's web of trust lend themselves more to many weak ties, where each tie is often a signing interaction between casual strangers. Granovetter's theory thus predicts that the web of trust is stronger than PKI.
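The transitive trust pattern, Alice's recommendation letting Bob trust Carol, can be given a toy numerical form. In this sketch (the scoring scheme is invented for illustration), trust attenuates along a path, and several independent weak paths can be combined:

```python
# Hypothetical sketch (scoring scheme invented): transitive trust
# attenuates along a path, so Bob's derived trust in Carol is his
# trust in Alice scaled by Alice's trust in Carol.
def derived_trust(trust_in_introducer, introducers_trust_in_subject):
    return trust_in_introducer * introducers_trust_in_subject

def combine(path_trusts):
    """Combine independent paths: trust fails only if every path fails."""
    distrust = 1.0
    for p in path_trusts:
        distrust *= 1.0 - p
    return 1.0 - distrust
```

Under this toy model, five weak ties of 0.3 combine to about 0.83, exceeding a single stronger tie of 0.6 -- a small illustration of Granovetter's prediction that many weak ties can outperform few strong ones.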
``You have asked me if I knew the name of the assassin. I do. The mere knowing of his name is a small thing, however, compared with the power of laying our hands upon him.'' Sir Arthur Conan Doyle, ``A Study in Scarlet'', Holmes speaking .
One assumption that continues to confound questions of value is that all problems are solved if you can establish the identity of the counterparty. This is not, and has never been the case for most applications.
For payments, especially, identity is irrelevant; what is instead required is a statement concerning the value presented. That is, what is the colour of your counterparty's money? This attention to value, not identity, is a core result of the pseudonymous designs of SOX and x9.59 [W9.59]. Yet PKI as embodied in x.509 insists on identity, in the form of a telephone directory-like name, as the core claim for issuance of certificates. Carl Ellison identifies the genesis of the name problem:
...Diffie and Hellman postulated that the key management problem is solved, given public key technology, by the publication of a modified telephone directory, which they called the Public File. Instead of a name, address and phone number, the Public File would contain a name, address and public key. ... As a demonstration of the power of public key cryptography, this was a brilliant example. The problem is that there are people who took this example literally and set about creating such a directory... Diffie and Hellman solved the previously difficult key management problem by use of names, but did not offer any solution to the even more difficult name management problem.
Ellison then traced the history through Kohnfelder and to X.500, showing how each successive implementation shifted the burden of the name problem to some unknown place. The lack of acceptance of the name problem as a difficult and probably intractable problem in computer science leads many applications and users down blind alleys. Attempts to map the notion of identity to servers, browsers, accounts, and protocol end-points are frequent and depressing sources of failure.
PKI's experiences with Identity may have been a cautionary tale, but not cautionary enough for some: Microsoft's Passport and .NET systems grasped at Identity in a big way. Liberty Alliance set up a counter-proposal, to block yet another Microsoft plan to conquer the world, but they also went heavily into the Identity domain. Now, as Stefan Brands reports, these systems are failing to draw support. Why? Nobody has the ability to sit down and design a complete Identity system and expect people to accept it. The application drives how Identity is done, not the other way around, and the application was not identified in these proposals.
See also The One True Name below, a discussion on whether x.509 meets the requirement of mapping Identities, when it is determined that this is required.
Scratch any PKI supporter and they will go to great lengths to support the claim that the Identity expressed in a certificate is of the utmost importance. But this is yet another marketing myth. Lynn Wheeler identifies that when early certificate issuers came to investigate seriously what it meant to issue certificates with Identity, they discovered more than just the obvious implementation issues .
During the early 1990s, it was realised that the authoritative agencies were not certifying one true identity, nor were they issuing certificates representing such one true identity. This was in part because there were some liability issues if somebody depended on the information and it turned out to be wrong.
Original design discussions in the early 90s by and of independent 3rd party trust organizations centered around claims that they would check with the official bodies as to the validity of the information. They would then certify, so the model suggested, that they had done that checking, and issue a public key certificate indicating that they had done such checking. Even at that point, the independent 3rd party trust organizations were not actually certifying the validity of the information; they were more simply certifying that they had checked with somebody else regarding the validity of the information.
The original business model of these independent 3rd party trust organizations was that they wanted to make money off of certifying that they had checked with the real organizations as to the validity of the information, and the way they were going to make this money was by selling public key digital certificates indicating that they had done such checking.
The issue that then came up was what sort of information would be of value to relying parties, and consequently should be checked and included in a digital certificate as having been checked. It started to appear that the more personal information that was included, the more value it would be to relying parties. Not just one's name, but address, age, marital status, and many other characteristics such as ancestry were mooted. Indeed, the very type of detail that relying parties might get if they did a real-time check with a credit agency.
Another of the characteristics of the public key side of these digital certificates was that they could be freely distributed and published all over the world. But by the mid-90s, institutions were starting to realize that such public key digital certificates, if freely published and distributed all over the world with enormous amounts of personal information, represented significant privacy and liability issues . It was also considered that if there were such enormous amounts of personal information, the certificate was no longer being used for just authenticating the person, but was, in fact, identifying the person (which is another way of viewing the significant privacy and liability issues).
In response to this uncertainty, some institutions started issuing relying-party-only certificates which contained just a public key and some sort of database or account lookup value, which latter directed to where all the real information of interest to the institution was kept. The public key technology, in the form of digital signature verification, would be used to authenticate the entity identified in the certificate, and the account lookup would establish association with all the necessary real-time information of interest to the institution. This had the beneficial side-effect of reverting public key operations to purely authentication operations, as opposed to straying into the horrible privacy and liability issues related to constantly identifying the entity.

However, it then became trivial to prove that relying-party-only certificates are redundant and superfluous. With all the real-time information of interest for the institution on file (including the public key) and the entity digitally signing some sort of transaction which already included the database/account lookup value, there was no useful additional information represented by the relying-party-only certificate that the relying party did not already have! By definition, the public key was registered with the relying party as prelude to issuing any digital certificate, but if the public key had to already be registered, then the issuing of the digital certificate became redundant and superfluous. QED.
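The redundancy argument can be reduced to a few lines of code. In this sketch, the account names and keys are invented, and a keyed hash stands in for a real public key signature scheme; the point is the data flow, not the primitive. The relying party verifies a transaction entirely from its own records, so a relying-party-only certificate would add nothing it does not already hold:

```python
import hashlib

# Toy sketch of the redundancy argument.  The "signature" is a keyed
# hash standing in for a real public key signature; the point is that
# verification needs nothing beyond what is already on file.
def toy_sign(key, message):
    return hashlib.sha256(key + message).hexdigest()

# The relying party already holds the key on file, indexed by account.
ACCOUNTS = {"acct-42": b"alice-key"}

def verify_transaction(account_id, message, signature):
    # Everything needed is already on file: no certificate is consulted.
    key = ACCOUNTS[account_id]
    return toy_sign(key, message) == signature
```

The transaction already carries the account lookup value, so the verifier's own database supplies the key, and the certificate drops out of the protocol entirely.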
This would be useful if there were some utility that had survived the above aggressive process. It seems that the end result of all such forces is that the certificate is to deliver the name of the individual concerned, and little more.
If one looks at the role of certificates -- some sense of being able to rely on them for some important purpose -- there is an expectation of being able to seek remedy if things go wrong. However, this expectation is suspect, and indeed becomes less sound the more distance is involved. Especially, as each jurisdictional boundary is crossed, remedies become more expensive and less likely. Not only legal remedies, but also social remedies become less efficacious.
As society moves more onto the net, the need for long-distance remedies to issues and disputes grows. However, the Name of an Individual is constrained within the network of jurisdictional limitations; it is therefore positively correlated with the weaknesses of the old-world system, and the utility of a name becomes weaker as distances increase.
Hence, although certificates promise some form of long-distance remedy, they do not deliver on this promise. Just as names are more useful in local commerce, the names in certificates will only help in local disputes, precisely the place they are not needed.
The now legendary Zooko's Triangle asserts that:
"... you cannot have a namespace which is all three of: 1. decentralized (which is the same as saying that the namespace spans trust boundaries), 2. secure in the sense that an attacker cannot cause name lookups to return incorrect values that violate some universal policy of name ownership, and 3. using human-memorizable keys."
Bryce 'Zooko' Wilcox asserts that a namespace can only be two of the above three (you pick) and challenges the reader to prove him wrong. However, we do still have the option of combining two namespaces in order to gain all three legs of the triangle. The PKI may or may not achieve two legs, but it may be getting confused in trying to do all three in one namespace (see next section).
Stefan Brands explains further in his Identity Primer (part IV) and claims that more and more systems of identifiers that are originally secure within a single context are now being pushed out to cross-domain contexts . He says:
"In sum, the automation of communication and transaction systems is causing increasing numbers of relying parties to rely on the same certified electronic user identifiers. Users lose all the security and privacy benefits of self-generated user identifiers, while relying parties are increasingly vulnerable to attacks that originate from other relying parties and from certification authorities."
"A commercial PKI protects you from anyone whose money it refuses to take." Matt Blaze .
Nowhere do the contradictions in PKI surface so clearly as in the Certificate Authority business. Unfortunately, that clarity comes at a cost: knowledge. Most think a CA ensures that the certificates are correct, whatever that means, and look no further. To look further is not easy, because CAs have made it their business not to be scrutable.
The single certificate authority ("CA") might be claimed to be a "solved problem" by some, but any workable system requires more than one CA, for different purposes (risks), places and peoples.
Many CAs implies the need for an additional network of trust to join the CAs together. This is not considered a solved problem by any observers :
"Their widespread use would depend on a complex hierarchical infrastructure of mutual recognition of different certificate issuers' certificates...
Communications-Electronics Security Group ("CESG"), an agency of the government of the United Kingdom, has it differently :
"Each subscriber is registered with a particular TTP. A specific structure to the interconnectivity of the TTPs is not assumed: they may form, for example, a hierarchy, set of hierarchies or a flat structure."
Why can we simply leave out any assumption of interconnection?
CESG's answer is that each subscriber is registered with a particular TTP, therefore there is a clear written promise made to the subscriber and it is as clearly limited to the agreement between those two parties. The other TTPs simply do not enter into the equation.
One issue clearly outlined in the Mozilla project to re-evaluate their community of CAs is the lack of diversity. Currently, all CAs are cast in the original mould of identify-and-pay. Yet much of the world lacks easy means of identification, and the price of many of the offerings is out of reach of, for example, the developing world. Much of the Internet's application space gains no benefit from such an approach.
Indeed, this inability to handle multiple CAs is highlighted as a security bug in the x.509/SSL/browsing PKI. In that PKI, all certs from any CA are treated as equivalent. According to x.509/SSL/PKI dogma, the user of any application is not expected to enter into decisions based on who signed the cert; all are as good as each other. This is a fatal security flaw in SSL's PKI: it permits unprotected MITMs within the boundary of SSL and the core security implementation.
Drawing from CESG, the PKI for secure browsing institutionalised two flaws directly warned against in the PKI literature :
Users are not subscribers and do not enter into agreement with any TTP, yet promises are made to them;
The x.509 public PKI assumes that all CAs are equal, where the only strength to this claim is a loose one: that manufacturers only distribute roots which accord with this assumption.
Such assumptions of equality and absence of agreement fly in the face of any extant practice and experience. (It is quite hard to think of an analogue of such blind faith in the real world.) When any security decision is made, the brand of the salient parties is always present, and its presence is a key part of the user decision. If the brand isn't present, it raises the question of whether the brand, and thus the identity of the signer, was important or not.
Consider the contractual view. A party that purchases a certificate from his CA enters into an agreement with that CA. We can assume that this implies that the CA will not supply a certificate to any other party in his name, thus giving the certificate purchaser a monopoly on his name within the namespace of the CA.
Yet that is the limit of the contractual protections provided here - that the CA will not issue another cert in that name. This structure does not, for example, limit another CA from issuing a certificate in this name, as neither the hapless purchaser nor the CA have any contractual obligations with any other CA. (Nor does it prevent our party going to another CA of his own accord and purchasing another certificate!)
In more technical terms, this bug exposes such PKIs to a large class of substitute certificate attacks. A cert acquired from one CA can be used to conduct an MITM attack on the victim, with neither the victim nor the copied CA being aware of the attack . In current day browsing, the S/MIME system and code signing, there is no defence against this attack and indeed it is hidden and denied as an attack.
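The substitute-certificate attack falls directly out of "all CAs are equal" validation. In this hypothetical sketch (the CA names and the flat (name, issuer) model are invented for illustration), validation accepts any issuer found in the root list, regardless of which CA the victim actually chose:

```python
# Hypothetical sketch of the "all CAs are equal" flaw: a certificate is
# modelled as a (name, issuer) pair, and validation accepts any issuer
# in the root list, regardless of which CA the site actually used.
ROOTS = {"CA-Alpha", "CA-Beta"}  # both shipped in the browser's root list

def browser_validates(cert_name, cert_issuer, expected_name):
    # No check that the issuer is the one the site has a contract with.
    return cert_name == expected_name and cert_issuer in ROOTS
```

A site holding its cert from CA-Alpha gains no protection from that relationship: a second cert for the same name, issued by CA-Beta, validates identically, which is the substitute-certificate MITM in miniature.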
The only defence proffered for this is the control of the root list, which amounts to shifting the burden to some hand-waving process for making sure all CAs are good; a defence that is as useless as it is revealing in its ignorance of the real world, not to mention agency and contract theory. It is somewhat unclear why this state of affairs exists, but the SSL community seems strongly in support of the principle of "all CAs are equal."
Consider the recent efforts by the Mozilla Foundation, manufacturer and publisher of the newly popular browser, Firefox, to address and revise the root list. Their policy has succeeded in exposing fault lines in "all CAs are equal" simply by asking how one tells. By initial programmer consensus, the view included at least the notion of a WebTrust audit. But this is clearly inadequate to all purposes, as such an audit is very expensive. It is not available across a wide range of countries and other circumstances, and thus by definition it is too expensive for some purposes and too cheap for others.
Further, WebTrust itself may be little more than additional smoke and mirrors. Its stated mission is to verify that the CA is doing what it says it is doing. This is obviously circular, and does not support the widespread public belief that a WebTrust audit makes a CA in some sense "trustworthy." By way of example, an ISP could create its own CA for the purpose of MITMing all of its clients, state that in its terms and conditions, and get a WebTrust audit to confirm it!
These revelations have raised the notion that a WebTrust audit may be not only insufficient for induction into the root list, but also potentially optional. This presents every browser manufacturer with a quandary of reliance: if the WebTrust audit cannot be relied upon to ensure the fitness of the CA according to the browser's metrics, then the browser must take back that responsibility.
An emerging criticism of the PKI model as applied to the world of secure browsing is that it assumes a single risk model. In classical purchasing, a consumer analyses the strengths and weaknesses of the good, and buys according to the fit to her own needs and risks. That is, she must have the strengths and weaknesses available to her in order to compare to her needs. She has her risk model, and it is different for every consumer, albeit with commonalities.
Secure browsing stresses one security model and thus one consumer risk model. It attempts to broaden those models across the entire environment. This means secure browsing assumes one consumer, one purpose, and one risk model. Clearly this won't work for all purposes, and the attempt to make it work for one purpose as well as many purposes has resulted in a lowest common denominator effect.
A core fallacy is that the designers of the SSL secure browsing system assumed they knew what the risk model of the user was . If this criticism holds, it would limit the applicability of PKI to corporate or departmental purposes, where a conscious agent acts on behalf of all users. That is, the company or department chooses the security good on behalf of all users, and accepts the rough edges.
In secure browsing, this assumption hangs on the statement that browsing is secured in order to protect credit cards. As long as that statement has merit, then an assumption of one risk model has some standing. Unfortunately, the assumption is itself challenged . Online banking is one area where that which worked for credit cards is shown not to work for another purpose; phishing primarily targets online banks.
In email systems such as S/MIME, and in emerging instant messaging systems, there is not even a claimed commonality of purpose on which to base an assumption of a shared risk model. That is, there is no analogue to credit card protection in these systems. Thus, the one risk model of PKI is not appropriate for person-to-person communications systems.
The implementation of any PKI system on a wide scale is subject to the capture or influence by special interests. Commercial interests that place revenue in advance of a workable model were discussed above.
Attempts have been made and have at times succeeded in perverting a system to Government policy interests. Such goals include citizen's identity cards, key escrow, and centralised control of user data such as health data. Britain especially has been morally lax in attempting to employ her citizens' Internet as a method to track and trace them :
"And the suspicion inevitably remains that [British] Government's continuing enthusiasm for these castles in the air derives mainly from its hope that from among them may emerge (free from cost or blame to Government) a citizen's identity card. Convenient for Government as such a development would be (because Government typically needs to assign a unique identifier to each citizen to avoid multiple claims for social security benefits or tax reliefs, for example), Government's wish to portray the solution to its own problems as being promoted for the benefit of electronic commerce as a whole continues to be profoundly counter-productive."
If the agenda includes using PKI, the CAs and certificates as means of projecting a control policy, the following signs may be evident:
Legislation that makes it a crime to 'interfere' with a signing/encryption device (even your own);
The device is not 'owned' by the user, but rather by the supplier;
Pressure to implement 'qualified certificates' in exchange for favourable legal presumptions, an inducement of dubious value when accompanied by mandatory compliance regimes, the burden of potential personal criminality, and constant policy surveillance;
Reserving the market for Certificate Authorities and digital signatures to a restricted and/or controlled sector such as banks or notaries;
Providing two security environments: a weak one for one group and a strong one for another group, or distinct regimes for encryption and for signatures.
One clear case of capture by special interests is the so-called digital signature laws that were widely pushed in the mid-1990s. Such early legislation pushed the role of the TTP, and of the state as regulator of same. This was a dramatic shift: before, the primary roles of states and TTPs were limited to dispute resolution. Legislation was well in advance of any useful practice, and was thus a candidate for disaster.
In the civil law tradition, the notary has been empowered to oversee important contractual meetings. A notary makes a contract legal and enforceable, in short, and the PKI notion of the digital signature on a contract has run headlong into this barrier.
Notaries in civil law countries have practices as valuable and as entrenched as law practices in common law countries. Indeed, in some places the practice is treated as a franchise, handed down from father to son (the country of Andorra recently passed legislation to open up its duopoly to competition). The result of this clash may have been lobbying to neuter the digital signature technology by means of aggressive and restrictive laws and standards.
This situation does not arise in common law, as contracts there are formed between the two parties, and only in rarer circumstances are third parties expected to participate.
One influence that is less widely understood is the influence of the military agenda on the PKI.
There are two potential effects. First, the military remains a large, technically sophisticated and well-funded player. If, hypothetically, the US military were to invest in a major worldwide PKI, the size of that installation would affect the remaining sectors. In this sense, agenda alignment would be a serious force.
Secondly, and perhaps an example of this agenda pull, Grigg suggests that the choice of the MITM as the threat of choice was motivated by the military's more acute awareness of that threat . In the 1980s and early 1990s, the Internet did not strongly motivate cryptography; indeed, cryptography remained suppressed due to the US government's fears that its use on the net was more a weapon of offence against it than a defence for securing cyberspace.
In that environment, interests in cryptography were strongly influenced by the preponderance of textbooks, experience and contracts deriving from the military. By way of example, the threat model and the hierarchical structure bear at least a superficial resemblance :
That traditional model is consistent with a military key management model, in which there is some central authority responsible for key release and for determining key validity.
This section is speculative; it should be considered to be an invitation or signpost for future developments.
Szabo points out that:
Large numbers of articulate professionals make their living using the skills necessary in TTP organizations. For example, the legions of auditors and lawyers who create and operate traditional control structures and legal protections. They naturally favor security models that assume they must step in and implement the real security. In new areas like e-commerce they favor new business models based on TTPs (e.g. Application Service Providers) rather than taking the time to learn new practices that may threaten their old skills .
Carl Ellison and Bruce Schneier make the point that all PKI discussions seem to reduce to an attempt to anthropomorphise the machine: "it is your machine signing for another machine"  .
But we have no useful model whereby we can relate humans to the hardware. Thus, the anthropomorphism is not realisable in practice.
Marchesini, Smith and Zhao explore the distance from humans to keys further :
In theory, PKI can provide a flexible and strong way to authenticate users in distributed information systems. In practice, much is being invested in realizing this vision via client-side SSL and various client keystores. However, whether this works depends on whether what the machines do with the private keys matches what the humans think they do: whether a server operator can conclude from an SSL request authenticated with a user's private key that the user was aware of and approved that request. Exploring this vision, we demonstrate via a series of experiments that this assumption does not hold with standard desktop tools, even if the browser user does all the right things. A fundamental rethinking of the trust, usage, and storage model might result in more effective tools for achieving the PKI vision.
One of the great fallacies in cryptography and ecommerce was that the signature so named in cryptography had any relationship to the signature in law and human dealings. In practice, the only thing in common was the name [W05.07]:
Another characteristic of the PKI x.509 identity certificate activity .... was trying to cause a great deal of confusion about the difference between digital signatures and human signatures. Some of this possibly was semantic confusion because both of the terms - digital signature and human signature - contain the word signature.
In law, a signature is no more than a mark indicating intent by a human. A contract is formed through a whole host of other issues, far too detailed and broad to list here, but it is perhaps sufficient to summarise that while strong contracts can happily be formed without human signatures, they do in general require intent.
Digital signatures, on the other hand, are created by keys. The mere fact that a user might have many keys challenges the notion of the contract-forming digital signature. What does it mean when a person signs with a different key? What does it mean when two things of different meaning and context are signed with the same key?
PKI does not say. Nor do many of the technical offerings that have been fielded make it clear. This leads to the difficult situation that one cannot plausibly state the difference between a digital signature made on random data in a key exchange - a technical element of a protocol - and a digital signature made on a document of contractually important text.
This scenario is not unknown but certainly unaddressed. S/MIME, for example, uses digital signatures and presents them as tokens of user identification. Yet they are also a key part of the technical protocol, as the digital signature is used to show that the message has not been interfered with.
OpenPGP to some extent has addressed this by its move to different keys for signing and for encryption.
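The indistinguishability described above can be sketched in a few lines. The following is a toy RSA signature over deliberately small primes, purely illustrative and insecure; all names and parameters are hypothetical, invented for this sketch. The same key signs a meaningless key-exchange nonce and a contractually important sentence, and nothing in either signature records intent:

```python
import hashlib
import os

# Toy RSA key over small fixed primes -- illustrative only, not secure.
p, q = 1000003, 1000033            # real keys use primes hundreds of digits long
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+ modular inverse)

def sign(message: bytes) -> int:
    """Sign the SHA-256 digest of `message`, reduced mod n for this toy key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, sig: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h

# The same key signs a protocol nonce and a contract document.
nonce = os.urandom(16)                          # random key-exchange challenge
contract = b"I agree to pay 100 units to Bob."  # contractually important text

sig_nonce = sign(nonce)
sig_contract = sign(contract)

# Both verify identically; the signature itself carries no notion of intent.
assert verify(nonce, sig_nonce)
assert verify(contract, sig_contract)
```

Both signatures are just integers produced by the same operation; any distinction between "technical protocol element" and "mark of contractual intent" must come from outside the cryptography.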
Certificates were meant to state what a signature meant. In a critical assessment of the meaning of the signature, Lynn Wheeler points out that tying the meaning of the signature to the certificate is fraught with difficulties unless there is a way to limit any given signature to a given certificate :
Supposedly the "non-repudiation" bit was capable of turning any digital signature operation (regardless of the environment in which the signature had been performed) magically into a human signature carrying the attributes of read, understood, agree, approve, and/or authorized.
If a merchant could demonstrate any valid digital certificate with the "non-repudiation bit" turned on (for the customer's public key), then the burden of proof in any dispute would have shifted from the merchant to the consumer.
[But] the PKI-oriented protocols provided no mechanism for proving which certificate had been originally attached to the transaction.
In practice, a given key can be placed into any number of certificates. Again in practice, certificates are commonly public records. Further, as most protocols discover that tying the certificate of the moment to the records of evidence results in impractical data bloat, there is often nothing in the protocol that identifies the certificate. This leads to the possibility of a party to a transaction presenting one certificate and then substituting another if there is any meaning in the certificate.
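The substitution possibility, and the obvious repair, can be sketched as follows. The data structures here are hypothetical stand-ins, not any real protocol's format: two "certificates" carry the same public key, a transaction that records only the key cannot distinguish them, and including a digest of the exact certificate in the transaction record makes later substitution detectable.

```python
import hashlib
import json

# Hypothetical certificates: the same key, two different meanings.
pubkey = "MFkwEwYHKo..."  # placeholder key material; contents do not matter here
cert_a = {"subject": "Alice, consumer", "key": pubkey, "nr_bit": False}
cert_b = {"subject": "Alice, merchant", "key": pubkey, "nr_bit": True}

def cert_digest(cert: dict) -> str:
    """Canonical SHA-256 digest identifying one certificate exactly."""
    return hashlib.sha256(json.dumps(cert, sort_keys=True).encode()).hexdigest()

# A transaction that records only the key cannot say which certificate
# was presented -- either can be substituted after the fact.
tx_loose = {"amount": 100, "signer_key": pubkey}

# Repair: record the digest of the exact certificate used, so a later
# substitution of cert_b for cert_a no longer goes unnoticed.
tx_bound = {"amount": 100, "signer_key": pubkey, "cert": cert_digest(cert_a)}

assert tx_loose["signer_key"] == cert_a["key"] == cert_b["key"]  # ambiguous
assert tx_bound["cert"] == cert_digest(cert_a)   # matches the cert presented
assert tx_bound["cert"] != cert_digest(cert_b)   # substitution is detectable
```

The cost of this repair is exactly the "data bloat" the text mentions: every evidentiary record must carry, or at least commit to, the certificate of the moment.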
It is a hard problem to tie meaning to a digital signature. One of the few successful efforts is the Ricardian Contract .
Grigg and Howland set themselves the goal of making a contract in digital form that could stand up for itself in court. That is, the contract had to be easily interpretable by judge, jury and counsel, all of whom had to be assumed to be non-computer-literate. In this protocol, a full textual and readable document of significant length presents the contract on offer. That document includes sufficient information to identify the party offering the contract. The certificates to be used are included, in obvious sections, albeit in armoured form, and have their roles (such as [contract]) encoded in their certificate Id. The signature itself is appended in what is termed cleartext signing mode.
The Ricardian Contract is notable for its size and the extreme lengths that the designers went to in order to make the signature and contract worthwhile; it has achieved its set goals with two appearances in courts. Perversely, Grigg suggests that the signature delivered by the public key signing technology does not in effect achieve any more than window dressing; by far the greatest effect in both technical terms and contract or evidentiary terms comes from the simple message digest that links that document to the transactions formed under that document.
Although cumbersome and possibly only suitable within the Ricardian model of few widely fungible contracts, such steps are essential if confidence in the result is to be maintained. They appear to be missing in other protocols.
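The digest linkage that Grigg credits with the greatest effect can be sketched simply. The field names and document text below are hypothetical; the real Ricardian format is a structured, human-readable text document, but the principle is just that every transaction carries the hash of the contract it was formed under:

```python
import hashlib

# A readable contract document (abbreviated, hypothetical text).
contract_text = b"""Issuer: Example Reserve
Unit: EUR
Terms: This instrument is redeemable as described herein..."""

# The contract's identifier is simply the message digest of its full text.
contract_id = hashlib.sha256(contract_text).hexdigest()

def make_transaction(payer: str, payee: str, amount: int) -> dict:
    # The contract hash binds the transaction to one exact document text.
    return {"contract": contract_id, "payer": payer,
            "payee": payee, "amount": amount}

tx = make_transaction("alice", "bob", 100)

# Anyone holding the readable document can recompute the hash and confirm
# the linkage; any edit to the text, however small, breaks it.
assert tx["contract"] == hashlib.sha256(contract_text).hexdigest()
assert tx["contract"] != hashlib.sha256(contract_text + b" ").hexdigest()
```

Note that no public key operation appears here at all: the evidentiary link from transaction to contract rests on the digest alone, which is consistent with Grigg's suggestion that the signature itself adds little beyond window dressing.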
Given the strength of these difficulties, we must conclude that wherever digital signatures are employed, they probably carry only a vague sense of meaning that speaks to contracts. Thus, a priori, fielded systems employing digital signatures are likely unreliable for legal purposes, at least until and unless these issues are addressed. Digital signatures may then be best limited to uses within technical protocols (checksums that cannot be interfered with, key identification) and should only cautiously be considered as having ramifications for contracts and value.
Once we have accepted that Identity is a requirement (see above, Identity is not the Application ), the difficulties of implementation remain. The concept of PKI relies on cert ==> Name ==> Individual which is flaw-ridden at each step :
"All-purpose digital name certificates are of very doubtful utility, among other reasons because names do not adequately distinguish people in large populations."
Not only are they fraught in large populations, they are not popular in small populations :
The original model of an ID certificate was one that would bind me to my entry in the X.500 directory, by way of the DN that both identified me and uniquely specified my entry in the directory. The assumption was that one needed only one such entry (or perhaps two: one at work and one at home). By contrast, each of us has multiple identities both at home and at work. I, for example, have five different but equally valid IDs at work.
Why is this? The view from financial cryptography indicates that it is the asset that is of importance, not the name :
"[Names] are also irrelevant to many transactions (what the merchant needs to know is that a card number is given by the person authorised to give it, whatever their name may be), where they needlessly reduce legitimate privacy."
One other factor to bear in mind is that the field is confused by silver bullets. Digital signatures are touted as a solution to many things, and because there is no field experience to test the veracity of these claims, the story gets bigger each time it is told.
Here are some of the areas where digital signatures are "sure to make the difference:"
Wherein, I justify the aggressive title: PKI considered harmful. To be written...
Wherein, I summarise the chief claims as disputed.
In order to justify the title of this review, I must identify the harms and lay claim to them as costly; theory alone will be insufficient as critical theory will merely beget its companion anti-critical theory. I do not however go further than identification of the harms; it is left as an open topic for research to document and cost these harms.
The harm wrought by PKI is threefold.
In the first instance, PKI simply costs money and effort to put in place. This is the least of the damages, as there is only the loss to the implementor (opportunity costs, the economists would say), and indeed for this very reason PKI escaped the rancour of serious researchers for a long time. We live in a capitalist society, which preaches that all are permitted to spend their own money as they see fit; there is no global guardian that designates the one true way to be secure. In such a society, if a company chooses to spend its hard-earnt revenues on schlock, that is its choice, and we may do no more than tut-tut in mildness.
It is in the second instance, its failure to deliver on the promise, that changes the above situation. Failure to deliver results in losses from fraud. Phishing is the broadest and best documented attack in recent times, but hacking, viruses and the like have all been impacted by PKI in a positive or negative fashion (e.g., the complexity of PKI-based systems has resulted in weaknesses as measured against other systems that might have been employed more economically).
These losses have often been incredibly difficult to deal with, and herein lies the third harm: as PKI fills the security spot in the user public's collective mind, there has been inordinate cost in discussing and fixing the security woes . This arises primarily out of an unwillingness of security practitioners to face the enemy within; no professional wants to admit to themselves, let alone the world, that the last decade's work was fatally flawed. No professional wants to be told that his or her work participated in the emergence of billion-dollar losses.
We face a large and persistent loss of time and energy in each contemporary discussion of security. Each practitioner has to walk through the tortured path of PKI flaws until they reach the inevitable conclusion. Each will resist at every step of the way, reflecting their training and faith in their mentors and peers, and each will need to be tutored through the flaws and failings individually in order to break through the faith and get back to the science. This deconstruction of the religion of PKI is extremely costly; but if we are to make headway in returning science to security, it is a cost that each of us must bear.
In the meantime, the losses continue. PKI is the ultimate expression of the false sense of security, and no amount of discussion, paper authoring, and other forms of hand-wringing can ever compensate for or redeem the losses that users have incurred. It can only be stated that this is a task of some urgency, even if we as a body scientific have dispatched the responsibility elsewhere.
If PKI is an amphigory, it is one of elegance.
It has persisted since the 1970s, which makes it substantially older than the Internet. It continues to be the default security statement for many organisations and people. Only the deepest analysis and the most persistent of criticisms can unearth the flawed assumptions within, and even when laid bare, the structure still impresses with its apparent solidity.
Indeed, it seems a rite of passage for the serious security researcher to write a paper with a title such as "Improving PKI..." Never in the field of security research has so much been written by so many, to be read by so few.
"Why so long?" remains an open question for further research in security: why was such an artifice so persistent in the face of a continual lack of field evidence and the reasonable doubts cast by notable practitioners, as listed in the credits above?
 PKI Page
 Originally, this list derived from a post by Ian Grigg, 04 Dec 2000.
 By way of disclosure, I do exactly that, in my work as auditor of an open CA.
 Carl Ellison, Quotes, http://world.std.com/~cme/html/quotes.html
 Jane Kaufman Winn, Couriers without Luggage.
 Anne & Lynn Wheeler's Assorted Writings include many references to the search for the revenue model.
 ArticSoft, Ten things I wish they warned me about PKI. eBCVG.
 Quoted in Ellison, op cit.
 Gervase Markham, GoDaddy's $1000 "Warranty" blog entry 17th May 2005.
 Adi Shamir Turing Lecture on Cryptology: A Status Report, and also summarised here: blog entry 26th May 2004.
 ArticSoft, op cit.
 US Government Accounting Office, Report on PKI.
 E-government Unit, New Zealand State Services Commission, International and New Zealand PKI experiences across government, S.E.E. PKI Paper 14.
 Whitfield Diffie and Martin E. Hellman, "New Directions in Cryptography," IEEE Transactions on Information Theory, Vol. IT-22, No. 6, November 1976.
 Kohnfelder, Loren M., Towards a Practical Public-key Cryptosystem, Bachelor's Thesis, 1978, MIT.
 This is somewhat of a restatement of Ellison's 4 conventional PKI wisdoms, found in: Carl Ellison, "Improvements on Conventional PKI Wisdom," 1st Annual PKI Research Workshop, 2002.
 To some extent, PKI serves a minor role in protecting passwords from eavesdropping, yet that role is equally served by passive cryptographic techniques such as anonymous diffie-hellman key exchanges (ADH) or challenge-response techniques.
 An evaluation of the engineering side is depressing. Systems that have been built based on PKI - S/MIME for email and HTTPS for browsing - do not appear strong enough to stop a clever attacker from tricking the user. Breaching the authentication in the human-computer interface (HCI) is so powerful that PKI is easily bypassed, at least as far as secure browsing goes.
 Nick Szabo, Trusted Third Parties Are Security Holes, 2001 - 2004
[Trent] In the cryptographic literature, a trusted third party is known as Trent, and is masculine.
 This can be seen in the absence of a good file-transmission method, a characteristic of store-and-forward networks. Instead, we have FTP, SSH (scp(1)), browser uploads, which assume always-online and client-server, and email attachments which can only be described as a clumsy hack.
 ArticSoft, op cit.
 Ron Rivest, et al. See 4 papers in FC99, and response in FC00, as well as panel. Also R98, op cit.
 Ron Rivest, FC99 panel on revocation.
 Nicholas Bohm, Ian Brown and Brian Gladman, "Electronic commerce: who carries the risk of fraud?" Journal of Information, Law and Technology, October 2000. http://www.fipr.org/WhoCarriesRiskOfFraud.htm
 One pretty good explanation is here: OpenValidation.org. RFC2560 Online Certificate Status Protocol - OCSP June 1999.
 Dan Geer made this point to me in a private conversation in 1997.
 Don Davis, "Compliance Defects in Public-Key Cryptography," Proc. 6th Usenix Security Symp 1996. See also the slides on that page.
 Ian Grigg, "WYTM?" (What's your threat model?), Rants on SSL and Security practice.
 Eric Rescorla, "1.2 The Internet Threat Model," SSL & TLS, Addison Wesley, 2001.
 Anecdotal evidence suggest that where authorities bump up against protest elements, the military and para-military parties use MITM and active attacks routinely. This suggests that active and MITM attacks may be employed only when there is no cost of discovery to the attacker (as opposed to risk).
 Peter Williams, private email, 2005.01.04.
 Voydock and Kent, "Security Mechanisms in High-Level Network Protocols," Computing Surveys, Vol. 15, No. 2, June 1983.
 Simson Garfinkel, Security and usability can be made synergistic, Thesis dissertation, work in progress, 2005 (implied title). Peter Williams, private email 2005.01.04.
 A presentation on the history of PKI: http://184.108.40.206/search?q=cache:DYPK1sH7nIYJ:devgroup.mephist.ru/
 Peter Gutmann, private email 20 May 2003. 'my approach to PKI work could best be described as "gleeful masochism", sort of like playing golf. That is, I know it's crap, but it's fun playing with the technology, the same attitude take by people who rebuild Trabi's for fun.'
 Peter Gutmann, post 18 May 2003 to cryptography list at metzdowd dot com.
 Peter Gutmann, op cit 18 May 2003
 Peter Gutmann, posted on the metzdowd Cryptography list, 31 March 2014.
 Ian Grigg.
 Ian Grigg.
 Nick Szabo, op cit.
 Ellison, Frantz, Ylonen, ... SPKI Theory, RFC2693
 "Can We Eliminate Revocation Lists?", Proceedings of Financial Cryptography 1998.
[W05.6] Lynn Wheeler, Re: More Phishing scams, still no SSL being used... Post, 14th June 2005, Mozilla-crypto.
 Professor Jane Kaufman Winn, Couriers without Luggage.
 JKW, ibid.
 Mark Granovetter, "The Strength of Weak Ties," American Journal of Sociology, 1973, Vol 78 (May): 1360-1380. PDF of paper and an Overview on Granovetter's Theory. Granovetter's diagram is much loved by the capabilities school.
 Quoted in Ellison, op cit.
 Gary Howland Development of an Open and Flexible Payment System , 1996.
[W9.59] Anne & Lynn Wheeler Writings on X9.59 standard .
 Roger Clarke The Fundamental Inadequacies of Conventional Public Key Infrastructure Proc. Conf. ECIS'2001, Bled, Slovenia, 27-29 June 2001
 RFC2693, op cit.
 Carl Ellison, "Improvements on Conventional PKI Wisdom," 1st Annual PKI Research Workshop, 2002.
 Stefan Brands The Identity Corner
 This section is fundamentally the writings of Lynn Wheeler, albeit heavily paraphrased by the paper editor.
 Note that this was also the era of the EU Data Privacy Directive. That Directive pushed for names be removed from various payment card instruments for doing online electronic fund transactions. If the payment card is purely a "something you have" piece of authentication, then it should be possible to perform a transactions without also requiring identification.
 Lynn Wheeler, Assorted posts.
 Bryce 'Zooko' Wilcox, Names: Decentralized, Secure, Human-Meaningful: Choose Two
 Stefan Brands, A Primer on user identification - Part 4 of 4. This makes more sense if the reader starts at Part 1.
 Bohm, op cit.
 CESG, Cloud Cover Trusted Third Party Protection Profile,
 I was first made aware of this bug by the writings of Peter Gutmann but thought it a curiosity (no reference as yet). Discussions on the mozilla-crypto list have highlighted it, and I now see it as a fatal security flaw. It permits unprotected MITMs within the boundary of SSL and the core security implementation.
 CESG, op cit
 A new law article explores this flaw: Steven B. Roosa and Stephen Schultze, " The "Certificate Authority" Trust Model for SSL: A Defective Foundation for Encrypted Web Traffic and a Legal Quagmire," Intellectual Property & Technology Law Journal, Volume 22, Number 11, November 2010.
 Ian Grigg, " Who are you?", and " Security Breach Disclosure is required for the consumer to adjust risk assessment ", Selected rants on SSL.
 Ian Grigg, "WYTM?" (What's your threat model?), Ibid.
 Bohm, op cit
 Nick Szabo, op cit.
 Jane Kaufman Winn, Couriers without Luggage, op cit.
 Ian Grigg, "WYTM?" Ibid.
 Ellison, et al, RFC2693, Ibid.
 Nick Szabo, op cit.
 Ellison & Schneier, "Ten Risks of PKI", an article for Computer Security Journal, v 16, n 1, 2000, pp. 1-7.
 Bruce Schneier, "Why Digital Signatures Are Not Signatures", an essay in Cryptogram, 15th November 2000.
 Marchesini, Smith, Zhao, " Keyjacking: The Surprising Insecurity of Client-side SSL" Computers and Security, 2004.
[W05.07] Anne & Lynn Wheeler, Post to Cryptography, 11th July 2005.
 Anne & Lynn Wheeler, Post Ibid. Edited for clarity.
 Ian Grigg " The Ricardian Contract," First IEEE International Workshop on Electronic Contracting, (WEC) 6th July 2004.
 Bohm, Brown and Gladman, "Electronic commerce: who carries the risk of fraud?" op cit.
 Carl Ellison, op cit.
 Bohm, op cit.
 Jane Kaufman Winn, The Emperor's New Clothes: The Shocking Truth about Digital Signatures and Internet Commerce
 Ian Grigg " The Ricardian Contract," Op cit.
 Here, I refer to the programmer and purchasing body public outside the direct world of security research and primary design. As an example of how large this public collective mind is, it includes today all browser manufacturers, all CAs, all banks, and all large security vendors, none of whom in today's world have (taken on) primary responsibility for repairing the current models or designing new ones. Arguably, ten years ago, three companies were not in that user public: Netscape, RSADSI and Verisign.