Wherein I explain why usability is the first and most important requirement in security. Put it anywhere else, like number 2, and you risk failure. There is no clash between usability and security: only usable security can deliver security, and only delivered security is security.
The most common failure mode of any security protocol is not being used by users, at all.
There have been thousands of attempts at secure protocols in recent Internet times. Many did not get completed, many were completed but were rejected as too hard to use, and many great protocols missed the boat and were swamped by bad protocols. These are therefore all failures; their delivered security is zero. Zip, zilch, nada.
Perfect security, multiplied by zero users, always equals zero security. Try it with any variation of zero you like, and any grade of security. Count up as many security projects as you like, and look at the very strong correlation: security perfectly reaches zero, under all known forms of mathematics, if it has zero users.
Only a delivered protocol that protects and ships packets for actual, warm, live, talkative users can deliver security. A good protocol with some gaping holes will always outperform a perfect protocol that remains undelivered, in security terms. A good protocol in widespread use will generally outperform a better protocol that is poorly used.
Again, simple mathematics tells us why: a perfect protocol that protects one person perfectly is still limited to that one person. The mathematics of security says that is a one. If you reduce your protocol's theoretical security from 100% to 99% and gain ten users, your delivered security to those ten users reaches 9.9. Approximately. If you reduce it to 98% but gain 100 users, your delivered security reaches 98.
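As a back-of-the-envelope sketch of that arithmetic, and nothing more, here is the toy calculation in Python; the protocol labels and figures are invented for illustration, echoing the 99% and 98% examples above.

    # Delivered security as a crude product: per-user strength times users reached.
    # All figures are invented for illustration.
    def delivered_security(strength: float, users: int) -> float:
        """Total security actually delivered, summed across all users."""
        return strength * users

    candidates = [
        ("perfect, never shipped", 1.00, 0),
        ("perfect, one user", 1.00, 1),
        ("99% strong, ten users", 0.99, 10),
        ("98% strong, a hundred users", 0.98, 100),
    ]

    for name, strength, users in candidates:
        print(f"{name}: {delivered_security(strength, users):g}")

The only point of the toy model is that the second factor dominates: small losses in theoretical strength are swamped by gains in the number of users reached.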
Security is as delivered to users, and is summed across them. Therefore it rises almost in direct proportion to the number of users. By far the biggest determinant of security is then the number of users you can gain. Consider that first and foremost.
Ease of use is the most important determinant of the number of users. Ease of implementation is important, ease of incorporation is also important, and even more important is ease of use by end-users. This reflects a natural subdivision into several classes of users: implementors, integrators and end-users, each of which can halt the use of the protocol if they find it ... unusable. As they are laid out serially between you and the marketplace, you have to consider usability for all of them.
The protocol should be designed to be easy to code up, so as to help implementors help integrators help users. It should be designed to be easy to interface to, so as to help integrators help users. It should be designed to be easy to configure, so as to help users get security.
If there are any complex or tricky features, ask yourself whether the benefit is really worth the cost in coders' time. It is not that developers cannot do it; it is simply that they will not do it. Nobody has all the time in the world, and a protocol that takes twice as long to implement is twice as likely to not get done.
Same for integrators of systems. If the complexity of the protocol and its implementation causes X amount of work, and another protocol costs only X/2, then there is a big temptation to switch. Regardless of absolute or theoretical security.
Same for users.
Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.
Margaret Mead
Simplicity is proportional to the inverse of the number of designers. Or is it that complexity is proportional to the square of the number of designers?
Sad but true, if you look at the classic best-of-breed protocols like SSH and PGP, they delivered their best results when one person designed them. Even SSL was mostly secure to begin with, and it was only the introduction of PKI with its committees, world-scale identity models, digital signature laws, accountants and lawyers that sent it into orbit around Pluto. Committee-designed monsters such as IPsec and DNSSEC aren't even in the running.
Sometimes a protocol can survive a team of two, but we are taking huge risks (remember, the biggest failure mode of all is failing to deliver anything). Either compromise with your co-designer quickly or kill him. Your users will thank you for either choice; they do not benefit if you are locked in a deadly embrace over the sublime but pernickety benefits of MAC-then-encrypt over encrypt-then-MAC, or CBC versus counter mode, or or or...
.... This is a typical committee effect. Committees are notorious for adding features, options, and additional flexibility to satisfy various factions within the committee. As we all know, this additional complexity and bloat is seriously detrimental to a normal (functional) standard. However, it has a devastating effect on a security standard....
Lesson 1: Cryptographic protocols should not be developed by a committee.
Niels Ferguson and Bruce Schneier [1].
It should be clear by now that committees are totally out of the question, as ho-hum is incompatible with security [2]. They are like whirlpools, great spiralling sinks of talent, so paddle as fast as possible in the other direction.
On the other hand, if you are having trouble shrinking your team or agreeing with them, a committee over yonder can be useful as a face-saving idea. Point them in the direction of the whirlpool, give them a nudge, and then get back to work.
In contrast to the general practice of the world of deep software engineering and cryptography, you have to think as if you are a user.
Follow Kerckhoffs' principles, in the order 6, 5, 4, 3, 2, 1. Hang his words on your wall, and tattoo his 6th Principle on your forehead. Indeed, #4.2 above is simply the logic behind the famous 6th. He says it much better than I can, but he said it in French.
Usability should not be sacrificed. In practice, out in the field, the sacrifice of usability will cause a user to shift to a less secure means. Holistically, this means you have failed to secure the user's traffic. Ideally, the protocol should be easier to use in its secure form than any alternative, because convenience is the most important direct metric.
Always use totally unrestricted components. It may not matter to you, but many users have real problems when it comes to accessing non-free algorithms. If you include a non-free algorithm, not only are you annoying the cryptogreenies, but you are also asking anyone who distributes the algorithm to get a licence or permission or sanction. Even if you are contracted to a company that has no view on this, or likes the idea of playing with the big boys and buying licences and so forth, they too will eventually get sick of the constant drain of contract negotiations, licence restrictions and all the other hooks that an algorithm supplier sticks in.
As your end-users need to get a distro from somewhere, and as the crypto protocol is probably a component built into an application, most users will be facing two (2) distro layers before they get back to you, if not more. Which means you are probably asking three (3) different groups to run the gauntlet in order to use the protocol. The mathematics are against it, even before we get into the economics.
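To sketch that mathematics, under one simplifying assumption of mine rather than anything stated above: suppose each group in the chain independently agrees to carry a restricted component with some probability.

    # Hypothetical model: implementor, integrator and application distributor
    # must each accept the licensing hassle before the end-user ever sees the
    # protocol. If each does so independently with probability p, the
    # end-to-end chance is p cubed.
    def chance_protocol_reaches_user(p_per_group: float, groups: int = 3) -> float:
        return p_per_group ** groups

    for p in (0.9, 0.7, 0.5):
        print(f"per-group acceptance {p:.0%} -> end-to-end {chance_protocol_reaches_user(p):.1%}")

Even when every group is moderately sympathetic, the chances compound into a poor probability of the protocol ever reaching the end-user.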
[1] Niels Ferguson and Bruce Schneier, "A Cryptographic Evaluation of IPsec," 2003.
[2] Maarten van Emden cites Fred Brooks:
"In one of the most cited papers in all of computing, F.P. Brooks [9] contrasts programming systems or languages that have fanatical adherents to a bunch of ho-hum items that, though perhaps useful, do not. He notes that the former were created by individuals, the latter by committees. By implication he suggests that committee-designed artefacts are necessarily in the ho-hum category. Brooks identifies the distinguishing criterion as conceptual integrity. Brooks places Algol 60 in the latter category, because committee-designed things supposedly necessarily lack conceptual integrity." ("How recursion got into programming: a comedy of errors," 2003)