Abstract: What is security?
Collected random thoughts and models.
Note that the bulk of the content was moved into The Market for Silver Bullets at revision 1.17.
In a recent thread, Adam Shostack asks the question: what are good signals in the market for security [AS1] [AS2] [IG1]? In addressing this question, we find ourselves drawn to the question of what is security?
The more successful your security investments, the less visible and less measurable your results [AS].
This post provides an interpretation of what we are observing in security goods, within the framework of the OODA loop: http://www.d-n-i.net/fcs/comments/c536.htm
Observation has initially failed to reward observers, so alternate strategies are formed within Orientation. As there is insufficient feedback in the loop, the Orientation becomes more and more independent, until it is no longer capable of dealing with Observations. That is, those Observations that are in accord with the Orientation are accepted and trumpeted, and those against are downgraded and discarded. (Those that are ambiguous may be misinterpreted!)
Response. Such a dilemma defies an improving strategy. Once an Orientation has settled and reached equilibrium, it is self-preserving. In breaking herding, we sought to reduce the costs that held the equilibrium in place. But no such relationship is evident in the above inside-one's-own-Loop dilemma.
One night's Random scribblings
Define two security scenarios.
First, what we will call the Barings event. This is a totally destructive attack that happens one time only. That is, the event happens once (n == 1) in the lifetime of the system and the cost (C) exceeds the value of the system (V).
n = 1
C = V
p =~ 0.0
cost per attack = X * V
Probability p is low, somewhere around the noise level.
Is this right? For real Barings, need some small n, and need to show that p*c*n has a high variance around a mean that takes it into areas greater than V. But maybe that's just the point...
(Take X to be some factor of cost per attack, which would be greater than one for higher support costs, and less than one for better recovery rates.)
Now consider what we will call the Visa event. This is a small attack that destroys approximately the value of the transaction. There are many transactions, too many to count. Each loss is offset by a small tax placed on every non-loss transaction, equal to the aggregate loss divided by the population of safe transactions. This aggregate is calculated as the per event loss (c) plus some sunk cost covering the investment required to deal with that particular threat (S).
n = high
c = 1
S = high
p = 0.01
X =~ 1
V = n * c / p
(From here on in, we can assume Visa's X to be one.)
Assuming a probability p == 1%, quick calculations reveal that the tax on each successful transaction is approximately 1% (slightly more than 1%, to cover S and other issues). We can increase n arbitrarily, but because p is spread over a large population, it is somewhat sticky at its equilibrium value.
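As a quick check in code - a minimal sketch, where the concrete values of n and S are illustrative assumptions rather than part of the model:

    # Visa-model tax. Parameter values are illustrative; S is high in
    # absolute terms but spread thinly across n transactions.
    n = 1_000_000     # transactions (high)
    c = 1.0           # loss per bad transaction
    p = 0.01          # probability a transaction is bad
    S = 2_000.0       # sunk cost of dealing with this threat (assumed)

    aggregate_loss = p * n * c     # losses from bad transactions
    safe = (1 - p) * n             # population of safe transactions
    tax = (aggregate_loss + S) / safe

    print(f"tax per safe transaction: {tax:.4f}")   # ~0.0121, slightly over 1%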
The first observation is on stability. The second system, model Visa, is stable. There is no avenue (in the model at least) that permits it to fail. In contrast, the Barings system is not stable. Depending on the probability and the march of time, we are looking at a future extending out to some indeterminate point, when the value collapses. A singularity, in other words.
These two are the same observation?
Secondly, model Visa is more or less extensible in all directions. We can increase its parameters, or we can decrease them, and no real change will occur to the model. Certainly the beneficiaries or victims of the model will notice a change (prices go up or down), but that change will be at the margin.
In contrast, change one element by one bit in model Barings and dramatic changes are observed. If n is increased to two, what happens? Does the cost function go up or down? If n is decreased to zero, where are we? Unlike model Visa we are in scary territory; touch one number and it gets scarier.
The third observation says why it is so scary. In model Visa we have huge bases of data to work from. For a large number of transactions n, we still have a large number of attacks n * p. But we can precisely calculate the tax t in order to cover those losses. The more the numbers grow, the greater our stability.
Yet in model Barings we have a situation of zero events, with one yet to come about which we have zero information. Or, we have one event, for which masses of information are available, but to no avail because the company is bankrupt! Or, somewhat more likely, we have a small and increasing number of events leading to greater amounts of information, yet each brings us closer to that terminal event where c * n > V.
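A small simulation makes the contrast concrete, by estimating the attack rate from observed history. The true p and the sample sizes are assumed, chosen only to show how the estimate tightens as n grows:

    import random

    # Estimating the attack probability p from observed history.
    # The true p below is an assumed figure for illustration.
    random.seed(1)
    true_p = 0.01

    for n in (10, 1_000, 100_000):
        attacks = sum(random.random() < true_p for _ in range(n))
        print(f"n={n:>7}: {attacks:>5} attacks, estimated p = {attacks / n:.4f}")

    # Model Visa (large n) pins p down, so the tax can be set precisely.
    # Model Barings (n of 0 or 1) yields almost no information at all.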
The fourth observation concentrates on the edge case. Consider p == 0 such that no event can happen. Hence n == 0. Yet the equation still delivers cost S. That is, if we protect against the threat, and the threat never happens, we are incurring costs with no benefit.
Even if we look at n == 1 and the event that costs us the company, we are facing a situation where the cost is so high that a rational calculation of S is difficult. It may be that when n == 1 and this is the terminal event, the company is better off not investing S into the defence of that attack as the payoff is too low.
In other words, there are some events which it should be considered acceptable not to invest against, even when there is a non-zero probability of them occurring. Let the stockmarket do that part.
The trick is to show that in numbers.
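For instance - a minimal sketch, all figures assumed - compare the certain cost S against the expected loss of leaving the threat uncovered, remembering that the loss is capped at V because one cannot lose more than the company:

    # Invest-or-not for a Barings-class event. All figures are assumed.
    V = 100_000_000.0   # value of the company
    p = 0.0001          # annual probability of the terminal event (noise level)
    S = 50_000.0        # annual cost of defending against it

    expected_loss = p * V   # capped at V: you cannot lose more than the company
    print(f"expected loss without defence: {expected_loss:,.0f}")   # 10,000
    print(f"cost of defence:               {S:,.0f}")               # 50,000
    print("invest" if S < expected_loss else "don't invest - let the stockmarket carry it")

With these numbers the rational answer is not to invest, even though the threat is terminal.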
If we have n == 0 then we have no information about our attacks. Do they exist? Do they not? Are we safe? This is perhaps only answerable by deduction.
If n == 1 we can at least estimate our security, but the result is hardly useful. Is our situation indistinguishable from n == 0, and are we just a Barings event away from total loss? We still do not have the answer to that question.
Yet, if n == 2 we have more information. If we survived the second event, we are better off than beforehand, that is, when n == 1. As n grows, more and more information allows us to better tax and better invest to protect. So it seems that unless one is a gambler, the stable scenario is towards model Visa and model Barings has little to offer for itself.
The challenge then is to get to grips with the observation that to be secure, we have to permit the very event we are securing against to happen. Here's how Max Levchin, founder of PayPal, put it [ML]:
The advantage that PayPal has [over a competitor] is an enormous (at this point) amount of data: good transactions, bad transactions, and variables to describe those, and the only way you can get that data (which has to be specific to your particular type of transactions, etc) is by letting bad transactions go through your system and learning from that. And that hurts!
Certainly as far as this model is concerned, information increases with n. And, except for the uncertainties surrounding the very small numbers, it is clear that as n increases, costs reduce and information increases.
Huh. That's not right. What changes for the small numbers is the variance of the cost?
Theory. To increase security, increase n and decrease the size of each event, at least to the point where c * p * n is statistically stable and less than V.
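A Monte Carlo sketch of the theory: hold the expected loss constant, split it across more and smaller events, and watch the total cost stabilise below V. The event counts, sizes, and probabilities are all assumed for illustration:

    import random, statistics

    # Same expected loss, split into more and smaller events. As n rises
    # and event size falls, total cost becomes statistically stable (low
    # variance) and stays below V. All numbers are illustrative.
    random.seed(2)
    V = 1_000.0
    expected_loss = 100.0   # held constant across scenarios
    p = 0.01                # probability any single exposure is hit

    for n in (1, 100, 10_000):          # number of independent exposures
        size = expected_loss / (p * n)  # cost per event shrinks as n grows
        totals = [sum(size for _ in range(n) if random.random() < p)
                  for _ in range(1_000)]
        print(f"n={n:>6} size={size:>9,.2f} mean={statistics.mean(totals):7.1f} "
              f"stdev={statistics.pstdev(totals):7.1f} max={max(totals):9.1f}")

At n == 1 the single event dwarfs V when it hits; at large n the total loss hugs its mean and never approaches V.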
An alternate result from the above insight would be to predict that goods that bury or hide threats would perform poorly, whereas goods that surface and amplify threats would perform well. E.g., according to this prediction, firewalls would perform badly as they block threats and hide the results in logs that never get read. Honeypots might perform well as they permit threats to grow and develop, and presumably encourage owners to monitor and learn from the behaviour of the entrapped attackers [x2].
Insight..... how often is a security product breached?
NOT OFTEN ENOUGH... otherwise breaches would become routine and be dealt with on a regular basis - built into the risk models, measurable, a mere nuisance factor...
So the game is to increase their frequency?
Another night's Random scribblings
Consider two goods. One has a known security model, and a known attack model. It is risk-based, and thus falls to certain high-cost attacks, yet defeats most low-cost attacks. Let's call this one R for risk(y).
The second good lacks any known weaknesses, and apparently defeats all known attacks. We'll call this one P for perfect. Initially, let's assume that we don't have anything else to go on. That is, there is no simple test that can be conducted to differentiate the products further (we'll change this later on).
If P is significantly cheaper than R then we would probably go for it. Alternatively, if R were significantly cheaper than P (and assuming the risk model presented was acceptable), we might happily stick with R. These choices seem to represent fair decisions on the basis of uncertainty, and some measure of the term "significantly cheaper."
If the purchase price of the two is the same, then the rational buyer faces a non-trivial decision. R has some weaknesses that are costly, but on the other hand it purports to have presented a fair and open security analysis. P claims no weaknesses, but this leaves open the possibility that the security analysis is perhaps weak.
Given our uncertainty, we have a probability of making the wrong decision. In the simplest terms, picking the cheapest makes sense. That is, if our intuition is that the two goods are equivalent, regardless of the claims, then we go for the cheapest.
The question then arises whether we can optimise our choice between the two alternatives.
How much of an uncertainty does a perfect record present? And how much more of a certainty does an honest listing of flaws present? What does it mean to compare a good with no apparent weaknesses ... with a good with listed weaknesses?
A good with known weaknesses is relatively straightforward (assuming independence of weaknesses). Let's call E the sum of expected future costs from all events:
E = Σ_{i=1..n} e_i * p_i
where e_i is the cost of event i, p_i is its probability, and n is the number of events.
A good with no known weaknesses will set all p to zero, yet it also brings forth the possibility that the analysis is flawed. Is the salesman lying? Has he been given false information? Is his understanding of buyer's needs weak? Let's call U the sum of unexpected costs from all weaknesses:
U = Σ_{i=1..n} e_i * (p_l + p'_i) * p_i
where the claimed probability p_l is zero by definition, p'_i is the probability that this claim is a lie, and the real probability of the event is p_i, as before.
If information is presented, the buyer can make a rational decision based on that information. But if no information is presented, the buyer cannot decide (rationally). A perfect security good presents information (it is perfect) that is indistinguishable from there being no information available, raising the possibility of a high p'.
A risky good involves expenses at a later time (expected costs E incurred during usage). A perfect security good in contrast is expensive because it involves an unknown risk (unexpected costs U incurred during usage), and it involves an additional expense up front in doing the self-analysis S needed to determine its weaknesses (that is, testing its perfection and limiting the doubt) which we've already assumed as being difficult and thus expensive.
cost(P) = S + Σ_{i=1..n} e_i * p'_i * p_i

or, spreading the self-analysis cost S across the events as s_i:

cost(P) = Σ_{i=1..n} (s_i + e_i * p'_i * p_i)
We can speculate that for a simple good, where the sum runs over a single event, p' is likely well below one and self-analysis is likely cheap, so (1-p')*e*p will likely be greater than s. That is, the cost of checking out the good is less than the unexpected risk from the supplier not having done the analysis. The good P then comes out ahead.
Yet p' is probably somewhere above zero for every event and every good, as uncertainty is generally pervasive rather than particular. Likewise, self-analysis is a function of complexity, and is related to n, the size of the set of events. The more complex the good, and the higher the n, the higher the initial costs of self-analysis. As the self-analysis is incurred as a cost up front, and the savings from any perfect benefit are discounted into the future, we will reach a point as complexity increases where the cost of self-analysis exceeds the combined benefits of the probability reductions in events.
Then, we can say that for a good exceeding some complexity fulcrum, the cost of P exceeds its competitor R through the uncertainty factor.
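A minimal sketch of that fulcrum. The functional forms here are assumptions of mine, not the model's: self-analysis is taken to grow quadratically with n (the analyst must consider interactions between components), and future costs carry a flat discount factor:

    # Where does cost(P) overtake cost(R)? Illustrative assumptions:
    #  - every event has the same e and p,
    #  - self-analysis considers interactions, so it grows as n**2,
    #  - future expected costs are discounted by a flat factor d,
    #  - p' (chance the "no weaknesses" claim is false) is flat across events.
    e, p = 1_000.0, 0.01   # cost and real probability of each event
    p_lie = 0.3            # p': probability the perfection claim is a lie
    s = 0.05               # per-interaction self-analysis cost (assumed)
    d = 0.8                # discount factor on future costs

    for n in (10, 100, 200, 1_000):
        cost_R = d * n * e * p                     # E: disclosed expected costs
        cost_P = s * n**2 + d * n * e * p_lie * p  # self-analysis up front, plus U
        print(f"n={n:>5}: cost(R)={cost_R:>8,.0f}  cost(P)={cost_P:>8,.0f}"
              f"  -> {'P' if cost_P < cost_R else 'R'}")

With these assumed numbers the crossover lands between n of 100 and 200; the exact location moves with the assumptions, but the shape - linear expected costs against superlinear analysis costs - is what produces the fulcrum.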
This model may shed light on why SSH is so much cheaper than SSL. Both of them are equally complex, and both of them perform similar and highly comparable tasks. The former is R, and clearly describes its weaknesses. Prime amongst these is the man-in-the-middle attack, which occurs when an attacker slides into the middle and plays both ends. SSH is vulnerable to this, and makes no bones about it.
SSL, on the other hand, makes a great play of its complete security model. When employed correctly, SSL advertises no known security weaknesses. This has inspired countless security researchers to investigate its model, and weaknesses have been found. These weaknesses are more or less as severe as the weakness disclosed by SSH, but what is striking is that the perfection strived for by SSL increases the cost to serious purchasers of analysing for themselves just how secure the product is.
Which leads to a plausible direction in security signalling - the system that clearly describes its weaknesses is likely to be cheaper, at least if it is complex enough to raise significant costs in buyer analysis.
Some lessons may be found in the school of open source, where open means open availability in the tradition of the Internet. There are then no property rights, no confidentiality barriers, and fewer limits from commercially vested interests. This may mean that parties can more easily breach the barrier of information in security goods.
Indeed, open source has done well in security. In cryptography especially, it has dominated: relatively few successes in closed source cryptography exist, and there are many stories of humiliation [OS]. Future directions would include the correlation of security success and open source, and the effect of openness on the efficiency of information.
[AS1] Adam Shostack, "Ratty Signals," Emergent Chaos blog, 01 Jan 2005.
[AS2] Adam Shostack, "Ratty Signals," Emergent Chaos blog, 01 Jan 2005.
[IG1] Ian Grigg, "Security Signalling - the market for Lemmings," Financial Cryptography blog, 02 Jan 2005.
[AS] Lawrence A. Gordon and Robert Richardson, "InfoSec Economics," Security Pipeline, 15th April 2004.
[ML] Andrew Orlowski, "PayPal founder on Google's Wallet," The Register, 24th June 2005.
[xx] Tony Vila, Rachel Greenstadt, David Molnar, "Why Can't We be Bothered to Read Privacy Policies," 2nd Economics & Security Workshop.
[x2] "Linux servers safer than ever," http://www.honeynet.org/papers/index.html
[OS] RC4 was reverse-engineered and released as ARC4 by an anonymous poster. The DVD security was cracked by a teenager who disassembled the program so as to decrypt and play movies on his Linux machine. GSM's closed system was cracked by probing the SIMs in order to reveal bits of information. After 3 months of probing, the secret algorithm was cracked in an afternoon by grad students.