[ietf-dkim] Introducing myself
jon at callas.org
Tue Oct 31 01:26:57 PST 2006
On 30 Oct 2006, at 1:42 PM, Charles Lindsey wrote:
I'm going to cherry-pick a few things, particularly the ones I think
I'm best suited to answer.
>> 3.2 Tag=Value Lists
>> INFORMATIVE IMPLEMENTATION NOTE: Although the "plain text" defined
>> below (as "tag-value") only includes 7-bit characters, an
>> implementation that wished to anticipate future standards would be
>> advised to not preclude the use of UTF8-encoded text in tag=value
>> lists.
> Those future standards are nearer than you think. The currently active
> ietf-eai WG is charged with producing an experimental protocol for
> headers in UTF-8. Would it not be wiser to make support for arbitrary
> octets (except those essential for parsing such as ";", CR, LF, etc) a
> MUST accept right from the start?
No, because we want DKIM to work with 7-bit-clean mail. We want
broad, fast deployment and that means working with old systems.
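As an aside, the tag=value syntax in question is simple enough to sketch. A hypothetical parser (the function name and behavior around folding whitespace are mine, not from any DKIM implementation) might look like:

```python
# Illustrative sketch: parsing a DKIM tag=value list, such as the
# content of a DKIM-Signature header field. Not from any real library.

def parse_tag_list(text):
    """Split "tag=value; tag=value" into a dict, tolerating the
    folding whitespace that mail transports insert into long headers."""
    tags = {}
    for part in text.split(";"):
        part = part.strip()
        if not part:
            continue  # ignore an empty trailing element after a final ";"
        name, _, value = part.partition("=")  # split on the FIRST "="
        # Folding whitespace (CR, LF, space, tab) is not part of the value.
        tags[name.strip()] = "".join(value.split())
    return tags

sig = ("v=1; a=rsa-sha256; d=callas.org; s=test;\r\n\t"
      "h=from:to:subject; bh=abc=; b=xyz=")
parsed = parse_tag_list(sig)
```

Note that `partition("=")` splits on the first equals sign only, so base64 padding in values like `bh=abc=` survives intact.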
>> 3.3.3 Other algorithms
> Presumably there is nothing to prevent allowing PGP as the signing
> algorithm in the future, if someone makes out a good case for it.
As Eliot has noted, PGP isn't a signing algorithm. PGP is a signing
protocol. Actually, to be even more correct, OpenPGP is a signing
protocol. PGP is software.
Nonetheless, DKIM is specifically designed to be orthogonal to
OpenPGP, S/MIME, or anything else. If you want to sign the content of
a message, they're appropriate for it.
>> 3.5 The DKIM-Signature header field
> Although your charter forbids you from discussing non-repudiation,
> authorization, and other matters not strictly relevant for DKIM, it is
> to be envisaged that other applications will arise from time to time
> requiring signatures over headers, and it would be unfortunate if each
> such application had to invent Yet-Another-Signing-Protocol when an
> adaptation of what you have written would have sufficed. There are
> too many only-slightly-different-wheels in existence for us to be
> inventing any more. Surely, a facility for signing headers should be
> described as a tool which can then be used for various applications in
> future, of which DKIM would be just the first? So why was this
> approach not taken?
> In fact, you almost made it. The only features which might make it
> unsuitable for future applications that I can see are the appearance
> of "DKIM" in your newly invented "DKIM-Signature" header (it rather
> needs an 'application' tag in the signature to indicate why the
> signature was made), and the insistence that the d= and s= tags, which
> together identify the owner of the key, should be syntactically of the
> form of domain-names (which might be totally inappropriate for those
> other applications, though it should clearly be required when the
> application is DKIM).
> Can the various tags appear in this header in any order? OTOH, why is
> there not an insistence that the b= tag should come last (since it has
> to be easily joined to and separated from the rest)?
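For what it's worth, the reason b= is convenient at the end is that a verifier removes the b= value from the DKIM-Signature header before hashing it, while the tag itself stays. A loose sketch of that step (the regex and function name are illustrative, and it assumes b= is not the very first tag):

```python
import re

# Illustrative sketch: before hashing a DKIM-Signature header field for
# verification, the VALUE of the b= tag is treated as empty. The tag
# itself remains in place.

def blank_b_tag(header_value):
    """Remove the signature data from the b= tag, leaving the tag."""
    # "(;\s*b=)" keeps the delimiter and tag; "[^;]*" drops the value.
    return re.sub(r"(;\s*b=)[^;]*", r"\1", header_value)

sig = "v=1; a=rsa-sha256; d=example.com; s=sel; h=from:to; b=SGVsbG8="
stripped = blank_b_tag(sig)
```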
>> h= Signed header fields
> Why MUST NOT this list be empty? Suppose you want to sign the body,
> but not any headers? Unusual, but perhaps sensible for some
> application. No interoperability problem arises.
Because you have to sign at least one header. Think of DKIM as a
header-signing system. It signs the body, too, but that's a means to an
end. At the risk of using a postal metaphor (since email is a
surprisingly different beast from postal mail), DKIM is an integrity
protocol for the envelope, not for the letter. Mechanically, it has to
sign the body, but again, that's not the goal.
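To make the header-signing idea concrete, here is a hypothetical sketch of how a verifier might pick out the header instances named in an h= list. When a name repeats, instances are taken from the bottom of the header block upward, and names match case-insensitively. Function and variable names are mine, not from any DKIM implementation:

```python
# Illustrative sketch: selecting the header instances covered by an
# h= tag, bottom-most instance first for repeated names.

def select_signed_headers(headers, h_list):
    """headers: list of (name, value) pairs in message order.
    h_list: the colon-separated h= tag value, e.g. "from:to:subject"."""
    remaining = list(headers)
    selected = []
    for wanted in h_list.split(":"):
        wanted = wanted.strip().lower()
        # scan from the bottom for the last not-yet-used instance
        for i in range(len(remaining) - 1, -1, -1):
            if remaining[i][0].lower() == wanted:
                selected.append(remaining.pop(i))
                break
    return selected

msg = [("From", "a@example.com"), ("To", "b@example.com"),
       ("Subject", "one"), ("Subject", "two")]
picked = select_signed_headers(msg, "from:subject:subject")
```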
>> i= Identity of the user or agent
> Must this tag, if a <local-part> is present, be a valid working email
> address?
No, it can be anything you want. It's really a note from the signing
domain to itself. Here's a scenario. You ring me up and tell me that
one of my users is misbehaving in email. You show me the email. I use
the i= to know whose knuckles to rap. But I may do so in a way that's
completely opaque to you.
>> l= Body length count
>> INFORMATIVE IMPLEMENTATION WARNING:
> I am very suspicious of the propriety of suggesting, in any IETF
> standard, that it is legitimate to remove text from a message being
> conveyed (certainly without the consent of the recipient). Surely
> marking it in blood-red ink, or warnings in 32pt characters, is as
> far as one should go?
The point of body lengths is that many systems add text to the end of
a message. In fact, the list server that is delivering this to you is
doing precisely that.
If the signing server (callas.org) sends it to the mailing list
(mipassoc.org), which then sends it to you, how do you know what the
mailing list added? That is what the length tells you.
The length is the opposite of what you think it is. It is an explicit
declaration by the responsible domain of what it is responsible for.
If a spammer adds stuff at the end, it should be removed. The length is
saying that text that the author did not put there is not the author's
text and thus may be removed. (I'm playing fast and loose a bit here,
because it's not the author, it's the author's exit mail server.)
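The mailing-list scenario is easy to demonstrate. A hypothetical sketch (function and variable names are mine) of checking a sha256 body hash over only the first l octets of the body, so appended text does not break verification:

```python
import base64
import hashlib

# Illustrative sketch: with an l= tag, the verifier hashes only the
# first l octets of the (canonicalized) body, so a mailing-list footer
# appended after signing leaves the signature intact.

def body_hash_ok(body, l, expected_bh):
    """Hash the first l octets of body and compare to the bh= value."""
    digest = hashlib.sha256(body[:l]).digest()
    return base64.b64encode(digest).decode() == expected_bh

original = b"Hello, world\r\n"
bh = base64.b64encode(hashlib.sha256(original).digest()).decode()
extended = original + b"-- \r\nlist footer added in transit\r\n"
# still verifies over the first len(original) octets, despite the footer
ok = body_hash_ok(extended, len(original), bh)
```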
>> q= A colon-separated list of query methods used to retrieve the
>> public key
> Clearly, the use of DNS or some similar global database is the only
> sensible PKI that is workable for DKIM. But am I right in saying that
> this tag does not preclude the use of other PKIs for other
> applications (e.g. attached certificates, web-of-trust, private
> agreements between the communicating parties, etc.)?
A couple of comments on this one.
DKIM is not a PKI. It has no trust model. It is a key-centric system.
There's nothing wrong with embodying those keys in certificates or
even gaffer tape, but that's not part of DKIM. However, yes, you are
right, it doesn't preclude using anything else.
Second, PKIs are not distribution mechanisms. DNS is a distribution
mechanism. Other possible mechanisms include LDAP, HTTP, FTP, Finger,
and so on.
> Why MUST signers support "dns/ext" (clearly, verifiers MUST)? Surely a
> signer who, as a matter of policy, always chooses to use some other
> method, is not obliged to implement something he is never going to
> use?
Then they're not doing DKIM. If you're doing DKIM, you MUST do DNS.
Conceivably, you might implement other mechanisms, too, but DNS is
the one distribution mechanism everyone has to do.
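To illustrate the shape of the DNS method: the public key lives in a TXT record at <selector>._domainkey.<domain>, and that record is itself a tag=value list. The actual DNS lookup is elided below (it would need a resolver library); the function names and the sample record are illustrative only:

```python
# Illustrative sketch: constructing the DNS query name for a DKIM key
# and parsing the tag=value list found in the TXT record.

def key_record_name(selector, domain):
    """The conventional location of a DKIM public key in the DNS."""
    return "%s._domainkey.%s" % (selector, domain)

def parse_key_record(txt):
    """Parse the tag=value list in a key TXT record."""
    tags = {}
    for part in txt.split(";"):
        if "=" in part:
            name, _, value = part.partition("=")
            tags[name.strip()] = value.strip()
    return tags

name = key_record_name("sel", "example.com")
# a truncated, made-up sample record for illustration
record = parse_key_record("v=DKIM1; k=rsa; p=MIGfMA0GCSq...")
```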
>> t= Signature Timestamp
>> ... The format is the number of seconds
>> since 00:00:00 on January 1, 1970 in the UTC time zone. ...
> Strictly speaking not true, since the usual UNIX algorithm for
> computing this quantity takes no account of leap seconds. I presume
> this is all laid down in POSIX somewhere.
> And expecting this to work up to AD 200,000 seems an overkill (though
> beyond 2038 would be helpful).
This is a definition of the timestamp's value. It has nothing to do
with Unix.
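Concretely, the definition is "seconds since the Epoch" in the POSIX sense, which does ignore leap seconds. A small illustrative sketch (the function name is mine):

```python
import calendar
import time

# Illustrative sketch: a t= signature timestamp is seconds since
# 1970-01-01T00:00:00Z, leap seconds ignored, as POSIX defines
# "seconds since the Epoch".

def signature_timestamp(utc_tuple=None):
    """Return a t= value for a UTC time tuple (y, mo, d, h, mi, s),
    or for the current moment when no tuple is given."""
    if utc_tuple is None:
        return int(time.time())
    return calendar.timegm(utc_tuple)

# e.g. the date stamp on this message as a t= value
t = signature_timestamp((2006, 10, 31, 9, 26, 57))
```

Since the value is just an integer with no fixed width, there is no 2038 rollover in the text representation itself; that limit only bites fixed 32-bit implementations.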
>> 3.6.1 Textual Representation
>> h= Acceptable hash algorithms
>> ... Signers and Verifiers MUST
>> support the "sha256" hash algorithm. Verifiers MUST also support
>> the "sha1" hash algorithm.
> Why MUST signers support the "sha256" hash algorithm (clearly,
> verifiers MUST)? Surely a signer who, as a matter of policy, always
> chooses to use sha-1 is not obliged to implement something he is
> never going to use?
It's for interoperability.
Here's what's going on there. SHA-1 is broken. It isn't so broken
that we're going to demand that no one use it, but it's broken. We
*want* you to use SHA-256. However, we are making a concession to
people who have some need or desire to use SHA-1 and making sure it
will all still work.
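In code terms the concession is tiny. A hypothetical sketch of computing a bh= body hash under either algorithm (names are mine, not from any DKIM library):

```python
import base64
import hashlib

# Illustrative sketch: the bh= value is a base64-encoded digest of the
# canonicalized body. "sha256" is mandatory for everyone; verifiers
# additionally accept "sha1" as a concession to older signers.

def body_hash(canonical_body, algorithm="sha256"):
    """Compute a bh= value with the named hash algorithm."""
    h = hashlib.new(algorithm)
    h.update(canonical_body)
    return base64.b64encode(h.digest()).decode()

bh256 = body_hash(b"Hello, world\r\n")
bh1 = body_hash(b"Hello, world\r\n", "sha1")
```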
>> k= Key type (plain-text; OPTIONAL, default is "rsa"). Signers and
>> verifiers MUST support the "rsa" key type.
> Why MUST signers support the "rsa" key type (clearly, verifiers MUST)?
> Surely a signer who, as a matter of policy, always chooses to use some
> other key type is not obliged to implement something he is never going
> to use?
For interoperability. You have to have a key type that *everyone*
implements. Note that in this standard, as in many others,
mandatory-to-implement does not mean mandatory-to-use. It's perfectly
fine for a user to decide that they're only ever going to use ECDSA,
and never RSA. But the software that they use has to implement RSA,
because other people are using it.
>> 5.3 Normalize the Message to Prevent Transport Conversions
> I found this section absolutely astounding.
> Message bodies written in Non-ASCII charsets have been commonplace now
> for 12 or more years, and they are most readily represented as 8-bit.
> 8BITMIME has been around for the same length of time and is now almost
> universally deployed. 8bit using 8BITMIME has become, or is well on
> the way to becoming, the preferred CTE for charsets which will not fit
> into 7 bits.
> And yet you are now seriously proposing, for a protocol that needs to
> be used in the great majority of future email messages if it is to
> serve its purpose, to return to encodings that can be squashed into
> 7 bits. That is one monumental step backwards for the IETF.
You're confusing making sure that 7-bit systems don't break with
encouraging them. We're not encouraging them. But we're being
realistic and want DKIM to work with existing systems.
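For the curious, the kind of normalization at stake can be sketched. The following is a loose illustration in the spirit of "simple" body canonicalization, combined with line-ending normalization; it is not the spec's exact algorithm, and the names are mine:

```python
# Illustrative sketch: normalize a body so that the details transports
# commonly rewrite (bare line endings, trailing blank lines) do not
# invalidate the signature.

def simple_body_canonicalization(body):
    """body: bytes. Returns a CRLF-terminated, normalized body."""
    # normalize bare LF and bare CR to CRLF
    lines = body.replace(b"\r\n", b"\n").replace(b"\r", b"\n").split(b"\n")
    # drop trailing empty lines
    while lines and lines[-1] == b"":
        lines.pop()
    # an empty body canonicalizes to a single CRLF
    return b"\r\n".join(lines) + b"\r\n"
```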
> Sorry to have gone on at such length. But some of these issues seem of
> importance, so I hope someone will take the trouble to reply.
It's all right. Sorry for being a bit abrupt, but you need to catch
up to what we're doing.
Go look at the web site at www.dkim.org, which will have a lot of
things in it to help you get up to speed. Look at the DKIM WG charter
at <http://www.ietf.org/html.charters/dkim-charter.html> which has
pointers to the WG documents.
You especially should look at the DKIM threats RFC, RFC 4686. Also
after that, look at the DKIM Service Overview. Those should help you
understand a lot of the mechanism that's in DKIM-base.