[governance] ICANN Board Vote Signals Era of Censorship in Domain Names
Karl Auerbach
karl at cavebear.com
Thu Apr 5 00:27:30 EDT 2007
Vittorio Bertola wrote:
> However, on resources that
> are not infinite (and sure, TLDs now are artificially scarce, but would
> not be infinite anyway)
To make that somewhat quantitative:
Peter Deutsch (formerly of Bunyip and Cisco) and I ran an experiment a few
years ago, on what today would be considered pretty wimpy computers.
We took the .com zone that we had (I can't remember how we got it, or when, but
it had a lot of names in it).
We elevated those names so that they were each a TLD.
We loaded it into BIND and measured performance. We had to add memory to the
machine, but once we had enough, it loaded and response times were amazingly good.
Then I wrote a program to generate synthetic root zone files so that I could
create root zones of any size and with a mix of character-string lengths (just
to make sure I didn't accidentally get an advantage from some hashing mechanism
somewhere).
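The original generator is long lost, but the idea is simple enough to sketch. The following is a hypothetical reconstruction, not the program we wrote: it emits a zone file of any requested size with labels of randomized length, so no hashing scheme in the server gets a free ride. The record layout and name server names here are illustrative placeholders.

```python
# Sketch of a synthetic root-zone generator: num_tlds delegations, with
# label lengths drawn uniformly from [min_len, max_len] to vary string sizes.
import random
import string

def make_zone(path, num_tlds, min_len=2, max_len=20, seed=42):
    rng = random.Random(seed)
    seen = set()
    with open(path, "w") as f:
        # Minimal apex records so the file loads as a zone (placeholder names).
        f.write("$TTL 86400\n")
        f.write("@ IN SOA a.root. hostmaster.root. (1 7200 3600 1209600 86400)\n")
        f.write("@ IN NS a.root.\n")
        f.write("a.root. IN A 192.0.2.1\n")
        while len(seen) < num_tlds:
            n = rng.randint(min_len, max_len)
            name = "".join(rng.choice(string.ascii_lowercase) for _ in range(n))
            if name in seen:
                continue  # skip duplicate labels
            seen.add(name)
            f.write(f"{name} IN NS ns1.{name}.example.\n")
    return len(seen)
```

Crank `num_tlds` up to tens of millions and you have a root zone to feed the server under test.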
And then I had another program that generated queries, both hits on names that
were in the zone file and misses on names that were not. I could adjust the
hit/miss ratio. We measured responsiveness and query loss. I don't think we ran
any really heavy traffic loads, because we assumed that UDP-based
DNS queries could be spread across multiple servers using standard
load-balancing front-ends.
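Again the original is gone, but the query-mix idea can be sketched. This hypothetical version just builds the stream of lookup names with a tunable hit/miss ratio; the transport (UDP to the server under test) and timing measurement are omitted.

```python
# Sketch of a query-mix generator: a fraction hit_ratio of queries name
# TLDs actually in the zone; the rest are guaranteed misses.
import random
import string

def query_stream(zone_names, count, hit_ratio, seed=0):
    rng = random.Random(seed)
    names = list(zone_names)
    in_zone = set(names)
    queries = []
    for _ in range(count):
        if rng.random() < hit_ratio:
            queries.append((rng.choice(names), True))
        else:
            # Keep drawing random labels until one falls outside the zone.
            while True:
                miss = "".join(rng.choice(string.ascii_lowercase)
                               for _ in range(12))
                if miss not in in_zone:
                    queries.append((miss, False))
                    break
    return queries
```

Replaying such a stream against the server and counting unanswered queries gives the responsiveness and query-loss numbers described above.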
We got into the millions and millions of TLDs, but never found an upper bound.
We did not follow our scientific training - we didn't keep good lab notes and
our observations were more subjective than objective - so to do it right we'd
need to locate and resurrect the pieces and do it again.
So take the result with a grain of salt, but it does appear that we can readily
have tens upon tens of millions of TLDs without the server software melting.
The limit on TLDs is probably based more on the chance of administrative error
than on a hard technical limit. But .com teaches us that it is possible
to administer a rather large zone (roughly 60,000,000 names) without a lot of
administrative errors.
If we assume a rather low number, well within the reach of the technology,
the software, and the administrative processes: 1,000,000 TLDs.
If we aimed at that number and allocated 10,000 per year, we would not hit
that highly conservative limit until the year 2107, a century from now.
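The arithmetic behind that century figure, spelled out:

```python
# Back-of-the-envelope check of the allocation timeline above.
budget = 1_000_000   # conservative cap on total TLDs
per_year = 10_000    # allocations per year
start = 2007         # year of this message
years = budget // per_year
print(years, start + years)  # 100 years, exhausted in 2107
```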
So I think it is useful to drop the conceit that we need to conserve TLDs, at
least until the numbers reach four orders of magnitude beyond the 200 to 300
TLDs that exist today.
However, since even a billion TLDs would be less than the number of people who
might want one, we do need some allocation mechanism. And that brings us back
to the auction/lottery discussions we had a while back, and also to some very
good academic analysis of auction and lottery methods of allocating TLDs.
--karl--
____________________________________________________________
You received this message as a subscriber on the list:
governance at lists.cpsr.org
To be removed from the list, send any message to:
governance-unsubscribe at lists.cpsr.org
For all list information and functions, see:
http://lists.cpsr.org/lists/info/governance