Are .mail, .home and .corp safe to launch? Applicants think so

Kevin Murphy, August 28, 2016, Domain Tech

ICANN should lift the freeze on new gTLDs .mail, .home and .corp, despite fears they could cause widespread disruption, according to applicants.
Fifteen applicants for the strings wrote to ICANN last week to ask for a risk mitigation plan that would allow them to be delegated.
The three would-be gTLDs were put on hold indefinitely almost three years ago, after studies determined that they were at risk of causing far more “name collision” problems than other strings.
If they were to start resolving on the internet, the fear is they would lead to problems ranging from data leakage to systems simply stopping working properly.
Name collisions are something all new TLDs run the risk of creating, but .home, .corp and .mail are believed to be particularly risky due to the sheer number of private networks that use them as internal namespaces.
My own ISP, which has millions of subscribers, uses .home on its home hub devices, for example. Many companies use .corp and .mail on their LANs, due to longstanding advice from Microsoft and the IETF that it was safe to do so.
A 2013 study (pdf) showed that .home received almost 880 million DNS queries over a 48-hour period, while .corp received over 110 million.
That was vastly more than other non-existent TLDs.
For example, .prod (which some organizations use internally to mean “production”) got just 5.3 million queries over the same period, and when Google got .prod delegated two years ago it prompted an angry backlash from inconvenienced admins.
While .mail wasn’t quite on the same scale as the other two, third-party studies determined that it posed similar risks to .home and .corp.
When it imposed the freeze, ICANN said it would ask the IETF to consider making the three strings officially reserved.
Now the applicants, noting the lack of IETF movement to formally reserve the strings, want ICANN to work on a thawing plan.
“Rather than continued inaction, ICANN owes applicants for .HOME, .CORP, and .MAIL and the public a plan to mitigate any risks and a proper pathway forward for these TLDs,” the applicants told ICANN (pdf) last Wednesday.
A December 2015 study found that name collisions have occurred in new gTLDs, but that no truly serious problems have been caused.
That does not mean .home, .corp and .mail would be safe to delegate, however.

It’s official: new gTLDs didn’t kill anyone

Kevin Murphy, December 2, 2015, Domain Tech

The introduction of new gTLDs posed no risk to human life.
That’s the conclusion of JAS Advisors, the consulting company that has been working with ICANN on the issue of DNS name collisions.
In its final report, “Mitigating the Risk of DNS Namespace Collisions”, published last night, JAS described the response to the “controlled interruption” mechanism it designed as “annoyed but understanding and generally positive”.
New text added since the July first draft says: “ICANN has received fewer than 30 reports of disruptive collisions since the first delegation in October of 2013. None of these reports have reached the threshold of presenting a danger to human life.”
That’s a reference to Verisign’s June 2013 claim that name collisions could disrupt “life-supporting” systems such as those used by emergency response services.
Name collisions, you will recall, are scenarios in which a newly delegated TLD matches a string that is already widely used on internal networks.
Such scenarios can lead (and have led) to problems such as system failures and DNS queries leaking onto the internet.
The applied-for gTLDs .corp and .home have been effectively banned, due to the vast numbers of organizations already using them.
All other gTLDs were obliged, following JAS recommendations, to redirect all non-existent domains to 127.0.53.53, an IP address chosen to put network administrators in mind of port 53, which is used by the DNS protocol.
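For an administrator wondering whether an internal name has started colliding with a newly delegated gTLD, the flag address is straightforward to test for. Below is a minimal sketch using Python's standard library; the hostname shown is a hypothetical example, not a name from the report:

    import socket

    CONTROLLED_INTERRUPTION_IP = "127.0.53.53"

    def hits_controlled_interruption(hostname):
        """Return True if the name currently resolves to the controlled-interruption flag IP."""
        try:
            addresses = {info[4][0] for info in socket.getaddrinfo(hostname, None, socket.AF_INET)}
        except socket.gaierror:
            return False  # the name does not resolve at all
        return CONTROLLED_INTERRUPTION_IP in addresses

    # Hypothetical internal name that may now collide with a delegated gTLD
    print(hits_controlled_interruption("intranet.corp"))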
As we reported a little over a year ago, many administrators responded swearily to some of the first collisions.
JAS says in its final report:

Over the past year, JAS has monitored technical support/discussion fora in search of posts related to controlled interruption and DNS namespace collisions. As expected, controlled interruption caused some instances of limited operational issues as collision circumstances were encountered with new gTLD delegations. While some system administrators expressed frustration at the difficulties, overall it appears that controlled interruption in many cases is having the hoped-for outcome. Additionally, in private communication with a number of firms impacted by controlled interruption, JAS would characterize the overall response as “annoyed but understanding and generally positive” – some even expressed appreciation as issues unknown to them were brought to their attention.

There are a number of other substantial additions to the report, largely focusing on the types of use cases JAS believes are responsible for most name collision traffic.
Often, as with the random 10-character domains Google’s Chrome browser uses for configuration purposes, the collision has no ill effect. In other cases, local system administrators were forced to fix their software to avoid the collision.
The report also reveals that the domain name corp.com, which is owned by long-time ICANN volunteer Mikey O’Connor, receives a “staggering” 30 DNS queries every second.
That works out to almost a billion (946,728,000) queries per year, each arriving when a misconfigured system or an inexperienced user attempts to visit a .corp domain name.

Controlled interruption as a means to prevent name collisions [Guest Post]

Jeff Schmidt, January 8, 2014, Domain Tech

This is a guest post written by Jeff Schmidt, CEO of JAS Global Advisors LLC. JAS is currently authoring a “Name Collision Occurrence Management Framework” for the new gTLD program under contract with ICANN.
One of JAS’ commitments during this process was to “float” ideas and solicit feedback. This set of thoughts poses an alternative to the “trial delegation” proposals in SAC062. The idea springs from past DNS-related experiences and has an effect we have named “controlled interruption.”
Learning from the Expired Registration Recovery Policy
Many are familiar with the infamous Microsoft Hotmail domain expiration in 1999. In short, a Microsoft registration for passport.com (Microsoft’s then-unified identity service) expired Christmas Eve 1999, denying millions of users access to the Hotmail email service (and several other Microsoft services) for roughly 20 hours.
Fortunately, a well-intended technology consultant recognized the problem and renewed the registration on Microsoft’s behalf, yielding a nice “thank you” from Microsoft and Network Solutions. Had a bad actor realized the situation, the outcome could have been far different.
The Microsoft Hotmail case and others like it led to the current Expired Registration Recovery Policy.
More recently, Regions Bank made news when its domains expired, and countless others go unreported. In the case of Regions Bank, the Expired Registration Recovery Policy seemed to work exactly as intended – the interruption inspired immediate action and the problem was solved, resulting in only a bit of embarrassment.
Importantly, there was no opportunity for malicious activity.
For the most part, the Expired Registration Recovery Policy is effective at preventing unintended expirations. Why? We call it the application of “controlled interruption.”
The Expired Registration Recovery Policy calls for extensive notification before the expiration, then a period when “the existing DNS resolution path specified by the Registrant at Expiration (“RAE”) must be interrupted” – as a last-ditch effort to inspire the registrant to take action.
Nothing inspires urgent action more effectively than service interruption.
But critically, in the case of the Expired Registration Recovery Policy, the interruption is immediately corrected if the registrant takes the required action — renewing the registration.
It’s nothing more than another notification attempt – just a more aggressive round after all of the passive notifications failed. In the case of a registration in active use, the interruption will be recognized immediately, inspiring urgent action. Problem solved.
What does this have to do with collisions?
A Trial Delegation Implementing Controlled Interruption
There has been a lot of talk about various “trial delegations” as a technical mechanism to gather additional data regarding collisions and/or attempt to notify offending parties and provide self-help information. SAC062 touched on the technical models for trial delegations and the related issues.
Ideally, the approach should achieve these objectives:

  • Notifies systems administrators of possible improper use of the global DNS;
  • Protects these systems from malicious actors during a “cure period”;
  • Doesn’t direct potentially sensitive traffic to Registries, Registrars, or other third parties;
  • Inspires urgent remediation action; and
  • Is easy to implement and deterministic for all parties.

Like unintended expirations, collisions are largely a notification problem. The offending system administrator must be notified and take action to preserve the security and stability of their system.
One approach to consider as an alternative trial delegation concept would be an application of controlled interruption to help solve this notification problem. The approach draws on the effectiveness of the Expired Registration Recovery Policy, with the implementation looking like a modified “Application and Service Testing and Notification (Type II)” trial delegation as proposed in SAC062.
But instead of responding with pointers to application layer listeners, the authoritative nameserver would respond with an address inside 127/8 — the range reserved for localhost. This approach could be applied to A queries directly and MX queries via an intermediary A record (the vast majority of collision behavior observed in DITL data stems from A and MX queries).
Responding with an address inside 127/8 will likely break any application depending on a NXDOMAIN or some other response, but importantly also prevents traffic from leaving the requestor’s network and blocks a malicious actor’s ability to intercede.
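To make the mechanics concrete, here is a rough sketch of how an authoritative server could answer in this mode. It is illustrative only, not part of the framework: it assumes the third-party dnslib package, and the intermediary MX target label (“collision-notice”) is a made-up placeholder whose own A record would also carry the flag address:

    from dnslib import A, MX, QTYPE, RR
    from dnslib.server import BaseResolver, DNSServer

    FLAG_IP = "127.0.53.53"  # the localhost-range "flag" address suggested below

    class ControlledInterruptionResolver(BaseResolver):
        def resolve(self, request, handler):
            reply = request.reply()
            qname = request.q.qname
            qtype = QTYPE[request.q.qtype]
            if qtype == "A":
                # Answer A queries directly with the flag address.
                reply.add_answer(RR(qname, QTYPE.A, rdata=A(FLAG_IP), ttl=3600))
            elif qtype == "MX":
                # Answer MX queries via an intermediary name (hypothetical label).
                reply.add_answer(RR(qname, QTYPE.MX, rdata=MX("collision-notice." + str(qname), 10), ttl=3600))
            return reply

    if __name__ == "__main__":
        # Test server on an unprivileged port
        DNSServer(ControlledInterruptionResolver(), address="127.0.0.1", port=5353).start()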
In the same way as the Expired Registration Recovery Policy calls for “the existing DNS resolution path specified by the RAE [to] be interrupted”, responding with localhost will hopefully inspire immediate action by the offending party while not exposing them to new malicious activity.
If legacy/unintended use of a DNS name is present, one could think of controlled interruption as a “buffer” prior to use by a legitimate new registrant. This is similar to the CA Revocation Period as proposed in the New gTLD Collision Occurrence Management Plan which “buffers” the legacy use of certificates in internal namespaces from new use in the global DNS. Like the CA Revocation Period approach, a set period of controlled interruption is deterministic for all parties.
Moreover, instead of using the typical 127.0.0.1 address for localhost, we could use a “flag” IP like 127.0.53.53.
Why? While troubleshooting the problem, the administrator will likely at some point notice the strange IP address and search the Internet for assistance. Making it known that new TLDs may behave in this fashion and publicizing the “flag” IP (along with self-help materials) may help administrators isolate the problem more quickly than just using the common 127.0.0.1.
We could also suggest that systems administrators proactively search their logs for this flag IP as a possible indicator of problems.
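As a rough illustration of that kind of proactive search, the sketch below scans a text log for the flag IP; the log path and format are hypothetical, and any log that records resolved addresses would do:

    import re

    FLAG_IP = "127.0.53.53"

    def find_flag_hits(log_path):
        """Yield (line_number, line) for every log line mentioning the flag IP."""
        pattern = re.compile(re.escape(FLAG_IP))
        with open(log_path, errors="replace") as log:
            for number, line in enumerate(log, start=1):
                if pattern.search(line):
                    yield number, line.rstrip()

    # Hypothetical log file path
    for number, line in find_flag_hits("/var/log/dns-resolver.log"):
        print(f"possible name collision at line {number}: {line}")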
Why the repeated 53? Preserving the 127.0/16 prefix seems prudent to make sure the IP is treated as localhost by a wide range of systems; the repeated 53 will hopefully draw attention to the IP and provide another hint that the issue is DNS-related.
Two controlled interruption periods could even be used — one phase returning 127.0.53.53 for some period of time, and a second slightly more aggressive phase returning 127.0.0.1. Such an approach may cover more failure modes of a wide variety of requestors while still providing helpful hints for troubleshooting.
A period of controlled interruption could be implemented before individual registrations are activated, or for an entire TLD zone using a wildcard. In the case of the latter, this could occur simultaneously with the CA Revocation Period as described in the New gTLD Collision Occurrence Management Plan.
The ability to “schedule” the controlled interruption would further mitigate possible effects.
One concern in dealing with collisions is the reality that a potentially harmful collision may not be identified until months or years after a TLD goes live — when a particular second level string is registered.
A key advantage to applying controlled interruption to all second level strings in a given TLD in advance and at once via wildcard is that most failure modes will be identified during a scheduled time and before a registration takes place.
This has many positive features, including easier troubleshooting and the ability to execute a far less intrusive rollback if a problem does occur. From a practical perspective, avoiding a complex string-by-string approach is also valuable.
If there were to be a catastrophic impact, a rollback could be implemented relatively quickly, easily, and with low risk while the impacted parties worked on a long-term solution. A new registrant and associated new dependencies would likely not be adding complexity at this point.
Request for Feedback
As stated above, one of JAS’ commitments during this process was to “float” ideas and solicit feedback early in the process. Please consider these questions:

  • What unintended consequences may surface if localhost IPs are served in this fashion?
  • Will serving localhost IPs cause the kind of visibility required to inspire action?
  • What are the pros and cons of a “TLD-at-once” wildcard approach running simultaneously with the CA Revocation Period?
  • Is there a better IP (or set of IPs) to use?
  • Should the controlled interruption plan described here be included as part of the mitigation plan? Why or why not?
  • To what extent would this methodology effectively address the perceived problem?
  • Other feedback?

We anxiously await your feedback — in comments to this blog, on the DNS-OARC Collisions list, or directly. Thank you and Happy New Year!

Demystifying DITL Data [Guest Post]

Kevin White, November 16, 2013, Domain Tech

With all the talk recently about DNS Namespace Collisions, the heretofore relatively obscure Day In The Life (“DITL”) datasets maintained by the DNS-OARC have been getting a lot of attention.
While these datasets are well known to researchers, I’d like to take the opportunity to provide some background and talk a little about how these datasets are being used to research the DNS Namespace Collision issue.
The Domain Name System Operations Analysis and Research Center (“DNS-OARC”) began working with the root server operators to collect data in 2006. The effort was dubbed “Day In The Life of the Internet” (DITL).
Root server participation in the DITL collection is voluntary and the number of contributing operators has steadily increased; in 2010, all of the 13 root server letters participated. DITL data collection occurs on an annual basis and covers approximately 50 contiguous hours.
DNS-OARC’s DITL datasets are attractive for researching the DNS Namespace Collision issue because:

  • DITL contains data from multiple root operators;
  • The robust annual sampling methodology (with samples dating back to 2006) allows trending; and
  • It’s available to all DNS-OARC Members.

More information on the DITL collection is available on DNS-OARC’s site at https://www.dns-oarc.net/oarc/data/ditl.
Terabytes and terabytes of data
The data consists of the raw network “packets” destined for each root server. Contained within the network packets are the DNS queries. The raw data consists of many terabytes of compressed network capture files and processing the raw data is very time-consuming and resource-intensive.
While several researchers have looked at DITL datasets over the years, the current collisions-oriented research started with Roy Hooper of Demand Media. Roy created a process to iterate through this data and convert it into intermediate forms that are much more usable for researching the proposed new TLDs.
We started with his process and continued working with it; our code is available on GitHub for others to review.
Finding needles in DITL haystacks
The first problem faced by researchers interested in new TLDs is isolating the relatively few queries of interest among many terabytes of traffic that are not of interest.
Each root operator contributes several hundred – or several thousand – files full of captured packets in time-sequential order. These packets contain every DNS query reaching the root that requests information about DNS names falling within delegated and undelegated TLDs.
The first step is to search these packets for DNS queries involving the TLDs of interest. The result is one file per TLD containing all queries from all roots involving that TLD. If the input packet is considered a “horizontal” slice of root DNS traffic, then this intermediary work product is a “vertical” slice per TLD.
These intermediary files are much more manageable, ranging from just a few records to 3 GB. To support additional investigation and debugging, the intermediary files that JAS produces are fully “traceable” such that a record in the intermediary file can be traced back to the source raw network packet.
The DITL data contain quite a bit of noise, primarily DNS traffic that was not actually destined for the root. Our process filters the data by destination IP address so that the only remaining data is that which was originally destined for the root name servers.
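As a much-simplified sketch of that first pass (not the actual pipeline, which is available on GitHub), the snippet below uses the scapy library to keep only root-bound DNS queries and bucket them by queried TLD; the root-server addresses and TLD list shown are illustrative placeholders:

    from collections import defaultdict
    from scapy.all import DNSQR, IP, rdpcap

    # Illustrative inputs: a real run uses the full root-server address list
    # and the applied-for gTLD strings of interest.
    ROOT_SERVER_IPS = {"198.41.0.4", "192.228.79.201"}
    TLDS_OF_INTEREST = {"home", "corp", "mail"}

    def slice_by_tld(pcap_path):
        """Group root-bound DNS queries from one capture file by queried TLD."""
        per_tld = defaultdict(list)
        for packet in rdpcap(pcap_path):
            # Keep only packets carrying a DNS question and actually destined for a root server.
            if not (packet.haslayer(IP) and packet.haslayer(DNSQR)):
                continue
            if packet[IP].dst not in ROOT_SERVER_IPS:
                continue
            qname = packet[DNSQR].qname.decode(errors="replace").rstrip(".")
            tld = qname.rsplit(".", 1)[-1].lower()
            if tld in TLDS_OF_INTEREST:
                per_tld[tld].append(qname)
        return per_tld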
JAS has made these intermediary per-TLD files available to DNS-OARC members for further analysis.
Then what?
The intermediary files are comparatively small and easy to parse, opening the door to more elaborate research. For example, JAS has written various “second passes” that classify queries, separate queries that use valid syntax at the second level from those that don’t, detect “randomness,” fit regular expressions to the queries, and more.
We have also checked to confirm that second level queries that look like Punycode IDNs (start with ‘xn--‘) are valid Punycode. It is interesting to note the tremendous volume of erroneous, technically invalid, and/or nonsensical DNS queries that make it to the root.
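A quick way to sanity-check such labels is to attempt a decode and catch failures. The sketch below leans on Python's built-in IDNA codec, which is stricter than a bare Punycode decode, so treat it as an illustration rather than the exact validation performed in the research:

    def is_valid_ace_label(label):
        """Rough check that an 'xn--' label decodes as Punycode/IDNA."""
        if not label.lower().startswith("xn--"):
            return False
        try:
            label.encode("ascii").decode("idna")  # built-in IDNA codec
            return True
        except UnicodeError:
            return False

    print(is_valid_ace_label("xn--bcher-kva"))  # True: decodes to "bücher"
    print(is_valid_ace_label("example"))        # False: not an ACE label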
Also of interest is that the datasets are dominated by query strings that appear random and/or machine-generated.
Google’s Chrome browser generates three random 10-character queries upon startup in an effort to detect network properties. Those “Chrome 10” queries together with a relatively small number of other common patterns comprise a significant proportion of the entire dataset.
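As a loose illustration of that kind of classification, the sketch below flags second-level labels that look like Chrome's startup probes. The exactly-10-lower-case-letter pattern is an assumption made here for illustration, not the precise heuristic used in the research:

    import re

    # Treat exactly-10-character alphabetic first labels as "Chrome-like".
    CHROME_LIKE = re.compile(r"^[a-z]{10}$")

    def looks_like_chrome_probe(qname):
        """Heuristically flag query names whose first label resembles a Chrome probe."""
        first_label = qname.rstrip(".").split(".")[0].lower()
        return bool(CHROME_LIKE.match(first_label))

    print(looks_like_chrome_probe("kwbxzqtrna.home"))  # True
    print(looks_like_chrome_probe("mail.corp"))        # False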
Research is being done in order to better understand the source of these machine-generated queries.
More technical details and information on running the process is available on the DNS-OARC web site.

This is a guest post written by Kevin White, VP Technology, JAS Global Advisors LLC. JAS is currently authoring a “Name Collision Occurrence Management Framework” for the new gTLD program under contract with ICANN.