Task Backlog

This page is to capture tasks that have been identified but are not the subject of a current sprint. The tasks are categorised by broad component/user story. Note that the tasks here are the basic tasks; the associated review and documentation tasks are not included. Where a complexity estimate has been made, it is included.


Common Tasks

This section covers tasks that are common to both the A- and R-teams.

Requirements List
The authoritative and recursive servers are subject to a large number of RFCs, many of which have explicit RFC 2119 (SHOULD, MUST etc.) requirements, and others of which indicate best current practice. In addition, a number of features have been added to BIND over the years (either as enhancements, as a result of bug reports or at customer request) that should be incorporated into BIND 10. (A partial list of BIND-9 features can be found here.) The result is that at some point we will need to do a completeness check and feed the results into the system test.

  • Working through DNS RFCs and identifying MUST/SHOULD etc requirements
  • Working through BIND-9 configuration options and identifying specific features
  • Working through BIND-9 bug/enhancement list and identifying specific features

Refactoring: put in library for DNS services
Extracting common code from the authoritative and recursive servers into a common library.

Logging Framework
Requirements and design and implementation are already in the current R-Team sprint. Other tasks are:

  • Review existing code and add logging calls (Python/C++)

Offline configuration and/or standard initial profiles
At present the only way to modify the configuration is to start BIND-10 and run bindctl. As delivered, the configuration starts the authoritative server; so in order to run a recursive server a user must start the authoritative server, alter the configuration parameters and restart BIND-10. The task of stopping/starting components has been added here.

  • Definition (and implementation) of API
  • Support for reading spec-files outside of the configuration manager
    • this may include having them in a known location (done for installed, but they are spread throughout the source tree)
    • checking if system is running, connect to that if so, otherwise load the config db and write it
    • if we do this with bindctl we need to think what form this would take :)
  • Allow starting/stopping the authoritative server/recursor through bindctl
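The "check if system is running, connect to that if so, otherwise load the config db and write it" step could be sketched as follows. The socket and database paths, the JSON storage format and the function name are all illustrative assumptions, not the configuration manager's actual API:

```python
import json
import os


def apply_config_change(change, msgq_socket, config_db):
    """Offline-capable configuration update (sketch).

    If the command-channel socket exists we assume a system is running
    and would hand the change to it; otherwise we read-modify-write the
    stored configuration database directly.  JSON is used for the
    database purely for illustration.
    """
    if os.path.exists(msgq_socket):
        # Running system: the change would be sent over the command
        # channel (not implemented in this sketch).
        return "online"
    # Offline case: load the existing database (if any), merge, write back.
    config = {}
    if os.path.exists(config_db):
        with open(config_db) as f:
            config = json.load(f)
    config.update(change)
    with open(config_db, "w") as f:
        json.dump(config, f)
    return "offline"
```

Doing the merge through the same code path in both cases is what keeps bindctl and any offline tool from diverging.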

DNSSEC Key Interface
Many top-level domains use HSMs to store their keys and to sign/check the signatures. Existing BIND code uses key files. Ideally we want a single abstract key store to which a variety of storage mechanisms can be connected. PKCS#11 is the default for HSMs (and that would also allow use of SoftHSM); however some sites may want to continue with the key file idea, in which case a PKCS#11-style interface to those files would simplify the code that uses keys.

  • HSM interface
    • Key generation
    • Key addition/removal/listing
  • File interface
    • Key generation
    • Key addition/removal/listing
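One possible shape for the single abstract key store, sketched below: callers see only key identifiers and the four operations from the task list, and the HSM or file backend is hidden behind the interface (class and method names here are assumptions, not an agreed design):

```python
import os
from abc import ABC, abstractmethod


class KeyStore(ABC):
    """Abstract key store: code that uses keys sees only this interface,
    whether the backend is a PKCS#11 HSM or plain key files."""

    @abstractmethod
    def generate(self, name, algorithm):
        """Create a new key; return its identifier."""

    @abstractmethod
    def add(self, name, key_material):
        """Import existing key material under the given identifier."""

    @abstractmethod
    def remove(self, name):
        """Delete the key with the given identifier."""

    @abstractmethod
    def list_keys(self):
        """Return all key identifiers in the store."""


class FileKeyStore(KeyStore):
    """Key-file backend presenting the same PKCS#11-style interface.
    An in-memory dict stands in for on-disk key files in this sketch."""

    def __init__(self):
        self._keys = {}

    def generate(self, name, algorithm):
        self._keys[name] = (algorithm, os.urandom(16))
        return name

    def add(self, name, key_material):
        self._keys[name] = key_material

    def remove(self, name):
        del self._keys[name]

    def list_keys(self):
        return sorted(self._keys)
```

An HSM backend would implement the same four methods on top of a PKCS#11 session, which is what makes HSM replacement (below) a backend concern rather than a caller concern.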

Note: The issue goes deeper than a single HSM as the issue of HSM replacement - and the movement of keys between HSMs - must be considered. (For reference, OpenDNSSEC allows simultaneous access to multiple HSMs, the idea being that when an HSM is retired, if keys can't be transferred to another HSM a key roll takes place, the new key coming from the new HSM.)

Notes (from Jelte): OpenDNSSEC has quite an elegant interface for this; the only things you pass around are key identifiers, and it works out for itself which HSM the key resides in. I suggest we take a similar approach (i.e. not only use PKCS#11 as the general backend interface, but also abstract away the HSMs in use and their configuration).

Common library layer for the cryptographic library.

  • Design
  • Implementation

Notes: We need this for TSIG, in-CPU signing and DNSSEC validation.

ASIO Performance
The ASIO code introduced in #327 has caused a slow-down in the performance of the authoritative server. This might be down to frequent memory allocations.

  • Investigate cause of poor performance
  • Corrective action

Write a master file parser using new Python bindings

  • Put parser in DNS library

Authoritative Server

Performance Improvement

Build with some kind of in-memory data source

  • what we need here is a very simplified version of rbtdb.c, meaning: no NSEC3 consideration (at the moment), no versioning, no cache
  • provide simple add/remove/search operations (but search is not trivial: we need to consider delegation, DNAME, etc.)
  • how RRSIGs are maintained may be different
  • we'll need a "DB iterator" feature for xfrout, but it should probably be deferred to subsequent sprints
  • as with rbt, my experimental work on the experiments/jinmei-onmemdb branch may help

Estimate: 14
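The add/remove/search operations above could be sketched as follows. This is a toy model under loud assumptions: names are plain strings, each node is a dict of RRsets, and search handles only the delegation case the task mentions (no DNAME, NSEC3, versioning or cache):

```python
class MemoryZone:
    """Very simplified in-memory zone: name -> {rrtype: rdata}, with a
    search that stops at a zone cut (an NS RRset below the apex)."""

    def __init__(self, origin):
        self.origin = origin
        self.nodes = {}

    def add(self, name, rrtype, rdata):
        self.nodes.setdefault(name, {})[rrtype] = rdata

    def remove(self, name, rrtype):
        self.nodes.get(name, {}).pop(rrtype, None)

    def search(self, name, rrtype):
        labels = name.split(".")
        apex_depth = len(self.origin.split("."))
        # Walk each name from just below the apex down to the query name,
        # looking for a delegation point (non-apex node with an NS RRset).
        for depth in range(apex_depth + 1, len(labels) + 1):
            node_name = ".".join(labels[-depth:])
            node = self.nodes.get(node_name)
            if node and "NS" in node and node_name != self.origin:
                return ("DELEGATION", node_name, node["NS"])
        node = self.nodes.get(name, {})
        if rrtype in node:
            return ("SUCCESS", name, node[rrtype])
        return ("NXRRSET" if node else "NXDOMAIN", name, None)
```

Even in this toy form, the delegation walk is what separates the search from a plain dictionary lookup, which is why the task flags it as non-trivial.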

Zone Transfers

How to handle zone transfer with in-memory data source: design
How to notify the authoritative server (from xfrin) of the need for a reload, and how to reload the new version of the zone without disrupting service (e.g. spawn a separate thread (which may not work well, depending on thread scheduling), use a multi-process approach like NSD, or use internal events and incremental loading).
Estimate: 5

How to handle zone transfer with in-memory data source: implementation
This should take place after the design has been completed.
Estimate: 8



IXFR Processing

High-Level Design

  • Whether to use a separate program or have b10-auth handle ixfr-out
  • How to notify b10-auth of the change and have it serve the new data



Notion of a journal file (differentiation system)

  • provide an abstract-level interface, independent of the underlying data source implementation (whether it uses a real "journal file", as for the in-memory data source, or otherwise), to handle DNS "diffs": to write transactions for IXFR-in/dynamic updates and to iterate over the journal for a specific set of changes for IXFR-out
  • we should probably begin with Jelte's work on the writable data source API for this purpose

Journal file for in-memory implementation (port from BIND 9)
See bind9/lib/dns/journal.c:

  • define file format
  • provide interfaces to basic operations: open/close/read/write/seek
  • provide interfaces to DNS operations:
    • to write transaction for IXFR-in/dynamic updates
    • to iterate over the journal file for a specific set of changes for IXFR-out
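The two DNS-level operations above can be sketched against an in-memory stand-in for the journal file (the class and method names are illustrative, not the interface BIND 9's journal.c defines):

```python
class Journal:
    """Minimal journal sketch: an append-only list of
    (serial_from, serial_to, deletions, additions) transactions."""

    def __init__(self):
        self._txns = []

    def write_transaction(self, serial_from, serial_to, deletions, additions):
        # IXFR-in / dynamic update appends one transaction per change set.
        self._txns.append((serial_from, serial_to, deletions, additions))

    def iterate_from(self, client_serial):
        """Yield the consecutive diffs taking client_serial up to the
        newest serial (the IXFR-out case); raise KeyError if the
        requested serial has already fallen out of the journal."""
        started = False
        for txn in self._txns:
            if txn[0] == client_serial:
                started = True
            if started:
                yield txn
        if not started:
            raise KeyError("serial %d not in journal" % client_serial)
```

The KeyError branch corresponds to the point where a real IXFR-out implementation would fall back to AXFR.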

Component to handle IXFR-out (renamed from "component to manage zone information")
This should be quite straightforward once we complete the journal interface:

  • it parses an incoming IXFR request, extracts the corresponding diff from the journal file, makes a response based on that, and returns it to the requester.

Component to handle IXFR-in

  • update to xfrin so that it can support IXFR
  • add ability to update the journal file based on the IXFR response. it should be quite straightforward once we complete the journal interface.
  • add ability to notify b10-auth of the update

If we use the in-memory data source, we need to make it writable

  • more complete port of BIND 9's rbtdb
  • with on the fly adding/deleting entries
  • possibly with versioning

Query Processing

Wildcard processing
Task description TBD. This will be a non-trivial task and may have to be divided into several subtasks.

Type ANY query processing
To implement this, we need a way to iterate over all RRsets in a zone node, which will not be a trivial task. This task may have to be divided into several sub tasks (e.g., a kind of "iterator" for the memory zone class and the query handling logic).
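One possible shape for that iterator, sketched below: the node keeps its RRsets keyed by type, and ANY processing simply walks all of them. The names are illustrative, not the memory zone class's actual API:

```python
class ZoneNode:
    """Toy zone node holding its RRsets keyed by type."""

    def __init__(self):
        self._rrsets = {}

    def add_rrset(self, rrtype, rdatas):
        self._rrsets[rrtype] = rdatas

    def iter_rrsets(self):
        # The "iterator" the type ANY task calls for: every RRset at
        # this node, in a stable (sorted-by-type) order.
        return iter(sorted(self._rrsets.items()))


def answer_any(node):
    # An ANY answer is simply every RRset present at the queried node.
    return list(node.iter_rrsets())
```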

Zone Loading and Dumping

Zone loader

  • parse a zone file and load the content into the in-memory data source.
  • see bind9/lib/dns/master.c, but we should begin with a much-simplified version, assuming a simple line-by-line format: no $TTL etc., no "raw" format, no NS/MX checks, and so on.

Estimate: 8
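Under the simplifications the task names (one record per line, no $TTL/$ORIGIN, no multi-line records), a first-cut parser is short; the "name ttl class type rdata" field order and the function name are assumptions for illustration:

```python
def load_zone_lines(lines):
    """Parse a deliberately simple line-by-line master-file format:
    'name ttl class type rdata...' on every line, ';' comments and
    blank lines ignored -- matching the simplifications above."""
    records = []
    for lineno, line in enumerate(lines, 1):
        line = line.split(";", 1)[0].strip()   # strip comments
        if not line:
            continue
        fields = line.split()
        if len(fields) < 5:
            raise ValueError("line %d: too few fields" % lineno)
        name, ttl, rrclass, rrtype = fields[:4]
        rdata = " ".join(fields[4:])           # rdata may contain spaces
        records.append((name, int(ttl), rrclass, rrtype, rdata))
    return records
```

Each returned tuple would be fed to the in-memory data source's add operation; $-directives and multi-line records can be layered on later without changing that contract.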

Zone dumper in XFRIN

  • it dumps the content of incoming AXFR to the standard master file format
  • see also bind9/lib/dns/masterdump.c, but for the very first version, we may simply dump the incoming data to a file, just like the current xfrin implementation.

Estimate: 3


TSIG

Basic design for protocol operation
Both sign and verify.

Quick hack back end
Assume either crypto++ or OpenSSL.

Implementation of TSIG Protocol
Essentially, we need only two "functions": sign and verify, based on RFC 2845. We can also refer to the corresponding BIND 9 implementation: dns_tsig_sign() and dns_tsig_verify() (defined in bind9/lib/dns/tsig.c). These two could be divided if it is necessary or convenient.
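As a sketch of that sign/verify pair (not BIND 10's actual API): RFC 2845's mandatory algorithm is HMAC-MD5, so the core of both functions is an HMAC over the DNS message plus the TSIG variables (name, class, TTL, algorithm, time signed, fudge, ...), collapsed here into a single data argument:

```python
import hashlib
import hmac


def tsig_sign(data: bytes, secret: bytes) -> bytes:
    """Compute the TSIG MAC; HMAC-MD5 is RFC 2845's mandatory algorithm.
    `data` stands in for the canonical message + TSIG variable bytes."""
    return hmac.new(secret, data, hashlib.md5).digest()


def tsig_verify(data: bytes, secret: bytes, mac: bytes) -> bool:
    """Recompute the MAC and compare in constant time."""
    return hmac.compare_digest(tsig_sign(data, secret), mac)
```

The real implementation additionally has to handle the request MAC being prepended for responses, and truncation/time-check error codes (BADSIG, BADTIME).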

User Interface
For the initial sprint, we should probably begin with xfrin and xfrout specifically:

  • add an interface to specify a key for a specific primary server (in the case of xfrin)
  • add an interface to specify a key for a specific secondary server (in the case of xfrout)

Update to xfrin
Use the TSIG API to sign/verify the transactions, using the result of the TSIG implementation/user interface tasks.

Update to xfrout
Use the TSIG API to sign/verify the transactions, using the result of the TSIG implementation/user interface tasks.


Zone signing
Signing a zone file

  • NSEC
  • NSEC3
  • Continuous signing

Handling DO=1 queries

  • Returning signature information
  • Returning NSEC/NSEC3 information

Signing mechanisms

  • Accessing key for signing in CPU
    • Performing the signing operation
  • Signing in the HSM

Notes: These are the basic key-related tasks for the authoritative server; given the key, we should be able to sign the zone using the host computer. However, if the HSM offers signing, we should be prepared to pass the operation to the HSM. As an aside, some HSMs can only realise their full potential when driven from multiple threads.

Key management

  • Single key for multiple zones vs. multiple keys, one per zone
  • Key rollover
  • DS interaction with the parent

Notes: Assuming that the server is going to serve multiple zones, users may want to have one key per zone or have one key serve multiple zones. In addition, there will be a need for managing key rollovers - pre-introducing new keys and retaining old keys in the zone long enough to allow RRSIGs in caches to expire.

Recursive Server

Priming query

  • Priming query logic

Notes: the first query a resolver makes is a priming query. There is an (expired) internet-draft on the subject.

Cache

Requirements, API design and the writing of tests for the API design are already in the current sprint. Other tasks are:

  • Internals design
    • Handling TTL expiration/cache cleaning
    • Caching negative responses
    • When to cache glue vs. when to cache authoritative data
    • Cache persistence - dumping and loading
      • Inspection tools
    • Pre-load cache?
      • Testing (can set up cache in particular configuration)
      • Authoritative server on same system (can load authoritative data into cache)

Notes: The cache design will be key to the performance of the resolver. We need to spend some time on getting a good design. I've included in this task cache persistence which seems to be a definite requirement and time for writing inspection tools (useful for debugging and later support). I have also added the idea of pre-loading the cache - perhaps by writing something that converts a zone file into a cache dump file format. I see two uses for it: the first is for testing - allowing us to set up particular cache configurations for particular tests. The second is when an authoritative and recursive server run on the same system; one idea was that the recursive server answers all queries, with the authoritative data being pre-loaded into its cache (and marked with a "do not delete" flag).
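The TTL expiration item above can be illustrated with a lazily expiring cache; the class is a toy under stated assumptions (no negative caching, no trust ranking, no eviction policy), with an injectable clock so expiry is testable:

```python
import time


class SimpleCache:
    """Minimal TTL-honouring cache sketch: entries expire lazily on
    lookup rather than via a background cleaner."""

    def __init__(self, clock=time.monotonic):
        self._store = {}
        self._clock = clock        # injectable for testing

    def put(self, key, rrset, ttl):
        # Store the data together with its absolute expiry time.
        self._store[key] = (rrset, self._clock() + ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        rrset, expiry = entry
        if self._clock() >= expiry:
            del self._store[key]   # lazy expiration on access
            return None
        return rrset
```

A real design would also need an active cleaning strategy, since lazy expiration alone never reclaims entries that are no longer queried.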

Nameserver address store

  • Optimisation to have only two outstanding queries on the network per zone. Currently, ZoneEntry asks all of its NameserverEntry objects to obtain their IP addresses. If we count packets on the network and never go over four (two per query - A and AAAA need to be counted separately), it should work, but a mechanism to fetch everything that is already in the cache would be nice.
  • Put some logging there; currently the NSAS is completely silent.
  • Dispatching of callbacks in an exception-safe manner. See the TODO file for details.
  • The zone entry hash table should have multiple LRU lists, each for a subset of the hash cells, to distribute the load between multiple locks (at present lock contention arises easily, with the LRU mutex as the bottleneck).
  • Remove the LRU from the nameserver hash and use weak pointers there (this eliminates duplicate nameserver entries, and every existing entry remains reachable; details in the TODO file).
  • NSAS persistence - dumping and loading

Notes: Work seems to be well under way here. However, the ability to save and restore the address store (across server restarts) has been requested.
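The per-zone outstanding-query limit in the first bullet amounts to a counting gate; a sketch under assumptions (names are hypothetical, and the limit is left configurable since the backlog discusses both "2 per zone" and "never go over 4" counting A and AAAA separately):

```python
class FetchLimiter:
    """Cap the number of in-flight address fetches per zone."""

    def __init__(self, limit=2):
        self.limit = limit
        self._in_flight = {}       # zone name -> outstanding fetch count

    def try_start(self, zone):
        """Return True and count the fetch if under the limit,
        False if the fetch should be deferred."""
        if self._in_flight.get(zone, 0) >= self.limit:
            return False
        self._in_flight[zone] = self._in_flight.get(zone, 0) + 1
        return True

    def finished(self, zone):
        # Called when a fetch completes (answer, timeout or error).
        self._in_flight[zone] -= 1
```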

Lookup logic in non-DNSSEC case
The basic requirements and API design (and the writing of the associated test cases) form part of the current sprint. Other tasks include:

  • Tracking Queries (to the server) and Fetches (data from other servers)
    • Loop detection and avoidance
  • Fetches to other servers
    • Randomisation of query ID => random number generator
    • Port randomisation

Notes: this is the core of the resolver - actually receiving a query and following the chain of referrals until an answer is received. There are a number of issues here, not least those associated with the Kaminsky attack of a couple of years ago.
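The query-ID and port randomisation bullets both amount to drawing values from an unpredictable source; a minimal sketch (function names are illustrative) using the OS CSPRNG via Python's SystemRandom:

```python
import random

# Post-Kaminsky, both the 16-bit query ID and the UDP source port must
# be unpredictable; SystemRandom wraps the operating system's CSPRNG.
_rng = random.SystemRandom()


def random_query_id() -> int:
    """Uniform 16-bit DNS message ID (0..65535)."""
    return _rng.randrange(0, 1 << 16)


def random_source_port(low=1024, high=65535) -> int:
    """Random ephemeral source port; the bounds avoid privileged ports
    and are a policy choice, not a protocol requirement."""
    return _rng.randint(low, high)
```

Together the two values give roughly 32 bits of entropy per outstanding fetch, which is the point of the mitigation.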

Lookup logic in DNSSEC case

  • Design
  • Loading configured trust anchors
  • Extension to lookup design
    • Path to look up chain of trust

Notes: extension of the lookup logic to the DNSSEC case - the determination that a zone is signed and that RRSIG should be expected, following the chain of trust.

DNSSEC Validation
Some basic functions required of a validating resolver:

  • Checking that RRSIG matches RRset + DNSKEY
    • Algorithms: SHA-1, SHA-2, GOST
  • Extend to checking that at least one RRSIG matches RRset + one key in DNSKEY RRset
    • Check that RRset has a RRSIG for all algorithms represented in DNSKEY RRset
  • Checking that DS record matches DNSKEY record
    • Checking that at least one DS in parent zone matches at least one DNSKEY in child zone
  • Validation using HSM
    • Key access
    • Signature checking
  • Authenticated denial of existence
    • NSEC/NSEC3 validation

Notes: the basic tasks associated with DNSSEC validation.
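Setting the cryptography itself aside, the RRSIG/DNSKEY matching rules in the list reduce to set comparisons; a sketch where records are represented as hypothetical (algorithm, key tag) pairs and actual signature verification is elided:

```python
def some_rrsig_matches(rrsigs, dnskeys):
    """'At least one RRSIG matches one key in the DNSKEY RRset':
    here, matching means an identical (algorithm, key tag) pair;
    the cryptographic check on the winning pair is elided."""
    return bool(set(rrsigs) & set(dnskeys))


def rrsigs_cover_all_algorithms(rrsigs, dnskeys):
    """'RRset has an RRSIG for all algorithms represented in the
    DNSKEY RRset': every DNSKEY algorithm must appear among the
    RRSIG algorithms."""
    dnskey_algs = {alg for alg, _ in dnskeys}
    rrsig_algs = {alg for alg, _ in rrsigs}
    return dnskey_algs <= rrsig_algs
```

The DS-to-DNSKEY check is structurally the same: intersect the parent's DS set with digests of the child's DNSKEY set and require at least one match.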


  • Demultiplexer: requirements and design. The Demux component matches incoming responses with outgoing queries; this task is concerned with drawing up the requirements and outline design for such a component.
  • Demultiplexer: implementation


Update auth/benchmarks/query_bench
So that it can use the in-memory data source. This should be pretty easy, and will soon be necessary.


Asynchronous I/O
Although there are some diagrams in the "doc" subdirectory of src/lib/asiolink, the code is quite complicated. A class diagram plus an explanation of how the classes are used in a recursive DNS server would be helpful.


Last modified on Jan 17, 2011, 1:51:35 PM