These are notes from the in-person meeting at Redwood City. Some of this is unorganized or without context. Please link to other documents or wiki pages or provide more details if needed.

Monday, October 26, 2009

Week's goals:

  • overall project coordination
  • agreement on status and what we are working on for the next five months
  • meeting face to face
  • coding
  • runnable server goal by end of this week

To discuss this week

(list not in order)

  • test framework
  • boost libraries
  • user stories (jinmei), how to use, extreme programming (shane)
  • tweakable knobs (mgraff)
  • python programming, issues, available APIs
  • c++ issues, available APIs
  • licensing (Tuesday)
  • style guide
  • how to use message queue (mgraff) (Monday)
  • documentation, doxygen
  • documentation before coding? documentation while coding
  • testing suite, functional, unit
  • package structure and source layout
  • modularity
  • user interface discussions
  • performance goals (jreed added this later)
  • python interfaces, c++ integration
  • convention over configuration (mgraff)
  • overall architecture and modules (Tues)
  • buildbot, code build reports (jreed added later)
  • brainstorming about high performance data source architecture (jinmei added later)
  • bootstrapping
  • configuration store formats
  • clustering
  • repository organization, e.g., where .cc, .h, .py, etc. should go
  • build tool: is automake a good one?

shane discussed a few presentations he gave

(let's link to presentations in here. Action: shane make presentations available)

  • DENIC registrars meeting, technical registration
  • timeline:
    • year 1 - authoritative (contracted)
    • year 2 - recursive
    • year 3 - fully functional
    • year 4 - BIND 9 drop-in replacement, administrator experience, bug compatibility?
    • year 5 - others, clustering, load balancing, web interfaces, GUI tray?

  • must support DNSSEC from day 1
  • 5-10% performance increase
  • resolver library independent of openssl
  • low power, embedded
  • validating resolver library
  • data sources be able to suck in djbdns format (maybe cdb too)
  • out of memory data store, faster load time (serve instantly)
  • SQL changes can be fully dynamic without reload (add zones for example)
  • "free" SQL model is like DLZ, outside may update
  • BIND 10 to do both ways: captive (only BIND 10 may update) and "free" SQL
  • all our current sponsors are TLDs (we need more diversity of sponsors)
  • robustness / error handling

(link to presentation, link to blog article)

  • from security point of view
  • ISC is great for code reviews - always done before committed to HEAD
  • with exceptions: readability, error handling is separated
  • if don't know results, exit program
  • some aren't fans of exceptions
  • interface information gathering, 2 sessions with shane and larissas

(link to presentation, link to email message about it)

  • juniper in raw xml, has interface to that
  • config modes, static mode (BIND 9 style)
  • dynamic configs
  • clone configurations
  • meta zones
  • what it sends is data
  • command channel will be between machines, maybe SSL certs
  • what do our sponsors expect for April deliverable
  • dry run of release process for first milestone (January week 1)
  • shane's goal is to run it personally
  • no one is working on these 1st year goals yet:
    • logging
    • datasrc API (was on mgraff's list)
    • TSIG
    • ACL
    • zone transfers (axfr, ixfr, notify)
      • benchmark zone transfers
    • monitoring


round table of components / status

  • c-channel (mgraff)
  • Action: jreed write how to use C-channel API doc
  • working with jelte for data types
  • need to discuss network layer
    • boost asio framework?
  • c-channel data format compatible with BIND 9 rndc; there are perl and java implementations
  • dns message api (jinmei)
  • using autoconf/automake
  • need feedback from user's point of view
  • 64 KB buffer, open issues
  • need to be able to parse DNS in python, linkable in python
  • data element (jelte)
  • config manager later
  • abstraction for message API
  • c++ dynamic cast
  • boost.any array of any type or typeany
  • bigtool (cnnic)
  • showed interface with table completion and builtin help (nice!)
  • comma is optional separator (blank space too)
  • should learn options / use from config manager
  • order doesn't matter
  • python 2 problem
    • ask customers is python 3 okay?
    • python recommends python 2.6?
    • any python coding document for 2 to 3 compatibility?
  • start server via bigtool?
  • "cannot connect to messaging, nobody to talk to"
  • configure a zone, autostart auth server
  • say start 3 auth servers
  • start zone transfer component, stop component
  • everything connects via c-channel (later we discussed a config or userfunctool as an intermediary)
  • turn on or off zone updates for a zone
  • periodic beacon (mgraff)
  • changes channel, spew out changes (jelte)
  • roll back or 2 phase commit
  • pipes?
  • be able to ask command channel: who is on? not now, need to know to meet up
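The "roll back or 2 phase commit" idea for pushing changes out could be sketched roughly like this. A hedged Python toy written for these notes: the Module class and the prepare/commit/rollback surface are invented for illustration, not an agreed design.

```python
# Toy two-phase commit for config changes: a coordinator asks every module
# to prepare the change, commits only if all agree, otherwise rolls back.
class Module:
    def __init__(self, name, accept=True):
        self.name = name
        self.accept = accept        # whether this module's validation passes
        self.config = {}
        self.pending = None

    def prepare(self, change):
        if not self.accept:
            return False            # e.g. validation failed
        self.pending = change
        return True

    def commit(self):
        self.config.update(self.pending)
        self.pending = None

    def rollback(self):
        self.pending = None

def apply_change(modules, change):
    # phase 1: everyone prepares; phase 2: commit only on unanimous agreement
    if all(m.prepare(change) for m in modules):
        for m in modules:
            m.commit()
        return True
    for m in modules:
        m.rollback()
    return False

mods = [Module("auth"), Module("xfer")]
print(apply_change(mods, {"port": 5300}))   # -> True, both committed
mods.append(Module("stats", accept=False))
print(apply_change(mods, {"port": 53}))     # -> False, everyone rolled back
print(mods[0].config)                        # -> {'port': 5300}
```

The point of the sketch is only the ordering: no module applies a change until all of them have accepted it, which is what makes rollback cheap.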

figure out this week's goals

  • UDP DNS query handler
  • boss of BIND10
  • c-channel
  • BigTool?
  • stats
  • Parking Lot (auth server) (authors.bind for those here this week :)
  • cfg parser, at least remember what is added and removed as long as is running (or save database with python pickle)
  • get components talking together
  • agile programming, always get feedback from customer, quick feedback, willing to refactor
  • stats for some modules
  • periodically broadcast: I am running and also accept requests
  • command channel won't have message per query
  • data source - runtime loadable modules
  • in memory data source sharing, separate process won't use c-channel
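The "save database with python pickle" idea for the cfg parser might look roughly like this. A sketch only; the file name and config layout are invented here.

```python
# Minimal persistence for the cfg parser: remember what was added/removed
# across restarts by pickling the running config dict to disk.
import os
import pickle

CONFIG_DB = "b10-config.db"        # hypothetical file name

def load_config():
    """Return the saved config, or a minimal default on first start."""
    if os.path.exists(CONFIG_DB):
        with open(CONFIG_DB, "rb") as f:
            return pickle.load(f)
    return {"zones": {}}

def save_config(config):
    with open(CONFIG_DB, "wb") as f:
        pickle.dump(config, f)

config = load_config()
config["zones"]["example.org"] = {"datasrc": "sqlite"}   # "add a zone"
save_config(config)
assert "example.org" in load_config()["zones"]           # survives a restart
```

Pickle keeps this trivially simple for the milestone; a real config store (sqlite was floated on Tuesday) would replace it later.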

c-channel introduction (mgraff)

  • msgq (to be renamed) daemon
  • components (like auth) connect over TCP (to change to Unix domain sockets)
  • auth subscribed to (for example) zoneadd.* and zoneupdate.*
  • channel and instances
  • add and remove instances at will
  • python dict containing information
  • (like if serial changes for update change)
  • when connect you get a unique id
  • like dbus, shared message bus
  • channel.instance, later cluster name could be part of the channel
  • this is a transport, not a full scale protocol
  • python
  • x = ISC::CC::Session()
  • x.localname
  • x.group_subscribe("groupname", "instance")
  • x.group_sendmsg(message..., groupname, instance)
  • when sending, instance is required
  • x.group_recvmsg(False)
    • false is blocking, true (default) is non-blocking - change this API because of double-negative? maybe some macro/constant for this?
  • we looked at
  • tcp.socket
  • could make it use string and send raw XML
  • doctest: unit testing for each function, built in to Python; it is regression testing where inline docs must match code
  • Action: ???? CNNIC to make a python coding improvement change here (what was this????)
  • ping - who is here?
  • subscribe promiscuously (not in this implementation) - debugging, secret subscriber, get *.*
  • examples:
    • listen (group_subscribe): command (group name is command), command.hostname
    • don't have to be subscribed to listen
    • auth - listen AuthConfig.* (like "name" : "flame" in file dict format)
    • listen ZoneNotify.specificlabel
    • xfer daemon: group_sendmsg: ZoneNotify.specificlabel { "serial" : "...." }
  • cost per subscription? what if 200,000 zones?
  • names are arbitrary, specified same by users of it
  • Statistics.QueryStats
  • idea: name to number registration or handle? as a hint, client keeps map
  • another API layer for outside users?
  • security / who is allowed to communicate? subscribe?
    • no authentication now
    • ideas: have a key that identifies
    • session key could be in a static file
  • later: msgq daemon have config to know who can send to, subscribe to
  • Action: authentication tokens (mgraff)
  • Action: use Unix domain sockets (mgraff)
  • if msgq daemon dies, will lose subscription details
    • boss daemon restart it, client/users will notice and resubscribe
    • auth server will still be serving
    • maybe if dies, everything should be restarted?
    • boss server provide unique tokens to everyone
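The session API sketched above might look like this in use. A toy, in-process stand-in written for these notes: FakeBus and all its behavior are invented so the call shapes can be tried without a daemon — the real ISC::CC::Session talks to the msgq daemon over a socket.

```python
# In-process mock of the msgq bus so the group_* call shapes can be exercised.
import itertools
from collections import defaultdict, deque

class FakeBus:
    """Stand-in for the msgq daemon: routes messages between sessions."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # (group, instance) -> sessions
        self.ids = itertools.count(1)

class Session:
    """Mimics the ISC::CC::Session surface described in the notes."""
    def __init__(self, bus):
        self.bus = bus
        self.localname = "client-%d" % next(bus.ids)  # unique id on connect
        self.inbox = deque()

    def group_subscribe(self, group, instance="*"):
        self.bus.subscribers[(group, instance)].append(self)

    def group_sendmsg(self, msg, group, instance):
        # deliver to exact-instance subscribers and wildcard subscribers
        for key in ((group, instance), (group, "*")):
            for sub in self.bus.subscribers[key]:
                sub.inbox.append((msg, group, instance))

    def group_recvmsg(self, nonblock=True):
        # per the notes, True (the default) is non-blocking
        return self.inbox.popleft() if self.inbox else None

bus = FakeBus()
auth = Session(bus)
auth.group_subscribe("ZoneNotify")                       # auth listens for notifies
xfer = Session(bus)
xfer.group_sendmsg({"serial": "2009102601"}, "ZoneNotify", "example.org")
msg, group, instance = auth.group_recvmsg()
print(msg["serial"])   # -> 2009102601
```

The payload is a plain Python dict, matching the "python dict containing information" bullet; the transport carries data only and knows nothing about DNS.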


  • netconf is like bigtool, maybe we will have a configuration wrapper for it?
  • our front end should be common for other DNS servers?
  • like NSD or BIND 9 talk to our c-channel daemon
  • or just have their config plug into our userconfig control
  • userconfigcontrol is a gateway to keep end users from sending junk to our control channel?
  • bigtool could be netconf?
  • then our bigtool could be other stuff??
  • control DNS over DNS (jelte?)
  • whatever it is, well documented, publish so others can (and will) use it
  • so maybe just the userconfigcontrol or c-channel groupnames.instances
  • need command&control daemon simply to apply ACLs and permissions of who can control and configure, such as with webgui or BigTool
  • boss can't, as its job is to be simple (process dies, signal handler, restart)
  • maybe use config daemon?
  • boss uses pipes to communicate (stdin/stdout)??
  • xml from BigTool?
  • at first, raw TCP for BigTool, later SSL?
  • if msgq or boss dies, components kill themselves?
  • or if boss restarts if it can re-attach to msgq, then recover and just run
  • boss can provide a list of running processes
  • error if attempt to do some steps twice (what was context of this?)
  • typos happen

Tuesday, October 27, 2009

process model

  • discussed process model diagram
  • data sources will use different communication channel
  • boss process spawn communication daemon (msgq) so all can communicate
  • and boss start config daemon so we know what to do
  • multiple processes
  • maybe boss, config, communication separate modules in same process? for now separate, but maybe later
  • goals: easy to understand, easy to debug
  • boss process gets config daemon as well
  • c-channel knows nothing about DNS
  • config daemon is abstract and agnostic, any type of config information (no understanding of DNS)
  • config store could be sqlite (as an idea)
  • to change config, use a tool, like BigTool
  • c+c acl rules
  • how easy to continue to support BIND 9 configuration?
  • 2-part process - BIND 9 support could be done via boss (talk about later as that is year 3)
  • who is allowed to make changes? what changes? who is allowed to communicate?
  • c+c rename to "command authorization daemon"
  • logging not as a process
  • not via command channel
  • everything handle logging on their own
  • go to BigTool: "show me last 20 logs for"
    • ring buffer?
  • authorization may be done via config daemon or by separate process "command auth daemon"
  • auth servers, probably one per core
  • dispatch daemon for network listener, ring buffer?
  • not plan for now
  • build both ways and compare?
  • auth server could be dispatcher ("I don't know answer, has recursive bit set, so pass it on"): auth, cache, pass off to recursive
  • or 3 processes: auth, recursive or both (combined)
  • auth server have functionality of recursive and DDNS?
    • may not have entire zone or all zones
    • for very large zones you have to split up
    • multiple servers with overlap
    • everyone knows where everything is, just doesn't have all data
  • try to get bottleneck to be network
  • hand off I/O descriptor to other process, TCP stream
  • one xfer process to handle multiple transfers
  • rate limit to limit before run out of resources
  • no separate process for data sources unless some abstraction to SQL server
  • different data sources
  • config store could be anything
  • bigtool have a "nsupdate" personality: some commands talk to config daemon and other commands talk to DDNS Update daemon
  • bigtool asks what commands are acceptable? a schema, version agnostic
  • all daemons and tools on connect: "here is my configuration blob"
  • if we have to change, we will change
  • benchmark threading, multiprocesses
  • what happens when BIND 10 start with empty config store?
  • need minimal configuration, default password? only accept on localhost
  • tool "init"
  • or tool to create shared secret
  • long term design goal -- multiple machines, cluster sharing config sources and data source
  • need to define "cluster"
  • coding more important now
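The ring-buffer idea floated for "show me last 20 logs" could be as simple as a bounded deque per module, handed back over the command channel on request. A sketch; the class name and size are invented here.

```python
# Each module keeps its recent log lines in a fixed-size ring buffer;
# old entries fall off automatically once the buffer is full.
from collections import deque

class LogRing:
    def __init__(self, size=20):
        self.ring = deque(maxlen=size)   # bounded: holds the last `size` lines

    def log(self, line):
        self.ring.append(line)

    def last(self, n=20):
        return list(self.ring)[-n:]

ring = LogRing(size=20)
for i in range(100):
    ring.log("query %d" % i)
print(ring.last(3))   # -> ['query 97', 'query 98', 'query 99']
```

This keeps logging out of the command channel entirely, matching the "everything handles logging on their own" bullet; BigTool would only pull the buffer when asked.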


  • we need statement on licensing so we can point to it
  • Action: jreed to write a proposal: state what our license is, what is compatible with our goals and license, and binary/end result goals; and list what we can use


Wednesday, October 28, 2009

parkinglot HOWTO

  • Check out a fresh copy of the code:
$ svn co svn+ssh://
  • Build the various daemons:
$ cd bind10/branches/f2f200910
$ autoreconf
$ ./configure
$ cd src/bin/
$ cd msgq
$ make
$ cd ..
$ cd parkinglot
$ make
$ cd ..
  • Put the daemons in the boss directory:
$ cd bind10
$ ln -s ../msgq/msgq .
$ ln -s ../parkinglot/parkinglot .
$ ln -s ../bind-cfgd/bind-cfgd .
  • Start the boss:
$ ./bind10 --verbose

That's it!

Last modified on Nov 4, 2009, 5:20:52 PM