This is only an overview. More detailed documents on the topics mentioned here are usually available, and an API reference can be generated from the code. This document is meant to help you get oriented and know where to look.
By modularity we mean that it should be possible to take a part of the system and replace it without destroying the rest, or even to simply remove the parts you do not need. This should be possible without writing code, compiling or similarly complicated steps, unless there is a strong reason for them (for example performance).
New functionality will be required, including functionality we have not yet thought of. It should be possible to add it without rewriting the existing parts, and it must not require going through 42 different places in the code. So either a hook should exist, some place to plug it in, or there must at least be a single place where the hook does not exist yet but can be added.
A bug in one place should not bring the whole system down. If some part has a problem, it should handle it somehow (but not at the cost of wrong behaviour) or, if that is not possible, only the smallest possible part should crash. In that case, the system should try to recover, for example by restarting the crashed parts.
It must follow protocol specifications and do the Right Thing. Internally, each component should have well-defined behaviour that is both followed and tested.
We prefer to follow the old „one task, one process“ philosophy where possible. This means that if a problem is too complicated, it is split into smaller problems, each running in its own process. This separates the problems, which helps modularity and extensibility, and it isolates crashes, which helps reliability.
This isn't possible in all cases, for example for performance reasons. In such cases, some kind of plug-in (like a shared library that is dlopened at runtime depending on configuration) would be used to keep at least some possibility of extending the current behaviour.
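For the C++ parts, the plug-in mechanism would be dlopen() of a shared library; in Python, the closest analogue is loading modules by name at runtime. A minimal sketch of the idea, where the configuration key and the plug-in names are invented for illustration (standard-library modules stand in for real plug-ins):

```python
import importlib

def load_plugins(config):
    """Load the extension modules named in the configuration.

    In C++ this would be dlopen() of a shared library; importlib
    provides the equivalent runtime loading in Python. The "plugins"
    configuration key is an assumption made for this example.
    """
    plugins = {}
    for name in config.get("plugins", []):
        plugins[name] = importlib.import_module(name)
    return plugins

# Demonstration: "load" two standard-library modules as if they were plug-ins.
loaded = load_plugins({"plugins": ["json", "sqlite3"]})
```

Which plug-ins get loaded is thus decided purely by configuration, with no recompilation involved.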
Testing & reviewing
We have some rules regarding new code. All code that goes to the main branch (trunk) must be reviewed by a different programmer, and a similar process applies to patches received from outside. The main branch must compile and run at all times.
All code (at least production code) should have automated tests, preferably written before the code itself.
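To illustrate the test-first idea, here is a small sketch using the standard unittest module. The parse_port() helper and its rules are invented for the example; the point is that the test cases specify the behaviour before (or alongside) the implementation:

```python
import unittest

def parse_port(text):
    """Parse a port number from configuration text (illustrative helper)."""
    port = int(text)
    if not 0 < port < 65536:
        raise ValueError("port out of range: %d" % port)
    return port

class ParsePortTest(unittest.TestCase):
    """Tests written as the specification of parse_port()."""
    def test_valid(self):
        self.assertEqual(parse_port("53"), 53)

    def test_out_of_range(self):
        self.assertRaises(ValueError, parse_port, "70000")
```

Such tests live next to the code (see the „tests“ subdirectories mentioned below) and run automatically.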
Python is the preferred language, for its ease of use and readability. Where Python is not the way to go (for example for performance reasons), C++ is used.
Here you can find how the processes and components are organized, and what each of them does and how. A picture is available on the DesignDiagrams page.
This part does nothing „visible“ by itself; it is simply what the rest of the system needs in order to run.
Boss Of Bind (BOB)
It does what the name says: it sits there and makes sure everyone is alive and working happily. It starts all the processes, restarts them if needed and shuts them down at the end.
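The supervision loop can be sketched roughly as follows. This is not the real BOB code: the Supervisor class, its method names and the process table are assumptions for the example, with a fake process object standing in for subprocess.Popen:

```python
class Supervisor:
    """Minimal sketch of the boss's keep-alive duty (not the real BOB API)."""
    def __init__(self, spawn_funcs):
        # spawn_funcs maps component name -> callable returning a process-like
        # object with poll() (None while alive), as subprocess.Popen has.
        self.spawn_funcs = spawn_funcs
        self.processes = {name: spawn() for name, spawn in spawn_funcs.items()}

    def check_and_restart(self):
        """Restart every component whose process has exited."""
        restarted = []
        for name, proc in self.processes.items():
            if proc.poll() is not None:      # process has exited
                self.processes[name] = self.spawn_funcs[name]()
                restarted.append(name)
        return restarted

class _FakeProc:
    """Stand-in for subprocess.Popen, so the sketch runs without real children."""
    def __init__(self):
        self.alive = True
    def poll(self):
        return None if self.alive else 0

sup = Supervisor({"msgq": _FakeProc})
sup.processes["msgq"].alive = False          # simulate a crash
restarted = sup.check_and_restart()
```

In the real system the boss would also apply some policy, such as giving up after repeated crashes, rather than restarting blindly.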
Message bus (msgq)
Because there are many processes, they need to talk to each other. All of them connect to this bus, which routes the messages where they are needed.
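The routing idea can be sketched as a publish/subscribe table. The group names and the callback interface are invented for the example; the real msgq protocol is richer (it supports replies, for instance):

```python
class MessageBus:
    """Sketch of msgq's routing: deliver to every subscriber of a group."""
    def __init__(self):
        self.subscribers = {}    # group name -> list of callbacks

    def subscribe(self, group, callback):
        self.subscribers.setdefault(group, []).append(callback)

    def send(self, group, message):
        """Forward the message to each subscriber, return delivery count."""
        delivered = 0
        for callback in self.subscribers.get(group, []):
            callback(message)
            delivered += 1
        return delivered

bus = MessageBus()
received = []
bus.subscribe("Auth", received.append)       # a component joins a group
bus.send("Auth", {"command": "shutdown"})    # another component sends to it
```

A process only needs to know group names, not which other processes exist, which is what keeps the components decoupled.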
This one does not exist yet, but the plan is to have as little privileged code running as possible. Because privileges are needed to create sockets on ports below 1024, this process would run and do only that. It would be a slave process of the boss, talking only to it. The boss, in turn, would talk with the process needing the socket and pass the socket to it.
Configuration & interface
Each component has its own configuration. Other components may look into it, but shouldn't change it. Each component provides a file which describes its configuration options.
When a request to change the configuration comes in, it is sent to the component first, which can check it, change it or even reject it.
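The check-before-commit flow can be sketched like this. The component, its option names and the method names are made up for the example; the real interface is defined by the specification files mentioned above:

```python
class AuthComponent:
    """Illustrative component owning a "port" option."""
    def check_config(self, new_config):
        # The owner validates (and could adjust) the proposed change.
        port = new_config.get("port", 53)
        if not 0 < port < 65536:
            raise ValueError("bad port: %d" % port)
        return new_config

class ConfigManager:
    """Sketch of the manager: store config, but let owners vet changes."""
    def __init__(self):
        self.store = {}
        self.owners = {}

    def register(self, name, component, defaults):
        self.owners[name] = component
        self.store[name] = defaults

    def update(self, name, new_config):
        # Only store the change once the owning component accepts it.
        accepted = self.owners[name].check_config(new_config)
        self.store[name] = accepted
        return accepted

mgr = ConfigManager()
mgr.register("Auth", AuthComponent(), {"port": 53})
mgr.update("Auth", {"port": 5300})           # accepted by the component
```

A rejected change (say, port 0) raises an error and the stored configuration stays untouched.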
Configuration manager (cfgmgr)
Its task is to store configuration (with various possible storage backends) and provide it to other parts of system.
Command control (cmdctl)
It is the interface to the outside world and the gatekeeper of the system. It provides commands to manipulate the running system (over a REST-ful API) and authenticates who can connect. If the command is authenticated, it is let through to the message bus.
There is currently one frontend (a command-line client), but more are expected in the future.
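The gatekeeper role boils down to: authenticate, then forward. A minimal sketch, where the token-based check and the function names are placeholders rather than the real cmdctl authentication scheme or REST details:

```python
def handle_command(token, command, valid_tokens, forward_to_bus):
    """Authenticate a command; forward it to the bus only if allowed."""
    if token not in valid_tokens:
        return {"status": 401, "error": "not authenticated"}
    forward_to_bus(command)
    return {"status": 200}

forwarded = []
ok = handle_command("secret", {"command": "shutdown"},
                    {"secret"}, forwarded.append)
bad = handle_command("wrong", {"command": "shutdown"},
                     {"secret"}, forwarded.append)
```

Nothing reaches the message bus unless authentication succeeds, which is the whole point of having a single gatekeeper process.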
It gathers statistics sent by other parts of the system, keeps them and provides them to the user, either through cmdctl or by itself in other ways (such as an HTTP server).
This is the part that does the actual work. It will probably merge somehow with the recursive resolver and cache in the future, but those plans are not concrete yet.
There can be multiple instances of it. It listens on port 53 (or another port) and answers the „usual“ queries. If it finds a different type of request (like a request to send zone data through a zone transfer), it redirects it to the corresponding daemon.
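That dispatch can be sketched as follows, assuming just two categories of request; the real auth server inspects the parsed DNS message, not a plain query-type string:

```python
def dispatch(qtype, answer_locally, hand_to_xfrout):
    """Answer usual queries locally; redirect zone transfers elsewhere.

    qtype is the DNS query type as a string; the two callables stand in
    for the local answering code and the handoff to the transfer daemon.
    """
    if qtype in ("AXFR", "IXFR"):       # zone transfer requests
        return hand_to_xfrout(qtype)
    return answer_locally(qtype)

answered = dispatch("A", lambda q: "answer:" + q, lambda q: "xfrout:" + q)
redirected = dispatch("AXFR", lambda q: "answer:" + q, lambda q: "xfrout:" + q)
```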
It is not a separate process, but rather a library with an API. It stores the zone data. There is one backend (sqlite) now, but more options are expected in the future, so users may choose according to their needs or provide their own.
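The idea of interchangeable backends behind one API can be sketched as an abstract interface plus a trivial in-memory implementation. The method names here are assumptions; the real data source API (and the sqlite backend) is considerably richer:

```python
from abc import ABC, abstractmethod

class DataSource(ABC):
    """Sketch of a common API every storage backend must implement."""
    @abstractmethod
    def get_record(self, name, rtype):
        """Return the stored data for (name, rtype), or None."""

    @abstractmethod
    def put_record(self, name, rtype, data):
        """Store data under (name, rtype)."""

class MemoryDataSource(DataSource):
    """Trivial backend keeping everything in a dict (illustration only)."""
    def __init__(self):
        self._records = {}

    def put_record(self, name, rtype, data):
        self._records[(name, rtype)] = data

    def get_record(self, name, rtype):
        return self._records.get((name, rtype))

ds = MemoryDataSource()
ds.put_record("example.com.", "A", "192.0.2.1")
```

A user-provided backend would subclass the same interface, so the rest of the system never needs to know which storage is in use.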
A single-shot process to load a common zone file into the data source. It is run manually.
Transfer out (xfrout)
If a request to send zone data arrives, the auth daemon hands the request over here (with the opened TCP connection, if applicable) and this one sends the data, taken from the data source. It also sends DNS notify messages.
Transfer in (xfrin)
This one can contact a remote server, ask for zone data, receive them and store them in the data source. It does not decide by itself when this happens; it must be instructed to do so.
It receives notifies (from the auth daemon), keeps track of zone refresh timeouts, and runs xfrin when needed.
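The timeout bookkeeping can be sketched like this. The data layout and the numbers are invented for the example; the real manager takes the intervals from the zones' SOA records:

```python
def zones_due(zones, now):
    """Return (sorted) names of zones whose refresh interval has expired.

    zones maps zone name -> (last_refresh_timestamp, refresh_interval),
    both in seconds; an expired interval means xfrin should be run.
    """
    return sorted(name for name, (last, interval) in zones.items()
                  if now - last >= interval)

zones = {
    "example.com.": (1000, 3600),   # refreshed at t=1000, refresh every 3600s
    "example.org.": (4000, 3600),   # refreshed recently, not yet due
}
due = zones_due(zones, now=5000)
```

A notify would simply mark the zone as due immediately instead of waiting for its timer.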
Most of the code is under the src directory. The lib directory contains code common to many parts, while bin contains what the processes do not share.
Many of the directories contain a „tests“ subdirectory, where unit tests can be found.
Some of the code can be found in ext (third-party code imported into the project) and tools (development tools rather than actual code).