Review of Last Sprint

Tasks accomplished in previous sprint

11 tasks
33 estimate points

Remaining tasks

9 tasks
39 estimate points (but 12 are almost completed)

Stephen: Estimate we finish this work in the next sprint.

Stephen: Any comments on the past 2 weeks? General organization, tickets that needed to be created...?

Jelte: Identified several local places for refactor... we knew we would need those anyway.

Stephen: Refactoring should be right at beginning of Year 3.

Shane: Where are these captured?

Stephen: At the moment we have tickets that just say "refactor". But I will update one of the tickets with all of the things we need to do.

Jelte: grep code for TODO.

Stephen: I do that as well. We should make that something we do tackle... grep for TODO and do them... we should deliver code without any TODOs at all.
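The grep-for-TODO check mentioned above could be scripted so it runs before a release. A minimal sketch; the /tmp demo tree stands in for the real source directory, which will differ:

```shell
# Demo tree with a leftover TODO (illustrative; real paths will differ).
mkdir -p /tmp/todo_demo/src
printf '// TODO: refactor this loop\n' > /tmp/todo_demo/src/main.cc

# List every remaining TODO with file and line number
# (-r recurse, -n line numbers, -I skip binary files).
grep -rnI "TODO" /tmp/todo_demo/src

# A release gate: flag the tree if any TODOs remain.
if grep -rqI "TODO" /tmp/todo_demo/src; then
    echo "TODOs remain; fix them or move the notes into maintenance docs."
fi
```

The same `grep -rqI` test could be wired into a release checklist or CI step so "ship without TODOs" is enforced rather than remembered.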

Shane: Put those in the Trac site?

Stephen: They vary... 'we should refactor' or 'the code should do this as well'.

Shane: Some TODO are general observations by the coder... they may *never* be done.

Stephen: Those should go in maintenance documentation. Comments in the file document the code, and should tell you things like that: the algorithms, how everything is put together, and so on.

Stephen: The question is "if we're shipping code and there is a TODO then why haven't we done the TODO?"

Stephen: So we're happy how things have gone? [ silence ]

Tasks Outstanding

Most completed.

Tasks for Next Sprint

Jeremy: Can we suggest missing tasks? TCP on the resolver crashes it... that seems pretty important.

Stephen: Yes if it crashes the resolver then we need to fix it.

Jeremy: It's not closing the sockets.

Stephen: Sounds easy to fix. Please add to next sprint!

Stephen: If we add features people will not be impressed, but they will be unhappy if it doesn't work! So we should focus on testing. What do people think?

Jeremy: Yes.

Larissa: Yes.

Likun: Yes.

Jelte: Testing and usability, I would think. We moved one ticket out of immediate... which is using defaults from the spec file instead of hard-coded defaults. So, usability in the sense that configuration works as you would expect it to. (Ticket #518)

Stephen: How important is that?

Larissa: Seems important, but I don't know if there are worse issues we should be testing for first.

Shane: Can we talk about the testing strategy before we decide?

Testing Strategy

Stephen: Some functional testing necessary.

Stephen: Maybe in the lab, we set up our own hierarchy, zones, and so on. Then we do a series of tests, and check things are working as expected. Chasing CNAMEs, detecting loops, etc, etc. Jeremy?

Jeremy: With the exact zones and setup I can check BIND 9 and record the packets with nmsg. (I did this with authsrv.) Then reimplement it with BIND 10 and compare the differences. It would have to be done so we don't have random name servers - so one name server at a time for our initial tests.

Jeremy: One thing: we need to make it easy to set up... I can use virtual IPs... but we need an easy way to tell BIND 10 to run on different interfaces with different configurations. As it is now I would have to have a separate install of BIND 10 to test each one, and I shouldn't have to do that.

Jelte: Just because the configuration file is hardcoded?

Jeremy: And not all components respect telling where the msgq is for example.

Stephen: But just testing the resolver... shouldn't we use BIND 9 servers? Since the BIND 10 server is still being tested...

Jelte: I would also like to see an in-tree way to do this.

Shane: Sort of like the test stuff defined on the list. I don't think we'll have time for this, and we want to maximize bang for buck.

Stephen: When we run it, what queries are we going to send where? We need someone to design the tests, we need the zone files written and put in the Git repository. I guess we have an automatic script to do testing? What does nmsg do?

Jeremy: It's a packet capture tool. It has different message parsers, and one is for DNS queries, so it can export information just about DNS. A little friendlier than using tcpdump for DNS.

Stephen: And it does things like blocking off QID as well?

Jeremy: The strength of comparing BIND 9 vs. BIND 10 is that we can capture all packets to make sure it is what we expect. Versus just having the final answer, which would be an easier test.

Stephen: But if you just do final answers, you avoid timing problems.

Jeremy: Yes, but if we're doing 20 lookups when you should do 2 you would never know.

Stephen: Probably a need for both in the final product. But what is achievable between now and mid-March?

Shane: Keep in mind that we also need time to fix the bugs we find. And there will be bugs!

Jeremy: Also we need to do ad-hoc testing where we actually use it.

Stephen: You can only test what you know.

Stephen: Next 2 weeks, concentrate on getting software complete. If people run out of things to do there are a whole bunch of bugs in the backlog (not R-Team, ones not attached to any milestones).

Stephen: Then 3rd of March to 17th of March we'll do concentrated testing. How does that sound to people?

Jeremy: So... don't test now?

Stephen: We can prepare for testing, but we have tasks we have to do to say that the resolver is finished.

Jeremy: Simple testing... have a list of 5000 queries known to work, and just do those queries against BIND 9 and BIND 10.
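Jeremy's known-good query list could be checked with a simple compare step. A sketch, using canned answer files in /tmp; in a real run each file would be produced by querying one server (for example `dig +short` against the BIND 9 and BIND 10 addresses, which are not specified here):

```shell
# Canned answer files stand in for live responses from the two servers.
mkdir -p /tmp/resolver_cmp
sort > /tmp/resolver_cmp/bind9.txt <<'EOF'
www.example.com. A 192.0.2.1
www.example.com. A 192.0.2.2
EOF
sort > /tmp/resolver_cmp/bind10.txt <<'EOF'
www.example.com. A 192.0.2.2
www.example.com. A 192.0.2.1
EOF

# Sorting first makes the check order-insensitive, so round-robin
# record rotation does not count as a mismatch.
if diff -u /tmp/resolver_cmp/bind9.txt /tmp/resolver_cmp/bind10.txt; then
    echo "answers match"
fi
```

Note this only compares final answers; as discussed above, it would not catch a resolver doing 20 lookups where 2 would do, which is what the full packet-capture comparison is for.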

Jelte: I've been querying WWW.MICROSOFT.COM because it has a long CNAME chain.

Stephen: In the wild things can change.

Stephen: Jeremy, how is the documentation?

Jeremy: There's a lot that I'm working on. bindctl behavior, the new resolver configuration, also we will have the new authoritative server documentation. There are a lot of little things.

Stephen: Is there anything the resolver team can do towards helping you prepare the release of the resolver? You can put these on the task list...

Jeremy: I think there is... I'll think about that though. For example, proofreading.

Stephen: Current stuff on the milestone - tackle that. We have to create some testing tasks. Jeremy, maybe you could take that on?

Jeremy: Yes, and I think we need to brainstorm a little on 10 scenarios we want to test the resolver against.

Shane: Do that on this call?

Jeremy: Yes I think we can.

Stephen: Testing tasks, release tasks, and if you have time then you can look at the bug list of things not attached to a milestone.

Call switches to ResolverTestingBrainstorming ...

Last modified on Feb 22, 2011, 3:49:04 PM