
High Availability in Kea 1.4.0 - Design

Supported Topologies

This section describes the HA topologies to be supported by Kea running in different HA configurations. The most common topologies involve two servers running in load balancing or hot-standby mode, plus variations of these. The servers use the control channel to communicate with each other and don't need SQL database replication to synchronize the lease database. Therefore, this solution works for any lease database backend, although Memfile will have the best performance as it doesn't require communication with an external database. The performance impact matters because the Kea servers send lease updates synchronously.

It is also possible to configure Kea to use database replication, in which case the Kea servers do not perform lease synchronization themselves. However, the servers still send heartbeat messages to each other to detect failures of their peers, i.e. failures of the DHCP server process rather than a database failure.

Some of the topologies presented below include more than two DHCP servers. The extra servers are called backup servers; they receive lease updates from the load balancing and/or hot-standby servers. The primary reason for using a backup server is to have yet another copy of the lease database. It is also possible to use the backup server to respond to DHCP traffic if the other servers have died, but this requires manual intervention by the administrator via the control channel.

Load Balancing

In this configuration, two DHCP servers run at the same time and both service DHCP requests from clients. Both servers use the same algorithm for choosing which server processes a given DHCP request. The algorithm is applied to one of the client identifiers, e.g. MAC address or circuit-id, and thus gives a stable result across many requests sent by the same client. The server which discovers that it is not designated to process a packet drops it if the other server is operational; it processes such a packet only if it detects that its partner is offline.

The picture above shows three clients, two of which are served by server1. Client 3 is served by server2. Both servers exchange lease updates synchronously, so that each server has full information about the leases allocated by the entire system and can start serving all clients when its partner crashes.

This topology provides high availability for the DHCP service within the site where it is deployed and protects against a crash of one of the servers by directing the entire DHCP load onto the surviving server. It provides automated detection of partner failures and synchronization of the lease databases when a server comes back online after a failure.

The following is an example configuration for High Availability in the load balancing case.

{
    "high-availability":  [
        {
            "this-server-name": "server1",
            "mode": "load-balancing",
            "heartbeat-interval": 10,
            "max-response-delay": 60,
            "max-ack-delay": 10,
            "max-unacked-messages": 10,
            "peers": [
                {
                    "name": "server1",
                    "url": "http://server1.example.org:8080/"
                },
                {
                    "name": "server2",
                    "url": "http://server2.example.org:8080/"
                }
            ]
        }
    ]
}

The structure above is included in the configurations of all HA peers. This means that every HA peer knows the configurations of all of the peers, including itself. The this-server-name parameter specifies the name of the particular server instance, so that the instance can distinguish between its own HA configuration and other peers' configurations. The url parameter may be omitted for the server to which the given configuration refers; if it is specified, it is ignored by that server instance.

The mode parameter instructs both servers to operate in the load balancing mode.

The next set of parameters specify values according to which the server detects if its partner is offline:

  • heartbeat-interval - specifies the interval between the last response from the partner over the control channel and the next heartbeat message to be sent to the partner,
  • max-response-delay - specifies the maximum time between two successful responses from the partner over the control channel. When this time elapses, the server assumes that communication between the servers is interrupted, but it continues to examine DHCP traffic directed to the partner to see if the partner responds to queries from clients,
  • max-ack-delay - specifies the maximum time for a client trying to communicate with the DHCP server to complete a transaction. This is verified by checking the value of the secs field in DHCPv4 or the elapsed time option in DHCPv6,
  • max-unacked-messages - specifies the number of consecutive client DHCP messages which the partner has failed to respond to within the time specified by max-ack-delay.

The server sends heartbeat messages according to the specified interval. If the partner doesn't respond to commands over the control channel (including periodic heartbeat messages) for longer than max-response-delay, the server starts examining the traffic directed to the partner by checking the values of the secs field (DHCPv4) or elapsed time option (DHCPv6) and comparing them with max-ack-delay. If this value is exceeded for a message, the message is considered unanswered. If more than max-unacked-messages unanswered messages are detected, the server assumes that the partner is offline. In this case, the server can either automatically start processing packets directed to the partner, or signal the partner's failure to the monitoring system.
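
To make the detection mechanics concrete, the following is a minimal C++ sketch of this heuristic. The class name FailureMonitor and its interface are illustrative assumptions, not part of the actual design; the real logic will live in the HA hooks library.

#include <cstdint>

// Illustrative sketch only: counts the partner's unanswered messages
// based on the configured max-ack-delay and max-unacked-messages.
class FailureMonitor {
public:
    FailureMonitor(uint16_t max_ack_delay, uint32_t max_unacked_messages)
        : max_ack_delay_(max_ack_delay),
          max_unacked_messages_(max_unacked_messages),
          unacked_(0) {}

    // Called for every client message directed to the partner while
    // communication over the control channel is interrupted. The secs
    // argument is the DHCPv4 secs field (for DHCPv6 the elapsed time
    // option, converted to seconds, would be used instead). Returns
    // true once the partner should be considered offline.
    bool checkQuery(uint16_t secs) {
        if (secs > max_ack_delay_) {
            // The client has been retrying for longer than the partner
            // is allowed to take, so count the message as unanswered.
            ++unacked_;
        }
        return (unacked_ > max_unacked_messages_);
    }

    // Communication with the partner has recovered: start over.
    void reset() {
        unacked_ = 0;
    }

private:
    uint16_t max_ack_delay_;
    uint32_t max_unacked_messages_;
    uint32_t unacked_;
};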

Hot Standby

This is another configuration involving a pair of servers. One of the servers is designated as the primary; the other is a hot-standby server. The primary server processes all DHCP traffic and synchronously sends lease updates to the standby server. The standby server doesn't process any DHCP traffic as long as the primary server is online; it receives lease updates and heartbeat messages from the primary and responds to these commands. If the primary server finds that the standby doesn't respond to control commands, it signals this to the monitoring system.

The standby server detects the failure of the primary in the same way as for the load balancing case. When the standby server detects that the primary is offline, it takes over the entire DHCP traffic directed to the system.

The following is the sample configuration for the hot standby mode:

{
    "high-availability":  [
        {
            "this-server-name": "server1",
            "mode": "hot-standby",
            "heartbeat-interval": 10,
            "max-response-delay": 60,
            "max-ack-delay": 10,
            "max-unacked-messages": 10,
            "peers": [
                {
                    "name": "server1",
                    "url": "http://server1.example.org:8080/",
                    "role": "primary"
                },
                {
                    "name": "server2",
                    "url": "http://server2.example.org:8080/",
                    "role": "standby"
                }
            ]
        }
    ]
}

In this configuration, an additional parameter, role, has to be specified to indicate which server is the primary and which is the standby. The same role must not be assigned to both servers, e.g. two primary servers or two standby servers. Optionally, a backup role can be used instead of standby. The difference is that the backup server doesn't automatically detect failures of the primary. The backup server can take over the DHCP service in case of the primary server's failure, but this needs to be triggered manually.

Also, when the primary server comes back online after a failure, it doesn't automatically synchronize its database with the backup server, nor does it start the DHCP service automatically. The primary has to be manually instructed to synchronize the database with the specified backup server and then to enable the DHCP service.

Multi-site Configuration

In the classic failover case, it is possible to configure two DHCP servers in different sites such that server1 is the primary server for site 1 and a standby server for site 2; similarly, server2 is the primary server for site 2 and a standby server for site 1. In Kea 1.4.0 it won't be possible to configure a server to participate in multiple relationships, i.e. a server can't be a primary and a standby server at the same time.

To achieve the equivalent setup with Kea HA, it will be necessary to deploy two DHCP servers per site, one being the primary server for its own site and the other being a standby server for the other site.

Multiple Servers

It is possible to configure more than two servers to perform HA, as shown in the picture below.

In this topology, server 1 and server 2 operate in the hot standby mode. There is a third server configured as a backup, which means that it receives all updates from the server responding to DHCP queries, but it doesn't react to the other servers' failures. The system must not be configured with more than one standby server because the standby servers would have no means to determine which of them should react to the failure. The backup server can be activated manually by the system administrator if required, e.g. when both server 1 and server 2 crash.

It must be noted that introducing additional backup servers will have a negative impact on the performance of the active server, because it will have to send synchronous lease updates to multiple locations.

Another variation of this setup has no standby server, only backup servers. Any of those servers can be activated manually in case of the active server's failure.

Finally, backup servers can be added to the load balancing configuration. In this case, both load balancing servers will send lease updates to the backup servers in addition to sending them to each other.

Central Lease Database

All use cases presented so far can be used in conjunction with any supported lease database backend. If the external database is in use, e.g. MySQL, it is possible to use database replication capabilities to provide high availability of the DHCP service.

In this scenario, both DHCP servers talk to the same database over a virtual IP address. All updates to the database are replicated to the slave database instance. The DHCP servers can be configured to operate in the load balancing, hot standby, or active-backup mode. When the master database crashes, the MySQL requests are directed to the slave database instance. The failure of any of the Kea servers is handled as usual, but the database synchronization is not performed by Kea.

When one of the servers (primary, standby, or load balancing) comes back online after a failure, it follows a similar procedure to enable the DHCP service as described in the previous sections, except that it doesn't synchronize the lease database with its peers. It transitions directly to the ready state (bypassing the syncing state).

Subnets and Pools Configuration for HA

The following subnet configuration includes two pools. One of the pools is only allowed for packets classified as belonging to "server1"; the other is only allowed for packets belonging to "server2". When a packet is received, the load balancing algorithm selects a candidate class; this class is assigned to the packet only if it matches the value specified in this-server-name. If the candidate class doesn't match the local server name, no class is assigned to the packet. A packet which is not assigned to either the "server1" or the "server2" class will not be processed by the server, i.e. the server will not be able to locate a pool for this packet because all specified pools require a class name.

"subnet4": [
    {
        "subnet": "192.0.2.0/24",
        "pools": [
            {
                "pool": "192.0.2.1 - 192.0.2.50",
                "client-class": "server1"
            },
            {
                "pool": "192.0.2.51 - 192.0.2.100",
                "client-class": "server2"
            }
        ]
    }
]

Similarly, it is possible to assign whole subnets, rather than pools, to different HA peers. In that case, the client classes should be specified in the subnet scope.

"subnet4": [
    {
        "subnet": "192.0.2.0/24",
        "client-class": "server1",
        "pools": [
            {
                "pool": "192.0.2.1 - 192.0.2.100",
            }
         ]
    },
    {
        "subnet": "10.1.1.0/24",
        "client-class": "server2",
        "pools": [
            {
                "pool": "10.1.1.1 - 10.1.1.100"
            }
        ]
    }
]

Finally, it is also possible to split shared networks between different HA peers.

The use of classes is only required in the load balancing setup. In the hot-standby or backup modes there is no need to select pools (or subnets) by class, because there is always exactly one server responding to DHCP traffic; that server can use all available pools.

Load Balancing

Load balancing is performed in the HA hooks library, in the pkt4_receive or pkt6_receive callout. Each server applies a hash to the MAC address, as described in RFC 3074, to determine which of the servers should service the client's request. When two peers are involved, we're going to assume that odd hash bucket values will be serviced by the first server ("server1" in the above example) and even hash bucket values will be serviced by the second server ("server2"). If the server finds that it is responsible for a packet, it adds a client class with its own server name to the packet. The server will assign addresses from the pools (or subnets) dedicated to this server.

If the server determines that the given packet should be processed by its peer, it must check whether the peer is online or offline. This information should be available in the hooks library; it is set according to the algorithm described in the "Failure Detection" section. If the peer is deemed to be offline, the server takes responsibility for processing the current packet. In this case, it uses the peer's name (e.g. "server2") as the client class name. That way, the server will process a packet normally belonging to its peer and will use the resources (pools, subnets) for which the peer is normally responsible.

When the server finds that the packet should be processed by a peer and the peer appears to be functional, the callout returns the DROP status code and the server subsequently drops the packet.
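
The following is a minimal sketch of such a pkt4_receive callout. The hash function is a simplified stand-in for the RFC 3074 algorithm, and this_server_name and partner_is_offline are placeholders for state which the real hooks library would derive from its configuration and from the failure detection logic.

#include <hooks/hooks.h>
#include <dhcp/pkt4.h>

#include <cstdint>
#include <string>
#include <vector>

using namespace isc::dhcp;
using namespace isc::hooks;

namespace {

// Placeholders: normally derived from this-server-name in the HA
// configuration and from the failure detection state.
const std::string this_server_name = "server1";
bool partner_is_offline = false;

// Simplified stand-in for the RFC 3074 hash over the client's
// hardware address (the real algorithm uses a 256-entry mixing table).
uint8_t loadBalancingHash(const std::vector<uint8_t>& hwaddr) {
    uint8_t hash = 0;
    for (uint8_t byte : hwaddr) {
        hash = static_cast<uint8_t>(hash + byte);
    }
    return (hash);
}

} // end of anonymous namespace

extern "C" {

int pkt4_receive(CalloutHandle& handle) {
    Pkt4Ptr query4;
    handle.getArgument("query4", query4);

    // Odd hash bucket values go to "server1", even ones to "server2",
    // as assumed in the text above.
    uint8_t hash = loadBalancingHash(query4->getHWAddr()->hwaddr_);
    std::string designated = (hash % 2) ? "server1" : "server2";

    if ((designated == this_server_name) || partner_is_offline) {
        // Either this server is responsible for the packet, or its
        // partner is down and it takes over the partner's class and
        // thus the partner's pools/subnets.
        query4->addClass(designated);
    } else {
        // The operational partner will respond: drop the packet.
        handle.setStatus(CalloutHandle::NEXT_STEP_DROP);
    }
    return (0);
}

} // end of extern "C"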

Communication Between Peers

Kea 1.3.0 exposes a RESTful control interface via the Control Agent, a separate process forwarding control commands to the respective Kea services. The communication between the Control Agent and the DHCP servers is performed using unix domain sockets. The RESTful interface seems to be a good fit for communication between failover peers, because it already implements certain commands required for HA, e.g. lease queries. It also allows for sending and receiving long chunks of data over TCP, which is required for the bulk update of leases between two peers during recovery after a failure. The following subsections describe the extensions required in the Kea code to facilitate communication between HA peers.

Extensions to RESTful API

In Kea 1.3.0 a new HTTP connection is opened for each control command. When the server detects the end of the command or a timeout occurs, the server closes the connection. This works fine for the typical case where the RESTful API is merely used for updating server configuration and commands are sent rather rarely. In the HA case, we're planning to synchronously notify the peers about lease allocations. Under heavy load, there might be hundreds or thousands of lease updates per second sent between the peers. In that case, establishing a new TCP connection for each lease update is not an option. Therefore, the RESTful API has to be extended to support HTTP 1.1 persistent connections, i.e. connections are persistent by default and are only closed when specifically signalled by a client or a server. The server may choose to close the connection after a certain period of client inactivity. In that case, the connection can be re-established when necessary, e.g. when a lease update has to be delivered to an HA peer or when a heartbeat message needs to be sent.

See w3.org for the details of persistent connections in HTTP 1.1. There are no plans to support HTTP 1.0 "keep-alive" connections at this point. The HA peers will always use HTTP 1.1 for communication.

Client Side Communication

The libkea-http library contains generic classes and functions managing server-side HTTP connections, parsing HTTP requests, generating responses, etc. The RESTful API implementation is created using this library.

The communication between the HA peers requires client-side HTTP support as well. Kea 1.3.0 does provide a kea-shell application for sending control commands over the RESTful API, but it is not going to be practical to use this application for lease updates. One reason is that we want to have control over the established connections. Another reason is to avoid receiving long responses (such as bulk lease queries) through the shell application. Thus, we plan to extend the libkea-http library with new classes and functions to establish client-side connections over the RESTful API, send HTTP 1.1 requests, and receive and parse responses. This library already contains useful C++ classes that may be extended for this purpose. The client-side connections can be implemented using the libkea-asiolink library.
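
To illustrate the intent, below is a minimal sketch of a client-side persistent connection written directly against Boost.Asio, rather than against the eventual libkea-http classes, whose interfaces are not designed yet. Error handling and full body parsing are omitted.

#include <boost/asio.hpp>

#include <iostream>
#include <string>

using boost::asio::ip::tcp;

// Send one JSON command over an already-connected socket and return the
// raw response headers. HTTP 1.1 connections are persistent by default,
// so the same socket can be reused for subsequent commands.
std::string sendCommand(tcp::socket& socket, const std::string& json) {
    std::string request =
        "POST / HTTP/1.1\r\n"
        "Host: server2.example.org:8080\r\n"
        "Content-Type: application/json\r\n"
        "Content-Length: " + std::to_string(json.size()) + "\r\n"
        "\r\n" + json;
    boost::asio::write(socket, boost::asio::buffer(request));

    // Real code would parse Content-Length and read the body as well.
    boost::asio::streambuf response;
    boost::asio::read_until(socket, response, "\r\n\r\n");
    return (std::string(boost::asio::buffers_begin(response.data()),
                        boost::asio::buffers_end(response.data())));
}

int main() {
    boost::asio::io_service io_service;
    tcp::resolver resolver(io_service);
    tcp::socket socket(io_service);
    boost::asio::connect(socket,
                         resolver.resolve({"server2.example.org", "8080"}));

    // Two commands sent over the same TCP connection.
    std::cout << sendCommand(socket, "{ \"command\": \"ha-heartbeat\" }");
    std::cout << sendCommand(socket, "{ \"command\": \"lease4-get-all\" }");
    return (0);
}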

The lease update requests will be generated in a hooks library (HA hooks library) attached to the DHCP server process. This means that each DHCP server will act as a "controlling client" to its peers. The peers will receive lease updates over their instances of the Control Agent (CA).

HA Communication Separation

The control channel requires the Kea Control Agent process to be running, as it exposes HTTP and forwards commands to the respective Kea servers. In a simple scenario, it should be possible to use the same instance of the Control Agent for HA and for administrative commands. However, when separation of HA and administration is required, it should be possible to use two instances of the Control Agent. In this case, both will need to be bound to the same unix domain socket. There is a potential problem with multiplexing commands from different CA instances over the same unix domain socket. However, the communication channel between the CA and the DHCP servers is fully synchronous, so it is not really possible for the DHCP server to receive partial commands; that solves the multiplexing problem. The only question is whether we're going to allow asynchronous communication between the CA and the DHCP servers in the future. If we do, we will need separate unix domain sockets for the different CA instances and corresponding updates to the configuration syntax.

New Hooks Points

There are many points in the code where lease creation, update, and deletion routines can be invoked, including the allocation engine, the server code, and other hooks libraries. Not only are lease updates triggered by synchronous packet processing but also by asynchronous events, such as lease reclamation. To minimize the risk that lease updates aren't generated when they are supposed to be, it seems best to create additional hook points triggered in the lease managers directly, rather than using existing hook points. Existing hook points may not be hit when the server behavior is modified by additional hooks libraries.

This design proposes the following new hook points to be implemented, which correspond to the lease manipulation routines implemented for each lease database backend:

  • lease4_add - triggered when LeaseMgr::addLease(Lease4Ptr) is called.
  • lease4_update - triggered when LeaseMgr::updateLease4(Lease4Ptr) is called.
  • lease6_add - triggered when LeaseMgr::addLease(Lease6Ptr) is called.
  • lease6_update - triggered when LeaseMgr::updateLease6(Lease6Ptr) is called.
  • lease_delete - triggered when LeaseMgr::deleteLease(IOAddress) is called.

The callouts for the specified hook points will only be called after a successful operation on the local lease database. In order to avoid extending each backend with invocations of the new hook points, a new class, HookedLeaseMgr, will be created to provide a generic implementation for these hook points. The actual lease backends will derive from this class and make invocations of the respective functions of this class where appropriate.
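
A possible shape for this class is sketched below; the addLeaseInternal() method and the way the hook index is obtained are illustrative assumptions rather than final interfaces.

#include <dhcpsrv/lease_mgr.h>
#include <hooks/hooks_manager.h>

using namespace isc::dhcp;
using namespace isc::hooks;

// Sketch of an intermediate class between LeaseMgr and the actual
// backends, invoking the new lease4_add hook point; the other lease
// manipulation functions would be wrapped in the same manner.
class HookedLeaseMgr : public LeaseMgr {
public:
    virtual bool addLease(const Lease4Ptr& lease) {
        // The backend-specific insert runs first; callouts are invoked
        // only after a successful operation on the local database.
        bool added = addLeaseInternal(lease);
        if (added && HooksManager::calloutsPresent(hook_index_lease4_add_)) {
            CalloutHandlePtr handle = HooksManager::createCalloutHandle();
            handle->setArgument("lease4", lease);
            HooksManager::callCallouts(hook_index_lease4_add_, *handle);
        }
        return (added);
    }

protected:
    // Hypothetical: implemented by the memfile, MySQL, etc. backends.
    virtual bool addLeaseInternal(const Lease4Ptr& lease) = 0;

private:
    // Index obtained when registering the "lease4_add" hook point.
    int hook_index_lease4_add_;
};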

The new hooks library, providing HA for Kea, will implement callouts for these hook points to send lease updates to the HA peers using RESTful API (see above).

Heartbeat

The Kea servers working in an HA configuration should periodically check if their peers are online. One indicator is that a peer responds to commands sent over the control channel. Under heavy DHCP load, the peers constantly communicate with each other to provide lease updates. However, there are times when the load may be significantly decreased, e.g. at night when many devices are down, or simply because long lease lifetimes cause long periods during which no client renews a lease.

To maintain information about a peer's availability, the Kea server should periodically send a "heartbeat signal" to the peer. If the communication fails (e.g. timeout or error), it is the first indication that the peer may be down. A response to the heartbeat command includes the server state. Differentiation between the server states is important when a server comes back online after a failure: its running peer determines whether it should continue serving leases for the other server, or transition back to the load balancing mode. The following states are defined and can be included in the response to the heartbeat command:

  • syncing - the server is synchronizing data after downtime. Its running peer should continue serving leases for this server as it is not yet ready to take over.
  • ready - the server has completed synchronization of the data after downtime and is ready to transition to the load balancing (normal) state.
  • load balancing - the server is serving its own clients and its peer should serve its clients.
  • partner down - the server has discovered that its peer is down and has taken over serving leases for the peer.

The handler for the heartbeat command will be implemented within the HA hooks library.

During normal operation, the heartbeat command will only be sent to the peer if no lease updates have been sent to the peer for longer than the configured interval. The timer counting down to the next heartbeat will be set after completion of any command sent to the peer; a lease update (or any other command) cancels this timer. If no command is sent before the timer expires, the heartbeat command is sent. If a lease update fails due to an IO error, or the response message indicates some other error, the server treats this as a failed heartbeat. It may be followed by a heartbeat message to retrieve the state of the peer.

The periodic heartbeat will be implemented in terms of the IntervalTimer class. The timer will be created and controlled within the HA hooks library. The events associated with the timer will be triggered by the global IOService instance associated with the server. Currently, this object is created and returned by the Dhcpv4Srv class and is not accessible by hooks libraries. We'll need to move the initialization of this object to a global scope where hooks can access it.
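
A minimal sketch of how such a timer could be managed is shown below, assuming the hooks library has access to the global IOService discussed above; the HeartbeatScheduler class and sendHeartbeat() function are illustrative names only.

#include <asiolink/interval_timer.h>
#include <asiolink/io_service.h>

using namespace isc::asiolink;

// Sketch: schedules the ha-heartbeat command whenever no other command
// has been sent to the peer for the configured heartbeat interval.
class HeartbeatScheduler {
public:
    HeartbeatScheduler(IOService& io_service, long interval_ms)
        : timer_(io_service), interval_ms_(interval_ms) {}

    // Called after any command (e.g. a lease update) completes, so
    // that the countdown to the next heartbeat starts over.
    void reschedule() {
        timer_.cancel();
        timer_.setup([this]() { sendHeartbeat(); }, interval_ms_,
                     IntervalTimer::REPEAT);
    }

private:
    void sendHeartbeat() {
        // Hypothetical: issue the ha-heartbeat command to the peer over
        // the persistent HTTP connection and record the returned state.
    }

    IntervalTimer timer_;
    long interval_ms_;
};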

Failure Detection

We have discussed two ways of detecting whether a peer is possibly down. One of them is through the control channel over which lease updates and the heartbeat command are sent. An IO error during this transmission is an indication that the peer may be down. However, this is usually not enough to transition to the "partner down" state, because an error in the transmission may be temporary, or it may be the result of the peer's Control Agent crashing, which doesn't necessarily mean that the DHCP server is not responding to queries. Thus, the failure detection should also involve some heuristics to detect whether the peer is responding to DHCP queries from clients. The running server instance should look into the secs field of DHCPv4 messages and/or the elapsed time option of DHCPv6 messages and compare those values against the configured maximums. These values indicate how long the clients have been trying to contact the DHCP servers; long times indicate that the server is not responding to those queries and is likely to be offline. Care should be taken, though, because malicious clients can purposely send high elapsed time values to cause the peers to transition to the partner down state while both of them are running. Thus, the servers must not transition to the partner down state until both the transmission over the control channel appears to be broken and the elapsed times are high.

UML Diagrams

This section contains state and sequence diagrams for the DHCP servers running in HA configuration.

Server Startup

The following state diagram describes the situation when the server is first started or comes back online after a failure, until it begins to serve leases.

When the server is started, it begins, as usual, with reading and parsing the configuration. If the memfile lease database backend is in use, it reads the leases stored in the lease file(s). If the server is not in a "failover relationship" with any server, it simply starts serving DHCP clients. This is no different from Kea 1.3.0 behavior.

If the HA configuration is available and the server is in a relationship with another server, it must first check if its peer is available. This follows the regular procedure of checking if the peer responds to commands (e.g. returns its status) and checking if the DHCP messages targeted at the peer are answered in a timely manner. If the peer appears offline as a result of this verification, the server transitions into the partner down state, in which case it enables the DHCP service and serves all clients (its own and the partner's).

If the peer is online (either in the ready or partner down state), the waking-up server will need to synchronize its leases with the peer. Therefore, it transitions to the syncing leases state. This state is returned to the partner if the partner asks about the server's state in the heartbeat message.

In order to avoid conflicts between the operating partner and the local database being synchronized, the server sends a command to the partner to make it stop the DHCP service. Then, the server sends a command to fetch all leases from the partner and merges those leases into the local database. In case of a conflict, the newer lease 'wins'. When the merge is finished, both servers should have consistent lease information, but neither of them is running the DHCP service.

The waking-up server sets its state to 'ready' to indicate to the partner that it is now ready to perform load balancing. It then sends a command to the partner to re-enable its DHCP service. The partner should now detect that the server is 'ready' and transition to the load balancing state. This means that from now on, the partner will only serve its own address pools/subnets. The waking-up server will keep sending heartbeat commands to the partner until it finds that the partner is in the load balancing state; this guarantees that the partner is no longer serving leases belonging to the waking-up server. Both servers are now in the load balancing state, each serving its own clients, and the wake-up procedure ends.
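
The whole wake-up sequence can be summarized with the commands defined later in this document; in this sketch, sendCommand(), mergeLeases() and setState() are trivial stand-ins for the client-side HTTP machinery and the local lease database.

#include <iostream>
#include <string>

// Trivial stand-ins for the real machinery described in this design.
std::string sendCommand(const std::string& json) {
    std::cout << "sending: " << json << std::endl;
    return ("{ \"result\": 0 }");
}
void mergeLeases(const std::string& /* response */) {}
void setState(const std::string& /* state */) {}

// Outline of the wake-up synchronization described above.
void synchronizeWithPartner() {
    // Stop the partner's DHCP service for a bounded period so that the
    // lease database cannot change while it is being copied; max-period
    // guards against this server dying in the middle of the sync.
    sendCommand("{ \"command\": \"dhcp-disable\","
                "  \"arguments\": { \"max-period\": 60 } }");

    // Fetch all leases from the partner and merge them into the local
    // database; in case of a conflict, the newer lease wins.
    mergeLeases(sendCommand("{ \"command\": \"lease4-get-all\" }"));

    // Signal readiness (via responses to ha-heartbeat) and re-enable
    // the partner's DHCP service.
    setState("ready");
    sendCommand("{ \"command\": \"dhcp-enable\" }");
}

int main() {
    synchronizeWithPartner();
    return (0);
}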

Server Operation

The following sequence diagram includes two clients and two servers. During normal operation (load balancing), client1 is served by server1 and client2 is served by server2. The diagram includes the failure scenario in which server1 also starts serving client2 after server2 stops responding to DHCP queries and to commands over the control channel.

The sequence starts with client1 performing a 4-way exchange with server1. Server1 synchronously notifies server2 about the new lease being handed out. Between the DHCP requests, both servers send heartbeat messages over the control channel (for simplicity, the diagram only shows the heartbeats sent by server1). Server1 also sends a synchronous lease update when client1 renews its lease.

Then, a heartbeat is sent but no response is returned from server2. This may indicate that server2 is offline. Server1 doesn't transition into the partner down state yet; it monitors DHCP messages received from client2, which would normally be served by server2. When the secs field value exceeds the maximum allowed delay in server response, server1 finally assumes that server2 is offline and starts responding to queries targeted at server2. The lease updates are not sent to server2 because this server is offline. While serving server2's clients, server1 continues to send heartbeat messages to server2.

Meanwhile, server2 wakes up and returns the syncing status to indicate that it is not quite ready yet. When the database is synchronized, server2 starts returning the ready status to indicate that it may now transition back to the load balancing state. At this point, server1 stops responding to queries directed to server2 but continues to respond to its own. Server2 will respond to subsequent queries from client2.

New Commands

This section describes the syntax of the new commands required by the HA.

Disable DHCP Service

The following command causes the receiving DHCP server to cease the DHCP service. The optional argument specifies the maximum period of time for the DHCP service to be disabled. If this value is specified, the server will re-enable the DHCP service after the specified period of time if the dhcp-enable command is not received within this time. The HA peers should specify this value to prevent the situation in which the waking-up server stops its partner and then dies before the service is re-enabled.

{
    "command": "dhcp-disable",
    "arguments": {
        "max-period": 20
    }
}

Enable DHCP Service

The following command enables the DHCP service for all subnets. In the future, we may extend this command to enable the DHCP service for selected subnets.

{
    "command": "dhcp-enable"
}

Get All Leases

The following two commands retrieve all leases, or all leases for the specified subnets. If the subnets argument is not specified, all leases are returned. This is useful when the lease database is synchronized with a peer after a failure.

{
    "command": "lease4-get-all",
    "arguments": {
        "subnets": [ 1, 2, 3, 4 ]
    }
}

For the DHCPv6 case:

{
    "command": "lease6-get-all",
    "arguments": {
        "subnets": [ 1, 2, 3, 4 ]
    }
}

Heartbeat Command

The heartbeat commands are sent between the HA peers to detect failures. In the fatal failure case (e.g. server crash) no response will be received from the peer and the heartbeat will be lost. If the peer is online (e.g. waking up or ready for service), the server status will be returned.

{
    "command": "ha-heartbeat"
}

and the response format:

{
    "result": 0,
    "text": "HA peer status returned.",
    "arguments": {
        "status": "syncing" | "ready" | "load-balancing" | "partner-down"
    }
}

Location of Functions Required for HA

The following section lists the new "functions" in Kea required for HA. Given that HA is going to be an optional feature, we should consider which parts of its implementation can be provided in hooks libraries and which require extensions to the Kea core code. The table below indicates, for each new function, whether it fits into a hooks library, the core, or both. It also makes an assessment regarding the preferred location of the two, along with a commentary explaining the choice.

Function                                | Hooks | Core | Preferred | Comments
----------------------------------------|-------|------|-----------|---------
Load balancing algorithm                | Yes   | Yes  | Hooks     | Prefer hooks because it provides more flexibility as to how the algorithm works.
HA configuration                        | Yes   | Yes  | Hooks     | Prefer hooks on the grounds that Kea configuration is already complex. We can use user-context for it.
Long lived connections                  | No    | Yes  | Core      | RESTful API is entirely implemented in the core.
Communication with peers                | Yes   | Yes  | Core      | This is rather generic functionality for one server to communicate with another, so probably better to implement it in the core.
Periodic heartbeat                      | Yes   | Yes  | Hooks     | Changes to the core are required to provide access to the common IOService instance. The timer itself should be managed in hooks.
Failure detection algorithms            | Yes   | Yes  | Hooks     | Implementing in hooks gives more flexibility.
Leases replication                      | Yes   | Yes  | Hooks     | Replication is an optional mechanism (when HA is in use) and is heavyweight, so better to keep it separate.
Command: cease/resume DHCP service      | No    | Yes  | Core      | Could be done in hooks but would require updates to the core anyway.
Commands for lease manipulation         | Yes   | No   | Hooks     | We already have a lease manipulation hooks library.
Command: heartbeat                      | Yes   | Yes  | Hooks     | HA specific, so better to put it into a hook.
Commands for triggering lease sync      | Yes   | Yes  | Hooks     | HA specific, so better to put it into a hook.
Using foreign server identifier by peer | Yes   | Yes  | Hooks     | HA specific, so better to put it into a hook.

Tasks Required for HA Implementation

  1. Create stub libdhcp_ha hooks library
  2. Implement HA configuration parsing in the hooks library
  3. Implement load balancing algorithm with assigning appropriate classes to the received packet
  4. Add support for persistent connections into Control Agent using HTTP 1.1
  5. Create HTTP client classes in libkea-http.
  6. ....