Designing Octopus HA On-Premises

This section walks through the different options and considerations for the components required when setting up Octopus High Availability for an on-premises install of Octopus Deploy.

Setting up Octopus: High Availability

The guide assumes that all of the servers are on-premises and are part of an Active Directory domain, as this is the most common configuration. Octopus High Availability can work without the servers being part of an AD domain, but you’ll need to vary the instructions accordingly.

Some assembly required

While a single-server Octopus installation is easy, Octopus High Availability is designed for mission critical enterprise scenarios and depends heavily on infrastructure and Microsoft components. At a minimum:

  • You should be familiar with SQL Server failover clustering, or have DBAs available to create and manage the database.
  • You should be familiar with SANs or other approaches to sharing storage between servers.
  • You should be familiar with load balancing for applications.

Compute

When running Octopus Deploy on Windows Server, the underlying OS can be installed on a bare-metal machine or on a virtual machine (VM) hosted by any popular type-1 hypervisor. Type-2 hypervisors can work for demos and POCs, but because they are typically installed on desktop operating systems, they aren’t recommended.

We recommend starting with either 2 cores / 4 GB of RAM or 4 cores / 8 GB of RAM and limiting the task cap to 20 for each node. In our experience, it is much better to have four smaller VMs, each with 4 cores / 8 GB of RAM, than two large VMs, each with 8 cores / 16 GB of RAM. With two servers, if one of them were to go down, you’d lose 50% of your capacity. With four servers, if one of them were to go down, you’d lose only 25% of your capacity. The difference in cost between the four smaller VMs and two large VMs is typically minimal.
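
As a rough sketch of how the task cap might be set on each node (assuming the default instance name "OctopusServer" and the default install directory; check the Octopus.Server node command options for your version):

C:\Program Files\Octopus Deploy\Octopus>Octopus.Server.exe node --instance="OctopusServer" --taskCap=20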

Due to how Octopus stores the paths to various BLOB data (task logs, artifacts, packages, imports, event exports, etc.), you cannot mix Windows Servers and Octopus Linux containers connected to the same Octopus Deploy instance. A single instance should only be hosted using one method.

Database

Each Octopus Server node stores project, environment and deployment-related data in a shared Microsoft SQL Server Database. Since this database is shared, it’s important that the database server is also highly available.

How the database is made highly available is really up to you; to Octopus, it’s just a connection string. We are not experts on SQL Server high availability, so if you have an on-site DBA team, we recommend using them. There are many options for high availability with SQL Server, and Brent Ozar also has a fantastic set of resources on SQL Server Failover Clustering if you are looking for an introduction and practical guide to setting it up.

Octopus High Availability works with SQL Server high availability options such as Failover Clustering and AlwaysOn Availability Groups. It has not been tested with Log Shipping or Database Mirroring, and does not support SQL Server replication.

See also the SQL Server Database page, which explains the editions and versions of SQL Server that Octopus supports and the requirements for how the database must be configured.

Since each of the Octopus Server nodes will need access to the database, we recommend creating a special user account in Active Directory with db_owner permission on the Octopus database and using that account as the service account when configuring Octopus.
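
For example, each node might be pointed at the shared database using that service account and integrated security, along the lines of the following sketch (the SQL Server name is a placeholder; check the Octopus.Server database command options for your version):

C:\Program Files\Octopus Deploy\Octopus>Octopus.Server.exe database --instance="OctopusServer" --connectionString="Data Source=YourSqlServer;Initial Catalog=Octopus;Integrated Security=SSPI"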

Shared storage

Octopus stores several kinds of files that are not suitable to store in the database. These include:

  • Packages used by the built-in repository. These packages can often be very large.
  • Artifacts collected during a deployment. Teams using Octopus sometimes use this feature to collect large log files and other files from machines during a deployment.
  • Task logs that store all of the log output from deployments and other tasks.
  • Imported zip files used by the Export/Import Projects feature.
  • Audit logs archived by the Archived audit logs feature.

As with the database, you’ll tell the Octopus Server nodes where to store these files as a file path within your operating system. The shared storage needs to be accessible by all Octopus nodes, and each type of data can be stored in a different location.

Whichever way you provide the shared storage, there are a few considerations to keep in mind:

  • To Octopus, it needs to appear as either:
    • A mapped network drive e.g. X:\

    • A UNC path to a file share e.g. \\server\share

    • A symbolic link pointing at a local folder (see the sketch after this list), e.g.

      C:\OctopusShared\Artifacts <<===>> \\server\share\Artifacts

  • The service account that Octopus runs as needs full control over the directory.
  • Drives are mapped per-user, so you should map the drive using the same service account that Octopus is running under.
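
For the symbolic link option, a directory link could be created on each node with something like the following (the paths are placeholders; run from an elevated command prompt):

C:\>mklink /D C:\OctopusShared\Artifacts \\server\share\Artifacts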

The simplest way to provide shared storage, assuming the Octopus Server nodes are part of the same Active Directory domain, is by creating a file share that each of the Octopus Server nodes can access. Of course, this assumes that the underlying storage is reliable, such as a RAID array.
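
With a file share in place, each node could then be pointed at the UNC paths with something like the following sketch (the share paths are placeholders, and option names such as --nugetRepository may differ between versions; check the Octopus.Server path command options):

C:\Program Files\Octopus Deploy\Octopus>Octopus.Server.exe path --instance="OctopusServer" --artifacts=\\server\share\Artifacts --taskLogs=\\server\share\TaskLogs --nugetRepository=\\server\share\Packages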

An alternative is Microsoft DFS. If using Microsoft DFS for the shared storage, it must be configured specifically for use with Octopus Deploy.

Load balancer

When you configured the first Octopus Server node, as well as each of the subsequent nodes, you configured the HTTP endpoint that the Octopus web interface is available on. The final step is to configure a load balancer so that user traffic is directed between each of the Octopus Server nodes.

Octopus can work with any load balancer technology, including hardware and software load balancers.

Octopus Server provides a health check endpoint for your load balancer to ping: /api/octopusservernodes/ping.

Making a standard HTTP GET request to this URL on your Octopus Server nodes will return:

  • HTTP Status Code 200 OK as long as the Octopus Server node is online and not in drain mode.
  • HTTP Status Code 418 I'm a teapot when the Octopus Server node is online, but it is currently in drain mode preparing for maintenance.
  • Anything else indicates the Octopus Server node is offline, or something has gone wrong with this node.

The Octopus Server node configuration is also returned as JSON in the HTTP response body.
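
For example, a quick manual check against one of the nodes, or a load balancer health probe, might look like the following (the hostname is a placeholder):

curl -i http://octopus-node1.domain.com/api/octopusservernodes/ping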

We typically recommend using a round-robin (or similar) approach for sharing traffic between the nodes in your cluster, as the Octopus Web Portal is stateless.

All package uploads are sent as a POST to the REST API endpoint /api/[SPACE-ID]/packages/raw. Because the REST API will be behind a load balancer, you’ll need to configure the following on the load balancer:

  • Timeout: Octopus is designed to handle packages of 1 GB or more, which can take longer to upload than a typical HTTP/HTTPS timeout allows.
  • Request Size: Octopus does not have a size limit on the request body for packages. Some load balancers only allow 2 or 3 MB files by default.
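
For illustration, a package upload through the load balancer is a multipart POST to that endpoint, along the lines of the following sketch (the hostname, space ID, API key, and form field name are placeholders):

curl -X POST http://octopus.domain.com/api/Spaces-1/packages/raw -H "X-Octopus-ApiKey: API-XXXXXXXXXXXXXXXX" -F "data=@MyApp.1.0.0.nupkg"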

Software load balancers

If you don’t have a hardware load balancer available, an easy option is the Application Request Routing module for IIS. You can also use Apache or NGINX as a load-balancing reverse proxy.

For more information on setting up a reverse proxy with Octopus Deploy, see our reverse proxy guides.

Polling Tentacles with HA

Listening Tentacles require no special configuration for Octopus High Availability. Polling Tentacles, however, poll a server at regular intervals to check if there are any tasks waiting for the Tentacle to perform. In a High Availability scenario, Polling Tentacles must poll all of the Octopus Server nodes in your configuration.

Connecting Polling Tentacles

Whilst a Tentacle could poll a load balancer in an Octopus High Availability cluster, there is a risk, depending on your load balancer configuration, that the Tentacle will not poll all servers in a timely manner.

We recommend two options when configuring Polling Tentacles to connect to your Octopus High Availability cluster:

  • Using a unique address, and the same listening port (10943 by default) for each node.
  • Using the same address and a unique port for each node.

These are discussed further in the next sections.

Using a unique address

In this scenario, no load balancer is required. Instead, each Octopus node would be configured to listen on the same port (10943 by default) for inbound traffic. In addition, each node would be able to be reached directly by your Polling Tentacle on a unique address for the node.

For each node in your HA cluster:

  • Ensure the communication port Octopus listens on (10943 by default) is open, including on any firewalls.
  • Register the node with the Poll Server command line option. Specify the unique address for the node, including the listening port. For example, in a three-node cluster:
    • Node1 would use address: Octo1.domain.com:10943
    • Node2 would use address: Octo2.domain.com:10943
    • Node3 would use address: Octo3.domain.com:10943

The important thing to remember is that each node should be using a unique address and the same port.
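
For example, using the poll-server command shown later on this page, a Tentacle in the three-node cluster above would be registered once per node address, along the lines of the following sketch (the API key is a placeholder, and any option needed to specify a non-default communication port is omitted; see the Tentacle Poll Server command line options):

C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=https://Octo1.domain.com --apikey=API-YOURKEY
C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=https://Octo2.domain.com --apikey=API-YOURKEY
C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=https://Octo3.domain.com --apikey=API-YOURKEY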

Tip: A Polling Tentacle will connect to the Octopus REST API over port 80 or 443 when it is registering itself with the Octopus Server. After that, it will connect to the Octopus Server node over port 10943 (by default).

It’s important to ensure that any firewalls also allow port 80 or 443 for the initial Tentacle registration.

Using a unique port

In this scenario, a form of Network Address Translation (NAT) is leveraged by using the same address and a unique port for each node, usually routed through a load balancer or other network device. Each Octopus node would be configured to listen on a different port (starting at 10943 by default) for inbound traffic.

The advantage of using unique ports is that the Polling Tentacle doesn’t need to know each node’s address, only the port. The address translation is handled by the load balancer. This allows each node to have a private IP address, with no public access from outside your network required.

Imagine a three-node HA cluster. For each one, we expose a different port to listen on using the Octopus.Server configure command:

  • Node1 - Port 10943
  • Node2 - Port 10944
  • Node3 - Port 10945
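
On each node, the listening port could be set with something like the following sketch (shown here for Node2, assuming the default instance name; confirm the option name against the Octopus.Server configure documentation for your version):

C:\Program Files\Octopus Deploy\Octopus>Octopus.Server.exe configure --instance="OctopusServer" --commsListenPort=10944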

Next, on the load balancer, create Network Address Translation (NAT) rules and point them to each node in your HA Cluster:

  • Open port 10943 and route traffic to Node1 in your HA Cluster
  • Open port 10944 and route traffic to Node2 in your HA Cluster
  • Open port 10945 and route traffic to Node3 in your HA Cluster
  • Continue for any additional nodes in your HA cluster.

If you configured your nodes to use a different listening port, replace 10943-10945 with your port range.

The important thing to remember is that each node should be using the same address and a different port.

Registering Polling Tentacles

There are two options to add Octopus Servers to a Polling Tentacle: via the command line, or by editing the Tentacle.config file directly.

Command line:

Configuring a Polling Tentacle via the command line is the preferred option; the command is executed once per server. An example command using the default instance can be seen below:

C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=http://my.Octopus.server --apikey=API-77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6

For more information on this command, please refer to the Tentacle Poll Server command line options.

Tentacle.config:

Alternatively, you can edit Tentacle.config directly to add each Octopus Server (the value is interpreted as a JSON array of servers). This method is not recommended, as the Tentacle service on each machine will need to be restarted to accept incoming connections configured this way.

<set key="Tentacle.Communication.TrustedOctopusServers">
[
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.160:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"},
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.161:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"},
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.162:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"}
]
</set>

Notice there is an address entry for each Octopus Server in the High Availability configuration.
