Designing Octopus HA On-Premises

This section walks through the different options and considerations for the components required when setting up Octopus High Availability for an on-premises install of Octopus Deploy.

If you are setting Octopus up in a cloud provider such as Azure or AWS, please see the following guide:

Setting up Octopus: High availability

For the sake of simplicity, the guide assumes that all of the servers are on-premises and are part of an Active Directory domain, as this is the most common configuration. Octopus High Availability can work without the servers being part of an AD domain, but you'll need to vary the instructions accordingly.

Some assembly required
While a single-server Octopus installation is easy, Octopus High Availability is designed for mission-critical enterprise scenarios and depends heavily on infrastructure and Windows components. At a minimum:

  • You should be familiar with SQL Server failover clustering, or have DBAs available to create and manage the database.
  • You should be familiar with SANs or other approaches to sharing storage between servers.
  • You should be familiar with load balancing for applications.

Database

Each Octopus Server node stores project, environment and deployment-related data in a shared Microsoft SQL Server Database. Since this database is shared, it's important that the database server is also highly available.

From the Octopus perspective, how the database is made highly available is really up to you; to Octopus, it's just a connection string. We are not experts on SQL Server high availability, so if you have an on-site DBA team, we recommend using them. There are many options for high availability with SQL Server, and Brent Ozar also has a fantastic set of resources on SQL Server Failover Clustering if you are looking for an introduction and practical guide to setting it up.

Octopus High Availability works with SQL Server Failover Clustering and AlwaysOn Availability Groups. It has not been tested with Log Shipping or Database Mirroring, and does not support SQL Server replication.

See also the SQL Server Database page, which explains the editions and versions of SQL Server that Octopus supports and the requirements for how the database must be configured.

Since each of the Octopus Server nodes will need access to the database, we recommend creating a special user account in Active Directory with db_owner permission on the Octopus database and using that account as the service account when configuring Octopus.
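
For illustration, pointing a node at the shared database might look like the sketch below, using Integrated Security so the connection runs as that Active Directory service account. The instance name, SQL Server hostname, and database name are placeholders, and the available options can vary between Octopus versions, so check Octopus.Server.exe database --help before relying on it:

Octopus.Server.exe database --instance="OctopusServer" --connectionString="Data Source=octosql.domain.com;Initial Catalog=Octopus;Integrated Security=SSPI;" --create

Subsequent nodes would use the same connection string but omit --create, since the database already exists.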

Shared storage

Octopus stores a number of files that are not suitable to store in the database. These include:

  • Packages used by the built-in repository. These packages can often be very large in size.
  • Artifacts collected during a deployment. Teams using Octopus sometimes use this feature to collect large log files and other files from machines during a deployment.
  • Task logs, which are text files that store all of the log output from deployments and other tasks.

As with the database, from the Octopus perspective you simply tell the Octopus Servers where to store them as a file path within your operating system. Octopus doesn't care what technology you use to present the shared storage; it could be a mapped network drive or a UNC path to a file share. Each of these three types of data can be stored in a different place.
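
For example, with a UNC file share the storage locations might be set on each node using the Octopus.Server path command. This is only a sketch; the share name and folder layout are placeholders, and you should confirm the option names with Octopus.Server.exe path --help on your version:

Octopus.Server.exe path --artifacts="\\OctoShared\OctopusData\Artifacts" --taskLogs="\\OctoShared\OctopusData\TaskLogs" --nugetRepository="\\OctoShared\OctopusData\Packages"

Run the same command on every node so that all nodes resolve to the same locations.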

Whichever way you provide the shared storage, there are a few considerations to keep in mind:

  • To Octopus, it needs to appear as a mapped network drive (e.g., D:\) or a UNC path to a file share (e.g., \\server\path).
  • The service account that Octopus runs as needs full control over the directory.
  • Drives are mapped per-user, so you should map the drive using the same service account that Octopus is running under.

The simplest way to provide shared storage, assuming the Octopus Server nodes are part of the same Active Directory domain, is to create a file share that each of the Octopus Server nodes can access. Of course, this assumes that the underlying storage is reliable, for example hosted on a RAID array.
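
As a rough sketch of the considerations above, you might grant the service account full control over the shared folder and, if you prefer a drive letter, map the drive while running as that same account. The share path and the DOMAIN\OctopusService account are placeholders:

rem Grant the Octopus service account full control over the shared folder
icacls "\\OctoShared\OctopusData" /grant "DOMAIN\OctopusService:(OI)(CI)F"

rem Map a drive letter; run this as the service account, because drives are mapped per-user
net use X: \\OctoShared\OctopusData /persistent:yes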

A better alternative is Microsoft DFS or a SAN.

Load balancer

When you configured the first Octopus Server node, as well as each of the subsequent nodes, you set the HTTP endpoint that the Octopus web interface is available on. The final step is to configure a load balancer to direct user traffic between the Octopus Server nodes.

Octopus can work with any load balancer technology, including hardware and software load balancers.

Load balancer session persistence

We typically recommend using a round-robin (or similar) approach for sharing traffic between the nodes in your cluster, as the Octopus Web Portal is stateless.

However, each node in the cluster keeps a local cache of data, including user permissions. There is a known issue that occurs when a user's permissions change: the local cache is only invalidated on the node where the change was made. This will be resolved in a future version of Octopus.

To work around this issue in the meantime, you can configure your load balancer with session persistence. This will ensure user sessions are routed to the same node.

Software load balancers

If you don't have a hardware load balancer available, an easy option is the Application Request Routing module for IIS. You can also use Apache or NGINX as a reverse load-balancing proxy.
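
If you go the NGINX route, a minimal round-robin reverse proxy for a three-node cluster might look like the sketch below. The hostnames are placeholders, and the ip_hash directive is only needed if you want the session persistence workaround described above:

upstream octopus_nodes {
    ip_hash;                      # pin each client to one node; remove for plain round-robin
    server octo1.domain.com:80;
    server octo2.domain.com:80;
    server octo3.domain.com:80;
}

server {
    listen 80;
    server_name octopus.domain.com;

    location / {
        proxy_pass http://octopus_nodes;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}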

For more information on setting up a reverse proxy with Octopus Deploy, see our reverse proxy guides.

Polling Tentacles with HA

Listening Tentacles require no special configuration for Octopus High Availability. Polling Tentacles, however, poll a server at regular intervals to check if there are any tasks waiting for the Tentacle to perform. In a High Availability scenario, Polling Tentacles must poll all of the Octopus Server nodes in your configuration.

Connecting Polling Tentacles

Whilst a Tentacle could poll a load balancer in an Octopus High Availability cluster, there is a risk, depending on your load balancer configuration, that the Tentacle will not poll all servers in a timely manner.

We recommend two options when configuring Polling Tentacles to connect to your Octopus High Availability cluster:

  • Using a unique address, and the same listening port (10943 by default) for each node.
  • Using the same address and a unique port for each node.

These are discussed further in the next sections.

Using a unique address

In this scenario, no load balancer is required. Instead, each Octopus node would be configured to listen on the same port (10943 by default) for inbound traffic, and each node must be reachable directly by your Polling Tentacles on its own unique address.

For each node in your HA cluster:

  • Ensure the communication port Octopus listens on (10943 by default) is open, including on any firewalls between your Tentacles and the node.
  • Register the node with the Poll Server command line option. Specify the unique address for the node, including the listening port. For example, in a three-node cluster:
    • Node1 would use address: Octo1.domain.com:10943
    • Node2 would use address: Octo2.domain.com:10943
    • Node3 would use address: Octo3.domain.com:10943

The important thing to remember is that each node should be using a unique address and the same port.
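
Registering a Tentacle against each node in the example above would then look something like this, run once per node. The hostnames and API key are placeholders, and the server communication port is left at its default of 10943:

C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=https://Octo1.domain.com --apikey=API-XXXXXXXXXXXXXXXXXXXXXXXXXX
C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=https://Octo2.domain.com --apikey=API-XXXXXXXXXXXXXXXXXXXXXXXXXX
C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=https://Octo3.domain.com --apikey=API-XXXXXXXXXXXXXXXXXXXXXXXXXX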

Tip:
A Polling Tentacle will connect to the Octopus REST API over port 80 or 443 when it is registering itself with the Octopus Server. After that, it will connect to the Octopus Server node over port 10943 (by default).

It's important to ensure that any firewalls also allow port 80 or 443 for the initial Tentacle registration.

Using a unique port

In this scenario, a type of Network Address Translation (NAT) is leveraged by using the same address and unique ports, usually routed through a load balancer or other network device. Each Octopus node would be configured to listen on a different port (starting at 10943 by default) for inbound traffic.

The advantage of using unique ports is that the Polling Tentacle doesn't need to know each node's address, only the port. The address translation is handled by the load balancer. This allows each node to have a private IP address, with no public access from outside your network required.

Imagine a three-node HA cluster. For each one, we expose a different port to listen on using the Octopus.Server configure command:

  • Node1 - Port 10943
  • Node2 - Port 10944
  • Node3 - Port 10945
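
As a sketch, the listening port could be set on each node with the configure command. The instance name is a placeholder, and the option name shown here (--commsListenPort) is an assumption, so confirm it against Octopus.Server.exe configure --help for your version:

rem Node1 keeps the default port of 10943

rem On Node2:
Octopus.Server.exe configure --instance="OctopusServer" --commsListenPort=10944

rem On Node3:
Octopus.Server.exe configure --instance="OctopusServer" --commsListenPort=10945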

Next, on the load balancer, create Network Address Translation (NAT) rules that point to each node in your HA cluster:

  • Open port 10943 and route traffic to Node1 in your HA Cluster
  • Open port 10944 and route traffic to Node2 in your HA Cluster
  • Open port 10945 and route traffic to Node3 in your HA Cluster
  • Continue for any additional nodes in your HA cluster.

If you configured your nodes to use a different listening port, replace 10943-10945 with your port range.

The important thing to remember is that each node should be using the same address and a different port.
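
On the Tentacle side, registering against the load balancer address once per node might then look like the sketch below. The hostname and API key are placeholders, and the --server-comms-port option name is our assumption here, so check Tentacle.exe poll-server --help for your version:

Tentacle poll-server --server=https://octopus.domain.com --apikey=API-XXXXXXXXXXXXXXXXXXXXXXXXXX --server-comms-port=10943
Tentacle poll-server --server=https://octopus.domain.com --apikey=API-XXXXXXXXXXXXXXXXXXXXXXXXXX --server-comms-port=10944
Tentacle poll-server --server=https://octopus.domain.com --apikey=API-XXXXXXXXXXXXXXXXXXXXXXXXXX --server-comms-port=10945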

Registering Polling Tentacles

There are two options to add Octopus Servers to a Polling Tentacle: via the command line, or by editing the Tentacle.config file directly.

Command line:

Configuring a Polling Tentacle via the command line is the preferred option, with the command executed once per Octopus Server node; an example command using the default instance can be seen below:

C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=http://my.Octopus.server --apikey=API-77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6

For more information on this command, please refer to the Tentacle Poll Server command line options.

Tentacle.config:

Alternatively, you can edit Tentacle.config directly to add each Octopus Server (the value is interpreted as a JSON array of servers). This method is not recommended, as the Tentacle service will need to be restarted before the changes take effect.

<set key="Tentacle.Communication.TrustedOctopusServers">
[
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.160:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"},
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.161:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"},
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.162:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"}
]
</set>

Notice there is an address entry for each Octopus Server in the High Availability configuration.
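
If you do edit the file directly, restart the Tentacle service afterwards so the new trust entries are picked up, for example with the service command. The instance name Tentacle is the default and may differ on your machines:

C:\Program Files\Octopus Deploy\Tentacle>Tentacle service --instance="Tentacle" --stop --start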
