Designing Octopus HA in Azure

This section walks through the different options and considerations for the components required to set up Octopus High Availability in Microsoft Azure.


This guide assumes that all of the servers are hosted in Azure running Windows Server.

Some assembly required

Octopus High Availability is designed for mission-critical enterprise scenarios and depends heavily on infrastructure and Microsoft components. At a minimum:

  • You should be familiar with SQL Server failover clustering, Azure SQL, or have DBAs available to create and manage the database.
  • You should be familiar with SANs, Azure Files, or other approaches to sharing storage between servers.
  • You should be familiar with load balancing for applications.

IaaS vs PaaS: If you are planning on using IaaS exclusively in Azure and don’t intend to use its PaaS offerings (such as Azure SQL), then the On-Premises guide might be a better approach, as management of your virtual machines, Domain Controllers, SQL Database Servers, and load balancers will be your responsibility.

Compute

For a highly available Octopus configuration, you need a minimum of two Virtual Machines in Azure. There are several items to consider when provisioning your Octopus Virtual Machines in Azure.

Each organization has different requirements when it comes to choosing the right Virtual Machine to run Octopus on. Review the range of Azure Virtual Machine sizes and select the size most suitable for your requirements.

We recommend starting with either 2 cores / 4 GB of RAM or 4 cores / 8 GB of RAM and limiting the task cap to 20 for each node. In our experience, it is much better to have 4 smaller VMs, each with 4 cores / 8 GB of RAM than 2 large VMs, each with 8 cores / 16 GB of RAM. With 2 servers, if one of them were to go down, you’d lose 50% of your capacity. With 4 servers, if one of them were to go down, you’d lose 25% of your capacity. The difference in cost between the 4 smaller VMs and 2 large VMs is typically minimal.
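If you prefer to script this, the task cap can be set per node with the Octopus Server command line; a minimal sketch, assuming the default install location and instance name:

# Limit this node to 20 concurrent tasks (instance name is a placeholder)
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' node --instance="OctopusServer" --taskCap=20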

Due to how Octopus stores the paths to various BLOB data (task logs, artifacts, packages, imports, event exports, etc.), you cannot run a mix of Windows Servers and Octopus Linux containers connected to the same Octopus Deploy instance. A single instance should only be hosted using one method.

Database

Each Octopus Server node stores project, environment, and deployment-related data in a shared Microsoft SQL Server Database. Since this database is shared, it’s important that the database server is also highly available. To host the Octopus SQL database in Azure, there are two options to consider:

  • SQL Server running on Azure Virtual Machines (IaaS).
  • Azure SQL Database (PaaS).

How the database is made highly available is really up to you; to Octopus, it’s just a connection string. We are not experts on SQL Server high availability, so if you have an on-site DBA team, we recommend using them. There are many options for high availability with SQL Server, and Brent Ozar also has a fantastic set of resources on SQL Server Failover Clustering if you are looking for an introduction and practical guide to setting it up.
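Whichever option you choose, each node simply points at the same connection string; a sketch using the Octopus Server command line, with a placeholder Azure SQL connection string:

# Point this node at the shared, highly available database (all values are placeholders)
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' database --instance="OctopusServer" `
  --connectionString="Server=tcp:yourserver.database.windows.net,1433;Database=Octopus;User ID=octopus;Password=XXXXXXXX;Encrypt=True"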

Octopus High Availability works with SQL Server Failover Clustering and AlwaysOn Availability Groups, and also with Azure SQL Database. Octopus High Availability has not been tested with Log Shipping or Database Mirroring, and does not support SQL Server replication.

See also the SQL Server Database page, which explains the editions and versions of SQL Server that Octopus supports, and the requirements for how the database must be configured.

Shared storage

Octopus stores several files that are not suitable to store in the database. These include:

  • Packages used by the built-in repository. These packages can often be very large in size.
  • Artifacts collected during a deployment. Teams using Octopus sometimes use this feature to collect large log files and other files from machines during a deployment.
  • Task logs that store all of the log output from deployments and other tasks.
  • Imported zip files used by the Export/Import Projects feature.
  • Audit logs archived by the Archived audit logs feature.

As with the database, you’ll tell the Octopus Servers where to store them as a file path within your operating system. The shared storage needs to be accessible by all Octopus nodes. Each of these types of data can be stored in a different location.

Whichever way you provide the shared storage, there are a few considerations to keep in mind:

  • To Octopus, it needs to appear as either:
    • A mapped network drive e.g. X:\

    • A UNC path to a file share e.g. \\server\share

    • A symbolic link pointing at a local folder, e.g.

      C:\OctopusShared\Artifacts <<===>> \\server\share\Artifacts

  • The service account that Octopus runs needs full control over the directory.
  • Drives are mapped per-user, so you should map the drive using the same service account that Octopus is running under; a sketch follows this list.
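If you opt for a mapped network drive rather than symbolic links, a minimal sketch of mapping an Azure File Share (the storage account name, key, and share name are placeholders; run this as the Octopus service account):

# Store the storage account credentials, then map the share as a drive for this user
cmdkey /add:octostorage.file.core.windows.net /user:Azure\octostorage /pass:XXXXXXXXXXXXXX
net use X: \\octostorage.file.core.windows.net\octoha /persistent:yes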


Azure Files

If your Octopus Server is running in Microsoft Azure, there is only one solution unless you have a DFS Replica in Azure. That solution is Azure Files which presents a file share over SMB 3.0 that can be shared across all of your Octopus servers.

After you have created your File Share, the best option is to add the Azure File Share as symbolic links pointing at local folders, for example under C:\Octopus\, for the Artifacts, Packages, TaskLogs, Imports, and EventExports folders, which need to be available to all nodes.

Run the PowerShell below before installing Octopus, substituting the placeholders with your own values:

# Store the credentials used for the file share. You can get the storage account key from the Azure Portal.

cmdkey /add:octostorage.file.core.windows.net /user:Azure\octostorage /pass:XXXXXXXXXXXXXX

# Create the Octopus folder that will hold the symbolic links

New-Item -ItemType Directory -Path C:\Octopus

# Add the Symbolic Links. Do this before installing Octopus.

New-Item -Path C:\Octopus\TaskLogs -ItemType SymbolicLink -Value \\octostorage.file.core.windows.net\octoha\TaskLogs
New-Item -Path C:\Octopus\Artifacts -ItemType SymbolicLink -Value \\octostorage.file.core.windows.net\octoha\Artifacts
New-Item -Path C:\Octopus\Packages -ItemType SymbolicLink -Value \\octostorage.file.core.windows.net\octoha\Packages
New-Item -Path C:\Octopus\Imports -ItemType SymbolicLink -Value \\octostorage.file.core.windows.net\octoha\Imports
New-Item -Path C:\Octopus\EventExports -ItemType SymbolicLink -Value \\octostorage.file.core.windows.net\octoha\EventExports

Note that you need to create the folders within the Azure File Share before trying to create the symbolic links.
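One way to do that, assuming the cmdkey credentials above have already been stored, is directly over SMB:

# Create the folders on the Azure File Share before creating the symbolic links
$share = "\\octostorage.file.core.windows.net\octoha"
"TaskLogs", "Artifacts", "Packages", "Imports", "EventExports" | ForEach-Object {
    New-Item -ItemType Directory -Path (Join-Path $share $_) -Force
}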

Install Octopus and then run the following:

# Set the path 
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' path --artifacts "C:\Octopus\Artifacts"
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' path --taskLogs "C:\Octopus\TaskLogs"
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' path --nugetRepository "C:\Octopus\Packages"
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' path --imports "C:\Octopus\Imports"
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' path --eventExports "C:\Octopus\EventExports"
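To confirm the new paths took effect, the server’s settings can be echoed back; a quick check, assuming the default install location:

# Review the configured paths in the output
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' show-configuration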

Load balancing in Azure

To distribute HTTP load among Octopus Server nodes with a single point of access, we recommend using an HTTP load balancer.

Octopus Server provides a health check endpoint for your load balancer to ping: /api/octopusservernodes/ping.

Making a standard HTTP GET request to this URL on your Octopus Server nodes will return:

  • HTTP Status Code 200 OK as long as the Octopus Server node is online and not in drain mode.
  • HTTP Status Code 418 I'm a teapot when the Octopus Server node is online, but it is currently in drain mode preparing for maintenance.
  • Anything else indicates the Octopus Server node is offline, or something has gone wrong with this node.

The Octopus Server node configuration is also returned as JSON in the HTTP response body.
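As an illustration, this endpoint might be wired up as a health probe on an Azure Load Balancer using the Az PowerShell module; a sketch, with placeholder load balancer and resource group names:

# Probe each node's ping endpoint; only HTTP 200 counts as healthy,
# so nodes in drain mode (returning 418) drop out of rotation automatically
Get-AzLoadBalancer -Name "octopus-lb" -ResourceGroupName "octopus-rg" |
    Add-AzLoadBalancerProbeConfig -Name "octopus-ping" -Protocol Http -Port 80 `
        -RequestPath "/api/octopusservernodes/ping" -IntervalInSeconds 15 -ProbeCount 2 |
    Set-AzLoadBalancer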

We typically recommend using a round-robin (or similar) approach for sharing traffic between the nodes in your cluster, as the Octopus Web Portal is stateless.

All package uploads are sent as a POST to the REST API endpoint /api/[SPACE-ID]/packages/raw. Because the REST API will be behind a load balancer, you’ll need to configure the following on the load balancer:

  • Timeout: Octopus is designed to handle 1 GB+ packages, which can take longer than the typical HTTP/HTTPS timeout to upload (see the sketch after this list).
  • Request Size: Octopus does not have a size limit on the request body for packages. Some load balancers only allow 2 or 3 MB files by default.
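For example, if you front the nodes with Azure Application Gateway, the backend request timeout can be raised when defining the HTTP settings; a sketch using the Az module (the resulting settings object is then attached to your gateway; the name and timeout value are placeholders):

# Allow up to 10 minutes for large package uploads to complete
$httpSettings = New-AzApplicationGatewayBackendHttpSetting -Name "octopus-http" `
    -Port 80 -Protocol Http -CookieBasedAffinity Disabled -RequestTimeout 600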

Azure has a range of load balancers that will work with Octopus in a highly available configuration, including Azure Load Balancer and Azure Application Gateway.

Authentication providers

We recommend Active Directory for most installations. For this to work in Azure, you need a domain controller set up in Azure. Please see our authentication provider compatibility section for a full list of supported authentication providers.

If you’re hosting in Azure with Domain Controllers, the setup is similar to the one described in our on-premises guide.

Polling Tentacles with HA

Listening Tentacles require no special configuration for Octopus High Availability. Polling Tentacles, however, poll a server at regular intervals to check if there are any tasks waiting for the Tentacle to perform. In a High Availability scenario Polling Tentacles must poll all of the Octopus Server nodes in your configuration.

Connecting Polling Tentacles

Whilst a Tentacle could poll a load balancer in an Octopus High Availability cluster, there is a risk, depending on your load balancer configuration, that the Tentacle will not poll all servers in a timely manner.

We recommend two options when configuring Polling Tentacles to connect to your Octopus High Availability cluster:

  • Using a unique address, and the same listening port (10943 by default) for each node.
  • Using the same address and a unique port for each node.

These are discussed further in the next sections.

Using a unique address

In this scenario, no load balancer is required. Instead, each Octopus node is configured to listen on the same port (10943 by default) for inbound traffic, and each node must be reachable directly by your Polling Tentacles at its own unique address.

For each node in your HA cluster:

  • Ensure the communication port Octopus listens on (10943 by default) is open, including on any firewalls.
  • Register the node with the Poll Server command line option. Specify the unique address for the node, including the listening port. For example, in a three-node cluster:
    • Node1 would use address: Octo1.domain.com:10943
    • Node2 would use address: Octo2.domain.com:10943
    • Node3 would use address: Octo3.domain.com:10943

The important thing to remember is that each node should be using a unique address and the same port.
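For example, the Tentacle would be registered once per node (the addresses are examples; see Registering Polling Tentacles below for the full command details):

# Register against each node so the Tentacle polls all three directly (port 10943 is the default)
Tentacle.exe poll-server --server=https://Octo1.domain.com --apikey=API-XXXXXXXXXXXXXXXXXXXXXXXXXX
Tentacle.exe poll-server --server=https://Octo2.domain.com --apikey=API-XXXXXXXXXXXXXXXXXXXXXXXXXX
Tentacle.exe poll-server --server=https://Octo3.domain.com --apikey=API-XXXXXXXXXXXXXXXXXXXXXXXXXX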

Tip: A Polling Tentacle will connect to the Octopus REST API over port 80 or 443 when it is registering itself with the Octopus Server. After that, it will connect to the Octopus Server node over port 10943 (by default).

It’s important to ensure that any firewalls also allow port 80 or 443 for the initial Tentacle registration.
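On Windows, a minimal sketch of the corresponding inbound rules on each Octopus Server node:

# Allow polling traffic (10943 by default) and HTTPS for the initial Tentacle registration
New-NetFirewallRule -DisplayName "Octopus Polling Tentacles" -Direction Inbound -Protocol TCP -LocalPort 10943 -Action Allow
New-NetFirewallRule -DisplayName "Octopus Web Portal" -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow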

Using a unique port

In this scenario, a type of Network Address Translation (NAT) is leveraged by using the same address and unique ports, usually routed through a load balancer or other network device. Each Octopus node would be configured to listen on a different port (starting at 10943 by default) for inbound traffic.

The advantage of using unique ports is that the Polling Tentacle doesn’t need to know each node’s address, only the port. The address translation is handled by the load balancer. This allows each node to have a private IP address, with no public access from outside your network required.

Imagine a three-node HA cluster. For each one, we expose a different port to listen on using the Octopus.Server configure command (a sketch follows this list):

  • Node1 - Port 10943
  • Node2 - Port 10944
  • Node3 - Port 10945
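A minimal sketch of that per-node step, run on each node with its own port; the commsListenPort setting name, instance name, and install location are assumptions to adapt to your environment:

# Node2 in this example listens on 10944 for inbound polling connections
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' configure --instance="OctopusServer" --commsListenPort=10944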

Next on the load balancer, create Network Address Translation (NAT) rules and point them to each node in your HA Cluster:

  • Open port 10943 and route traffic to Node1 in your HA Cluster
  • Open port 10944 and route traffic to Node2 in your HA Cluster
  • Open port 10945 and route traffic to Node3 in your HA Cluster
  • Continue for any additional nodes in your HA cluster.

If you configured your nodes to use a different listening port, replace 10943-10945 with your port range.

The important thing to remember is that each node should be using the same address and a different port.

Registering Polling Tentacles

There are two options to add Octopus Servers to a Polling Tentacle: via the command-line, or by editing the Tentacle.config file directly.

Command line:

Configuring a Polling Tentacle via the command-line is the preferred option. The command is executed once per Octopus Server node; an example using the default instance can be seen below:

C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=http://my.Octopus.server --apikey=API-77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6

For more information on this command please refer to the Tentacle Poll Server command line options.

Tentacle.config:

Alternatively, you can edit Tentacle.config directly to add each Octopus Server (this is interpreted as a JSON array of servers). This method is not recommended, as the Tentacle service on each machine must be restarted before the new servers take effect.

<set key="Tentacle.Communication.TrustedOctopusServers">
[
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.160:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"},
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.161:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"},
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.162:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"}
]
</set>

Notice there is an address entry for each Octopus Server in the High Availability configuration.
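After editing the file, restart the Tentacle service on each machine; a sketch, assuming the default service name:

# Required for the Tentacle to pick up the newly trusted servers
Restart-Service -Name "OctopusDeploy Tentacle"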
