Designing Octopus HA in GCP

This section walks through the different options and considerations for the components required to set up Octopus High Availability in Google Cloud Platform (GCP).

Setting up Octopus: High Availability

This guide assumes that all of the servers used for your Octopus High Availability instance are hosted in GCP and are running Windows Server.

Some assembly required

Octopus High Availability is designed for mission-critical enterprise scenarios and depends heavily on infrastructure and Microsoft components. At a minimum:

  • You should be familiar with SQL Server failover clustering, Cloud SQL, or have DBAs available to create and manage the database.
  • You should be familiar with SANs, Google Filestore, or other approaches to sharing storage between servers.
  • You should be familiar with load balancing for applications.

IaaS vs PaaS: If you are planning on using IaaS exclusively in GCP and don’t intend to use their PaaS offerings (such as Cloud SQL), then the On-Premises guide might be a better approach for you as management of your virtual machines, Domain Controllers, SQL Database Servers, and load balancers will be your responsibility.

Compute

For a highly available Octopus configuration, you need a minimum of two Virtual Machines running Windows Server (ideally 2016+) in GCP.

Each organization has different requirements when it comes to choosing the right Virtual Machine to run Octopus on. Review the range of GCP Compute instance machine families and select the type most suitable for your requirements. If you’re still unsure, the General purpose machine family is a good option to start with, as these machines are designed for common workloads and optimized for cost and flexibility.

We recommend starting with either 2 cores / 4 GB of RAM or 4 cores / 8 GB of RAM and limiting the task cap to 20 for each node. In our experience, it is much better to have 4 smaller VMs, each with 4 cores / 8 GB of RAM than 2 large VMs, each with 8 cores / 16 GB of RAM. With 2 servers, if one of them were to go down, you’d lose 50% of your capacity. With 4 servers, if one of them were to go down, you’d lose 25% of your capacity. The difference in cost between the 4 smaller VMs and 2 large VMs is typically minimal.

Google’s Compute Engine also provides machine type recommendations. These are automatically generated based on system metrics over time from your virtual machines. Resizing your instances can allow you to use resources more efficiently.

Due to how Octopus stores the paths to various BLOB data (task logs, artifacts, packages, imports, event exports, etc.), you cannot run a mix of Windows Servers and Octopus Linux containers connected to the same Octopus Deploy instance. A single instance should only be hosted using one method.

Database

Each Octopus Server node stores project, environment, and deployment-related data in a shared Microsoft SQL Server Database. Since this database is shared, it’s important that the database server is also highly available. To host the Octopus SQL database in GCP, there are two options to consider: running SQL Server yourself on Compute Engine virtual machines, or using the managed Cloud SQL for SQL Server offering.

How the database is made highly available is really up to you; to Octopus, it’s just a connection string. We are not experts on SQL Server high availability, so if you have an on-site DBA team, we recommend using them. There are many options for high availability with SQL Server, and Brent Ozar also has a fantastic set of resources on SQL Server Failover Clustering if you are looking for an introduction and practical guide to setting it up.
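
Whichever option you choose, each node is simply pointed at the same database. As a minimal sketch (the server address, database name, and credentials below are placeholders, and this assumes the database command’s --connectionString option), the connection string can be set when configuring an instance:

# Point this Octopus Server instance at the shared SQL database.
# Replace the server, database, and credentials with your own values.
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' database `
  --instance "OctopusServer" `
  --connectionString "Server=sql.example.internal;Database=Octopus;User Id=octopus;Password=XXXXXXXXXXXX"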

Octopus High Availability works with SQL Server Failover Clusters and AlwaysOn Availability Groups, as well as managed offerings such as Cloud SQL for SQL Server.

Octopus High Availability has not been tested with Log Shipping or Database Mirroring, and does not support SQL Server replication.

See also the SQL Server Database page, which covers the editions and versions of SQL Server that Octopus supports, along with the requirements for how the database must be configured.

Shared storage

Octopus stores several files that are not suitable to store in the database. These include:

  • Packages used by the built-in repository. These packages can often be very large in size.
  • Artifacts collected during a deployment. Teams using Octopus sometimes use this feature to collect large log files and other files from machines during a deployment.
  • Task logs are text files that store all of the log output from deployments and other tasks.
  • Imported zip files used by the Export/Import Projects feature.
  • Archived audit logs created by the audit log archiving feature.

As with the database, you’ll tell the Octopus Server nodes where to store these files as a file path within your operating system. The shared storage needs to be accessible by all Octopus nodes, and each type of data can be stored in a different location.

Whichever way you provide the shared storage, there are a few considerations to keep in mind:

  • To Octopus, it needs to appear as either:
    • A mapped network drive e.g. X:\

    • A UNC path to a file share e.g. \\server\share

    • A symbolic link pointing at a local folder, e.g.

      C:\OctopusShared\Artifacts <<===>> \\server\share\Artifacts

  • The service account that Octopus runs as needs full control over the directory (see the example after this list).
  • Drives are mapped per-user, so you should map the drive using the same service account that Octopus is running under.
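
For example, granting the service account full control over a local folder (or the root of the mapped share) might look like the following; this is a sketch, and DOMAIN\svc-octopus and C:\OctopusShared are placeholder values:

# Grant the Octopus service account full control, inherited by files and subfolders.
icacls "C:\OctopusShared" /grant "DOMAIN\svc-octopus:(OI)(CI)F" /T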

Google Cloud Shared Storage Options

Octopus running on Windows works best with shared storage accessed via the Server Message Block (SMB) protocol.

Google Cloud offers its own managed file storage option known as Filestore; however, it’s only accessible via the Network File System (NFS) protocol (v3).

For SMB storage, Google has partnered with NetApp to offer NetApp Cloud Volumes. This is a fully managed, cloud-based solution that runs on a Compute Engine virtual machine and uses a combination of persistent disks (PDs) and Cloud Storage buckets to store your NAS data.

Typically, NFS shares are better suited to Linux or macOS clients, although it is possible to access NFS shares on Windows Servers. NFS shares on Windows are mounted per-user and are not persisted when the server reboots. It’s for these reasons that Octopus recommends using SMB storage over NFS when running on Windows Servers.

You can see the different file server options Google Cloud has in their File Storage on Compute Engine overview.

NetApp Cloud Volumes

To successfully create a NetApp Cloud SMB Volume in Google Cloud, you must have an Active Directory service that can be used to connect to the SMB volume. Please see the creating and managing SMB volumes documentation for further information. It’s also worth reviewing the security considerations for SMB access.

Once you have configured your NetApp Cloud SMB Volume, the best option is to mount the SMB share and then create symbolic links pointing at local folders (for example, under C:\OctopusShared\) for the Artifacts, Packages, TaskLogs, Imports, and EventExports folders, which need to be available to all nodes.

Before installing Octopus, follow the steps below on each Compute Engine instance to mount your SMB share.

  1. In the Google Cloud Console, go to the Volumes page.

  2. Click the SMB volume for which you want to map an SMB share.

  3. Scroll to the right, click More (...), and then click Mount Instructions.

  4. Follow the instructions in the Mount Instructions for SMB window that appears.

  5. Create folders in your SMB share for the Artifacts, Packages, TaskLogs, Imports, and EventExports folders.

  6. Create the symbolic links for the Artifacts, Packages, TaskLogs, Imports, and EventExports folders.

    Run the following PowerShell script, substituting the placeholder values with your own:

    
    # (Optional) add the auth for the symbolic links. You can get the details from the Cloud Volume mount instructions.
    cmdkey /add:your-smb-share-address /user:username /pass:XXXXXXXXXXXXXX
    
    # Create the local folder to use to create the symbolic links within.
    $LocalFolder="C:\OctopusShared"
    
    if (-not (Test-Path -Path $LocalFolder)) {
       New-Item -ItemType directory -Path $LocalFolder    
    }
    
    $SmbShare = "\\your-smb-share\share-name"
    
    # Create symbolic links
    $ArtifactsFolder = Join-Path -Path $LocalFolder -ChildPath "Artifacts"
    if (-not (Test-Path -Path $ArtifactsFolder)) {
        New-Item -Path $ArtifactsFolder -ItemType SymbolicLink -Value "$SmbShare\Artifacts"
    }
    
    $PackagesFolder = Join-Path -Path $LocalFolder -ChildPath "Packages"
    if (-not (Test-Path -Path $PackagesFolder)) {
        New-Item -Path $PackagesFolder -ItemType SymbolicLink -Value "$SmbShare\Packages"
    }
    
    $TaskLogsFolder = Join-Path -Path $LocalFolder -ChildPath "TaskLogs"
    if (-not (Test-Path -Path $TaskLogsFolder)) {
        New-Item -Path $TaskLogsFolder -ItemType SymbolicLink -Value "$SmbShare\TaskLogs"
    }
    
    $ImportsFolder = Join-Path -Path $LocalFolder -ChildPath "Imports"
    if (-not (Test-Path -Path $ImportsFolder)) {
        New-Item -Path $ImportsFolder -ItemType SymbolicLink -Value "$SmbShare\Imports"
    }
    
    $EventExportsFolder = Join-Path -Path $LocalFolder -ChildPath "EventExports"
    if (-not (Test-Path -Path $EventExportsFolder)) {
        New-Item -Path $EventExportsFolder -ItemType SymbolicLink -Value "$SmbShare\EventExports"
    }

    Remember to create the folders in the SMB share before trying to create the symbolic links.
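
    Optionally, you can confirm the links were created and point at the share; a quick check, assuming the C:\OctopusShared folder used above:

    # List the symbolic links and the SMB targets they point to.
    Get-ChildItem -Path "C:\OctopusShared" | Select-Object Name, LinkType, Target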

Once you’ve completed those steps, install Octopus on each node. When Octopus is installed on all nodes, run the path command to change the paths to the shared storage:

& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' path `
--artifacts "C:\OctopusShared\Artifacts" `
--nugetRepository "C:\OctopusShared\Packages" `
--taskLogs "C:\OctopusShared\TaskLogs" `
--imports "C:\OctopusShared\Imports" `
--eventExports "C:\OctopusShared\EventExports"

Changing the paths only needs to be done once, and not on each node, as the values are stored in the database.
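
To confirm the nodes picked up the new locations, you can inspect the stored configuration from any node; a quick check, assuming the default instance name:

# Display the current server configuration, which includes the shared storage paths.
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' show-configuration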

Filestore using NFS

Once you have created a Filestore instance, the best option is to mount the NFS share using the LocalSystem account, and then create symbolic links pointing at local folders (for example, under C:\OctopusShared\) for the Artifacts, Packages, TaskLogs, Imports, and EventExports folders, which need to be available to all nodes.

Before installing Octopus, follow the steps below on each Compute Engine instance to mount your NFS share.

  1. Install NFS on the Windows VM

    On the Windows VM, open PowerShell as an administrator, and install the NFS client:

    # Install the NFS client
    Install-WindowsFeature -Name NFS-Client 

    Restart the Windows VM instance as prompted, then reconnect.

  2. Configure the user ID used by the NFS client

    In PowerShell, run the following commands to create two new registry entries, AnonymousUid and AnonymousGid:

    New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default" `
     -Name "AnonymousUid" -Value "0" -PropertyType DWORD
    
    New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default" `
     -Name "AnonymousGid" -Value "0" -PropertyType DWORD
  3. Restart the NFS client service

    nfsadmin client stop
    
    nfsadmin client start
  4. Create a batch file (.bat or .cmd) to mount the NFS share.

    net use X: \\your-nfs-share\share-name

    Substitute the placeholders with your own values:

    • X: with the mapped drive letter you want
    • your-nfs-share with either the host name or IP address of the Filestore instance
    • share-name with the Filestore instance share name
  5. Create a Windows Scheduled Task to run at system startup to mount the NFS share using the batch file.

    Below is an example scheduled task for mounting an NFS volume. Remember to substitute C:\OctoHA\MountNfsShare.cmd with the path to your batch file and ensure the task is set to run as LocalSystem.

    <?xml version="1.0" encoding="UTF-16"?>
    <Task version="1.4" xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
      <RegistrationInfo>
        <URI>\OctopusDeploy - mount nfs volume</URI>
      </RegistrationInfo>
      <Triggers>
        <BootTrigger>
          <Enabled>true</Enabled>
        </BootTrigger>
      </Triggers>
      <Principals>
        <Principal id="Author">
          <UserId>S-1-5-18</UserId>
          <RunLevel>HighestAvailable</RunLevel>
        </Principal>
      </Principals>
      <Settings>
        <MultipleInstancesPolicy>IgnoreNew</MultipleInstancesPolicy>
        <DisallowStartIfOnBatteries>true</DisallowStartIfOnBatteries>
        <StopIfGoingOnBatteries>true</StopIfGoingOnBatteries>
        <AllowHardTerminate>true</AllowHardTerminate>
        <StartWhenAvailable>false</StartWhenAvailable>
        <RunOnlyIfNetworkAvailable>false</RunOnlyIfNetworkAvailable>
        <IdleSettings>
          <StopOnIdleEnd>true</StopOnIdleEnd>
          <RestartOnIdle>false</RestartOnIdle>
        </IdleSettings>
        <AllowStartOnDemand>true</AllowStartOnDemand>
        <Enabled>true</Enabled>
        <Hidden>true</Hidden>
        <RunOnlyIfIdle>false</RunOnlyIfIdle>
        <DisallowStartOnRemoteAppSession>false</DisallowStartOnRemoteAppSession>
        <UseUnifiedSchedulingEngine>true</UseUnifiedSchedulingEngine>
        <WakeToRun>false</WakeToRun>
        <ExecutionTimeLimit>PT1H</ExecutionTimeLimit>
        <Priority>5</Priority>
      </Settings>
      <Actions Context="Author">
        <Exec>
          <Command>C:\OctoHA\MountNfsShare.cmd</Command>
        </Exec>
      </Actions>
    </Task>
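
    If you prefer not to import XML, a similar startup task can be registered directly from PowerShell; this is a sketch that assumes the same C:\OctoHA\MountNfsShare.cmd batch file and the LocalSystem account:

    # Register a scheduled task that runs the mount script at startup as LocalSystem (SYSTEM).
    $action    = New-ScheduledTaskAction -Execute "C:\OctoHA\MountNfsShare.cmd"
    $trigger   = New-ScheduledTaskTrigger -AtStartup
    $principal = New-ScheduledTaskPrincipal -UserId "SYSTEM" -LogonType ServiceAccount -RunLevel Highest
    Register-ScheduledTask -TaskName "OctopusDeploy - mount nfs volume" -Action $action -Trigger $trigger -Principal $principal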

    You can add multiple Actions to a Scheduled task. If you want to be sure the NFS share is mounted before the Octopus Service is started, you can set the service Startup Type to Manual, and add the following command to run after the NFS share is mounted:

    "C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe" checkservices --instances OctopusServer
    <Exec>
      <Command>"C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe"</Command>
      <Arguments>checkservices --instances OctopusServer</Arguments>
    </Exec>
    

    This is, in effect, the same as using the watchdog command to configure a scheduled task to monitor the Octopus Server service.
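
    As a sketch, configuring the watchdog looks like this (it creates a scheduled task that checks the named instances at the given interval in minutes):

    # Create a watchdog scheduled task that checks the Octopus Server service every 5 minutes.
    & 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' watchdog --create --instances OctopusServer --interval 5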

  6. Create folders in your NFS share for the Artifacts, Packages, TaskLogs, Imports, and EventExports folders.

  7. Create the symbolic links for the Artifacts, Packages, TaskLogs, Imports, and EventExports folders.

    Run the following PowerShell script, substituting the placeholder values with your own:

    # Create the local folder to use to create the symbolic links within.
    $LocalFolder="C:\OctopusShared"
    
    if (-not (Test-Path -Path $LocalFolder)) {
       New-Item -ItemType directory -Path $LocalFolder    
    }
    
    $NfsShare = "\\your-nfs-share\share-name"
    
    # Create symbolic links
    $ArtifactsFolder = Join-Path -Path $LocalFolder -ChildPath "Artifacts"
    if (-not (Test-Path -Path $ArtifactsFolder)) {
        New-Item -Path $ArtifactsFolder -ItemType SymbolicLink -Value "$NfsShare\Artifacts"
    }
    
    $PackagesFolder = Join-Path -Path $LocalFolder -ChildPath "Packages"
    if (-not (Test-Path -Path $PackagesFolder)) {
        New-Item -Path $PackagesFolder -ItemType SymbolicLink -Value "$NfsShare\Packages"
    }
    
    $TaskLogsFolder = Join-Path -Path $LocalFolder -ChildPath "TaskLogs"
    if (-not (Test-Path -Path $TaskLogsFolder)) {
        New-Item -Path $TaskLogsFolder -ItemType SymbolicLink -Value "$NfsShare\TaskLogs"
    }
    
    $ImportsFolder = Join-Path -Path $LocalFolder -ChildPath "Imports"
    if (-not (Test-Path -Path $ImportsFolder)) {
        New-Item -Path $ImportsFolder -ItemType SymbolicLink -Value "$NfsShare\Imports"
    }
    
    $EventExportsFolder = Join-Path -Path $LocalFolder -ChildPath "EventExports"
    if (-not (Test-Path -Path $EventExportsFolder)) {
        New-Item -Path $EventExportsFolder -ItemType SymbolicLink -Value "$NfsShare\EventExports"
    }

    Remember to create the folders in the NFS share before trying to create the symbolic links.

Once you’ve completed those steps, install Octopus on each node. When Octopus is installed on all nodes, run the path command to change the paths to the shared storage:

& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' path `
--artifacts "C:\OctopusShared\Artifacts" `
--nugetRepository "C:\OctopusShared\Packages" `
--taskLogs "C:\OctopusShared\TaskLogs" `
--imports "C:\OctopusShared\Imports" `
--eventExports "C:\OctopusShared\EventExports"

Changing the paths only needs to be done once, and not on each node, as the values are stored in the database.

Load balancing in Google Cloud

To distribute traffic to the Octopus web portal on multiple nodes, you need to use a load balancer. Google Cloud provides two options you should consider to distribute HTTP/HTTPS traffic to your Compute Engine instances.

If you are only using Listening Tentacles, we recommend using the HTTP(S) Load Balancer.

However, Polling Tentacles aren’t compatible with the HTTP(S) Load Balancer, so instead, we recommend using the Network Load Balancer. It allows you to configure TCP Forwarding rules on a specific port to each compute engine instance, which is one way to route traffic to each individual node as required for Polling Tentacles when running Octopus High Availability.

To use Network Load Balancers exclusively for Octopus High Availability with Polling Tentacles you’d potentially need to configure multiple load balancer(s) / forwarding rules:

  • One to serve the Octopus Web Portal HTTP traffic to your backend pool of Compute Engine instances.

  • One for each Compute Engine instance for Polling Tentacles to connect to.

With Network Load Balancers, you can configure a health check to ensure your Compute Engine instances are healthy before traffic is served to them.

Octopus Server provides a health check endpoint for your load balancer to ping: /api/octopusservernodes/ping.

Making a standard HTTP GET request to this URL on your Octopus Server nodes will return:

  • HTTP Status Code 200 OK as long as the Octopus Server node is online and not in drain mode.
  • HTTP Status Code 418 I'm a teapot when the Octopus Server node is online, but it is currently in drain mode preparing for maintenance.
  • Anything else indicates the Octopus Server node is offline, or something has gone wrong with this node.

The Octopus Server node configuration is also returned as JSON in the HTTP response body.
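
For example, you can probe a node directly to preview what your load balancer’s health check will see; a sketch, assuming a node reachable at octo1.domain.com over HTTP:

# Probe the node's health check endpoint. 200 = online, 418 = draining for maintenance.
try {
    $response = Invoke-WebRequest -Uri "http://octo1.domain.com/api/octopusservernodes/ping" -UseBasicParsing
    Write-Host "Node is online. Status code: $($response.StatusCode)"
} catch {
    # Windows PowerShell throws for non-2xx responses, including 418 (drain mode).
    Write-Host "Node is draining or offline: $($_.Exception.Message)"
}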

We typically recommend using a round-robin (or similar) approach for sharing traffic between the nodes in your cluster, as the Octopus Web Portal is stateless.

All package uploads are sent as a POST to the REST API endpoint /api/[SPACE-ID]/packages/raw. Because the REST API will be behind a load balancer, you’ll need to configure the following on the load balancer:

  • Timeout: Octopus is designed to handle 1 GB+ packages, which take longer than the typical HTTP/HTTPS timeout to upload.
  • Request Size: Octopus does not have a size limit on the request body for packages. Some load balancers only allow 2 or 3 MB files by default.

Polling Tentacles with HA

Listening Tentacles require no special configuration for Octopus High Availability. Polling Tentacles, however, poll a server at regular intervals to check if there are any tasks waiting for the Tentacle to perform. In a High Availability scenario Polling Tentacles must poll all of the Octopus Server nodes in your configuration.

Connecting Polling Tentacles

Whilst a Tentacle could poll a load balancer in an Octopus High Availability cluster, there is a risk, depending on your load balancer configuration, that the Tentacle will not poll all servers in a timely manner.

We recommend two options when configuring Polling Tentacles to connect to your Octopus High Availability cluster:

  • Using a unique address, and the same listening port (10943 by default) for each node.
  • Using the same address and a unique port for each node.

These are discussed further in the next sections.

Using a unique address

In this scenario, no load balancer is required. Instead, each Octopus node would be configured to listen on the same port (10943 by default) for inbound traffic. In addition, each node would be able to be reached directly by your Polling Tentacle on a unique address for the node.

For each node in your HA cluster:

  • Ensure the communication port Octopus listens on (10943 by default) is open, including on any firewalls.
  • Register the node with the Poll Server command line option. Specify the unique address for the node, including the listening port. For example, in a three-node cluster:
    • Node1 would use address: Octo1.domain.com:10943
    • Node2 would use address: Octo2.domain.com:10943
    • Node3 would use address: Octo3.domain.com:10943

The important thing to remember is that each node should be using a unique address and the same port.
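
As a sketch, registering a Polling Tentacle against the three example nodes above means running the poll-server registration once per node address (the API key below is a placeholder):

# Register the Polling Tentacle with each Octopus Server node individually.
$nodes = @("https://Octo1.domain.com", "https://Octo2.domain.com", "https://Octo3.domain.com")
foreach ($node in $nodes) {
    & 'C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe' poll-server --server=$node --apikey=API-XXXXXXXXXXXXXXXXXXXXXXXXXX
}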

Tip: A Polling Tentacle will connect to the Octopus REST API over ports 80 or 443 when it is registering itself with the Octopus Server. After that, it will connect to the Octopus Server node over port 10943 (by default).

It’s important to ensure that any firewalls also allow port 80 or 443 for the initial Tentacle registration.

Using a unique port

In this scenario, a type of Network Address Translation (NAT) is leveraged by using the same address and unique ports, usually routed through a load balancer or other network device. Each Octopus node would be configured to listen on a different port (starting at 10943 by default) for inbound traffic.

The advantage of using unique ports is that the Polling Tentacle doesn’t need to know each node’s address, only the port. The address translation is handled by the load balancer. This allows each node to have a private IP address, with no public access from outside your network required.

Imagine a three-node HA cluster. For each one, we expose a different port to listen on using the Octopus.Server configure command:

  • Node1 - Port 10943
  • Node2 - Port 10944
  • Node3 - Port 10945
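
A sketch of that configuration for Node2, assuming the commsListenPort setting of the Octopus.Server configure command:

# On Node2, change the port the Octopus communications service listens on to 10944.
& 'C:\Program Files\Octopus Deploy\Octopus\Octopus.Server.exe' configure --instance "OctopusServer" --commsListenPort=10944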

Next on the load balancer, create Network Address Translation (NAT) rules and point them to each node in your HA Cluster:

  • Open port 10943 and route traffic to Node1 in your HA Cluster
  • Open port 10944 and route traffic to Node2 in your HA Cluster
  • Open port 10945 and route traffic to Node3 in your HA Cluster
  • Continue for any additional nodes in your HA cluster.

If you configured your nodes to use a different listening port, replace 10943-10945 with your port range.

The important thing to remember is that each node should be using the same address and a different port.

Registering Polling Tentacles

There are two options to add Octopus Servers to a Polling Tentacle, via the command-line or via editing the Tentacle.config file directly.

Command line:

Configuring a Polling Tentacle via the command-line is the preferred option, with the command executed once per server. An example command using the default instance can be seen below:

C:\Program Files\Octopus Deploy\Tentacle>Tentacle poll-server --server=http://my.Octopus.server --apikey=API-77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6

For more information on this command please refer to the Tentacle Poll Server command line options.

Tentacle.config:

Alternatively, you can edit Tentacle.config directly to add each Octopus Server (this is interpreted as a JSON array of servers). This approach is not recommended, as the Tentacle service on each machine will need to be restarted before it accepts connections from the newly added servers.

<set key="Tentacle.Communication.TrustedOctopusServers">
[
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.160:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"},
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.161:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"},
  {"Thumbprint":"77751F90F9EEDCEE0C0CD84F7A3CC726AD123FA6","CommunicationStyle":2,"Address":"https://10.0.255.162:10943","Squid":null,"SubscriptionId":"poll://g3662re9njtelsyfhm7t/"}
]
</set>

Notice there is an address entry for each Octopus Server in the High Availability configuration.
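
If you do edit Tentacle.config directly, restart the Tentacle service afterwards so it picks up the new trusted server list; for example, assuming the default service name:

# Restart the Tentacle Windows service to apply the configuration change.
Restart-Service -Name "OctopusDeploy Tentacle"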
