Workers are machines that can execute tasks that don't need to be run on the Octopus server or individual deployment targets.

Workers are useful for the following steps:

  • Publishing to Azure websites.
  • Deploying AWS CloudFormation templates.
  • Deploying to AWS Elastic Beanstalk.
  • Uploading files to Amazon S3.
  • Backing up databases.
  • Performing database schema migrations.
  • Configuring load balancers.

Workers diagram

Built-in Worker

The Octopus server has a built-in worker that can deploy packages, execute scripts, and perform tasks that don't need to be performed on a deployment target. The built-in worker is enabled by default; it can be disabled by navigating to Configuration and selecting Disable for the Run steps on the Octopus Server option.

Learn more about the built-in worker.

External Workers

An external worker is either a Tentacle or an SSH machine that has been registered with the Octopus server as a worker. The setup of a worker is the same as setting up a deployment target as a Windows Tentacle target or an SSH target, except that instead of being added to an environment, a worker is added to a worker pool.

Workers have machine policies, are health checked, and run Calamari, just like deployment targets.

Registering an External Worker

Once the Tentacle or SSH machine has been configured, workers can be added using the Web Portal, the Octopus Deploy REST API, the Octopus.Clients library or with the tentacle executable. Only a user with the ConfigureServer permission can add or edit workers.
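For the REST API route, the registration can be sketched roughly as below. This is a hedged illustration, not documented client code: the `/api/workers` route exists, but the exact field names (`WorkerPoolIds`, `Endpoint`, `CommunicationStyle`) and every value (server URL, API key, thumbprint, pool ID) are assumptions you should verify against your server's API documentation.

```python
# Hypothetical sketch: register an already-configured Listening Tentacle as a
# worker via the Octopus REST API. Field names and all values are illustrative
# assumptions -- check them against your Octopus server's /api documentation.
import json
import urllib.request

def build_worker_registration(name, thumbprint, uri, worker_pool_ids):
    """Build the JSON body for POST /api/workers (field names assumed)."""
    return {
        "Name": name,
        "WorkerPoolIds": worker_pool_ids,
        "Endpoint": {
            "CommunicationStyle": "TentaclePassive",  # a Listening Tentacle
            "Uri": uri,
            "Thumbprint": thumbprint,
        },
    }

def register_worker(server, api_key, payload):
    """POST the registration to the Octopus server (not called here)."""
    req = urllib.request.Request(
        f"{server}/api/workers",
        data=json.dumps(payload).encode(),
        headers={"X-Octopus-ApiKey": api_key,
                 "Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

payload = build_worker_registration(
    "worker-01",                                # display name (illustrative)
    "YOUR-TENTACLE-THUMBPRINT",                 # the Tentacle's thumbprint
    "https://worker-01.example.com:10933/",     # the Tentacle's listening URI
    ["WorkerPools-1"],                          # target worker pool ID
)
print(payload["Endpoint"]["CommunicationStyle"])
```

The actual `register_worker` call needs a user with the ConfigureServer permission, matching the note above.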

Registering Workers in the Web Portal

  1. Navigate to Infrastructure ➜ Workers and click ADD WORKER.
  2. Select WINDOWS or SSH CONNECTION and click the card for the type of worker you want to configure.

You can choose between:

Register a Worker as a Listening Tentacle

Before you can configure your Windows servers as Tentacles, you need to install Tentacle Manager on the machines that you plan to use as Tentacles.

Tentacle Manager is the Windows application that configures your Tentacle. Once installed, you can access it from your start menu/start screen. Tentacle Manager can configure Tentacles to use a proxy, delete the Tentacle, and show diagnostic information about the Tentacle.

  1. Start the Tentacle installer, accept the license agreement, and follow the onscreen prompts.

  2. When the Octopus Deploy Tentacle Setup Wizard has completed, click Finish to exit the wizard.

  3. When the Tentacle Manager launches, click Get Started... and Next.

  4. Accept the default configuration and log directory and the application directory, or choose different locations, and click Next.

  5. On the communication style screen, select Listening Tentacle and click Next.

  6. In the Octopus Web Portal, navigate to the Infrastructure tab, select Workers and click ADD WORKER ➜ WINDOWS, and select Listening Tentacle.

  7. Copy the Thumbprint (the long alphanumerical string).

  8. Back on the Tentacle server, accept the default listening port 10933 and paste the Thumbprint into the Octopus Thumbprint field and click Next.

  9. Click INSTALL, and after the installation has finished click Finish.

  10. Back in the Octopus Web Portal, enter the DNS name or IP address of the machine the Tentacle is installed on, and click NEXT.

  11. Add a display name for the worker (the server where you just installed the Listening Tentacle).

  12. Select which worker pool the worker will be assigned to and click SAVE.

After you have saved the new worker, you can navigate to the worker pool you assigned it to and view its status.
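The wizard steps above roughly map onto Tentacle's command-line interface. As a sketch of that mapping (the subcommand and flag names below are assumptions to check against `Tentacle.exe help`), the commands can be built up like this:

```python
# Sketch only: build (but don't run) the Tentacle.exe command lines that
# roughly correspond to the Tentacle Manager wizard steps above. Subcommand
# and flag names are assumptions -- verify them with `Tentacle.exe help`.
thumbprint = "YOUR-OCTOPUS-SERVER-THUMBPRINT"  # step 7: copied from the portal

commands = [
    # Steps 3-4: create the instance with default directories
    ["Tentacle.exe", "create-instance", "--instance", "Worker"],
    ["Tentacle.exe", "new-certificate", "--instance", "Worker"],
    # Steps 5 and 8: listen on the default port 10933 and trust the server
    ["Tentacle.exe", "configure", "--instance", "Worker",
     "--port", "10933", "--trust", thumbprint],
    # Step 9: install and start the Tentacle Windows service
    ["Tentacle.exe", "service", "--instance", "Worker",
     "--install", "--start"],
]

for cmd in commands:
    print(" ".join(cmd))
```

After these, the `register-worker` command shown later in this page would register the instance with the server.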

Register a Worker as a Polling Tentacle

It is not currently possible to configure a worker as a Polling Tentacle with the Tentacle Manager; please see Registering Workers with the Tentacle Executable.

Register a Worker with an SSH Connection

To register a worker with an SSH Connection, see the instructions for configuring an SSH connection.

Registering Workers with the Tentacle Executable

Tentacle workers can also register with the server using the Tentacle executable (version 3.22.0 or later), for example:

.\Tentacle.exe register-worker --instance MyInstance --server "https://your-octopus-server" --comms-style TentaclePassive --apikey "API-CS0SW5SQJNLUBQCUBPK8LZY3KYO" --workerpool "Default Worker Pool"

Use TentacleActive instead of TentaclePassive to register a polling Tentacle worker.

The Tentacle executable can also be used to deregister workers, for example:

.\Tentacle.exe deregister-worker --instance MyInstance --server "https://your-octopus-server" --apikey "API-CS0SW5SQJNLUBQCUBPK8LZY3KYO"

Recommendations for External Workers

We highly recommend setting up external workers on a different machine to the Octopus Server.

We also recommend running external workers as a different user account to the Octopus Server.

It can be advantageous to have workers on the same local network as the server to reduce package transfer times.

Default pools attached to cloud targets allow co-location of workers and targets. This can help make workers specific to your targets, and using external workers also makes the Octopus Server more secure.

Multiple Projects Run Simultaneously on Workers

Many workers may be running in parallel and a single worker can run multiple actions in parallel.

The task cap determines how many tasks (deployments or system tasks) can run simultaneously. The Octopus system variable Octopus.Action.MaxParallelism controls how much parallelism is allowed when executing a deployment action, and it applies to workers the same way it applies to deployment targets. For example, if Octopus.Action.MaxParallelism is at its default value of 10, any one deployment action will deploy to at most 10 deployment targets simultaneously, or have no more than 10 concurrent worker invocations running. Parallel steps in a deployment can each reach their own MaxParallelism. Coupled with multiple deployment tasks running, up to the task cap, the number of concurrent worker invocations can grow quickly.
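As a back-of-envelope illustration of how these limits multiply (all numbers below are assumed for the example, not recommended settings):

```python
# Rough worst-case estimate of concurrent worker invocations.
# All three values are illustrative; read yours from your own configuration.
task_cap = 5                 # concurrent tasks allowed on the server node
parallel_steps_per_task = 2  # steps configured to run in parallel (assumed)
max_parallelism = 10         # default Octopus.Action.MaxParallelism

# Each running task can have parallel steps, and each step can fan out to
# MaxParallelism workers or targets at once.
peak_invocations = task_cap * parallel_steps_per_task * max_parallelism
print(peak_invocations)  # 100 concurrent invocations in this worst case
```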

External workers and the built-in worker behave the same in this regard: workers can run many actions simultaneously, including actions from different projects. Note that this means the execution of an action doesn't have exclusive access to a worker, which could allow one project to access the working folder of another project.

Note that if external workers are added to the default pool, the workload is shared across those workers: a single external worker will be asked to perform the same load the built-in worker would have carried, two workers might get half each, and so on.

Workers in HA Setups

In an HA Octopus setup, each node has a task cap and can invoke a built-in worker locally, so a 4-node HA cluster has 4 built-in workers. If you move to external workers, you'll therefore likely need to provision at least as many workers as you have server nodes; otherwise, you'll be asking each worker to do the sum of what the HA nodes were previously doing.
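A quick sizing sketch with assumed numbers makes the point concrete:

```python
# Illustrative HA sizing: each node previously contributed one built-in
# worker, so an undersized external pool concentrates that load. Numbers
# are assumptions for the example, not recommendations.
ha_nodes = 4
builtin_workers_before = ha_nodes   # one built-in worker per HA node
external_workers = 2                # a deliberately undersized pool

# Load each external worker must absorb, relative to one built-in worker:
load_factor = builtin_workers_before / external_workers
print(load_factor)  # 2.0 -- each worker carries double a node's former load
```

Provisioning `external_workers = ha_nodes` (or more) brings the load factor back to 1 or below.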