To reduce cluster start time, you can designate predefined pools of idle instances to create worker nodes and the driver node. This is also called attaching the cluster to the pools. The cluster is created using instances in the pools. If a pool does not have sufficient idle resources to create the requested driver node or worker nodes, the pool expands by allocating new instances from the instance provider. When the cluster is terminated, the instances it used are returned to the pool and can be reused by a different cluster.
You can attach a different pool for the driver node and worker nodes, or attach the same pool for both.
You must use a pool for both the driver node and worker nodes, or for neither. Otherwise, an error occurs and your cluster isn’t created. This prevents a situation where the driver node has to wait for worker nodes to be created, or vice versa.
You must have permission to attach to each pool; see Pool access control.
To attach a cluster to a pool using the cluster creation UI, select the pool from the Driver Type or Worker Type dropdown when you configure the cluster. Available pools are listed at the top of each dropdown list. You can use the same pool or different pools for the driver node and worker nodes.
If you use the Clusters API, you must specify driver_instance_pool_id for the driver node and instance_pool_id for the worker nodes.
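As a sketch, a create-cluster request body with both fields might look like the following. The pool IDs, cluster name, and Spark version here are hypothetical placeholders; substitute the values from your own workspace.

```python
import json

# Minimal sketch of a Clusters API create request that attaches the
# cluster to pools. Both the driver node and the worker nodes draw
# from a pool, satisfying the both-or-neither rule described above.
create_cluster_request = {
    "cluster_name": "pool-backed-cluster",      # hypothetical name
    "spark_version": "13.3.x-scala2.12",        # hypothetical version
    "num_workers": 4,
    # Pool that supplies the worker nodes
    "instance_pool_id": "0123-456789-pool1",
    # Pool that supplies the driver node (may be the same ID)
    "driver_instance_pool_id": "0123-456789-pool2",
}

print(json.dumps(create_cluster_request, indent=2))
```

To use a single pool for both the driver node and worker nodes, set driver_instance_pool_id to the same value as instance_pool_id.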
When you attach a cluster to a pool, the following configuration properties are inherited from the pool:
Custom Cluster Tags: You can add cluster-specific custom tags, and the cluster receives both these and the tags inherited from the pools. However, a cluster-specific custom tag cannot use the same key name as a tag inherited from a pool; in other words, you cannot override an inherited tag.
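The tag-inheritance rule above can be sketched as a simple merge, assuming a hypothetical helper name; a cluster-level key that collides with an inherited key is rejected rather than overriding it:

```python
def merge_tags(pool_tags, cluster_tags):
    """Combine pool-inherited and cluster-level custom tags.

    Hypothetical illustration of the rule: both tag sets are applied,
    but a cluster-level tag may not reuse a key inherited from a pool.
    """
    conflicts = set(pool_tags) & set(cluster_tags)
    if conflicts:
        raise ValueError(f"cannot override pool tags: {sorted(conflicts)}")
    return {**pool_tags, **cluster_tags}

pool_tags = {"team": "data-eng"}        # inherited from the pool
cluster_tags = {"job": "nightly-etl"}   # added at the cluster level
print(merge_tags(pool_tags, cluster_tags))  # both tags are applied
```

Passing a cluster tag whose key already exists in the pool's tags (for example, a second "team" key) raises an error instead of silently replacing the inherited value.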