Maintenance Window Instructions for instances with encrypted volumes attached

During Maintenance Windows (MWs) we need to evacuate VMs from the affected compute nodes. Since instances with encrypted volumes attached cannot be live migrated by default (CityNetwork is not explicitly allowed to access the key with which the volume is encrypted), we offer several options for VM evacuation, from which you can pick:

  1. You can share access with our admin user through a Barbican ACL, so that we can complete the live migration during the MW
  2. When announcing the MW, we provide a timeframe during which you can manually "offline migrate" your VMs (shelve/unshelve)
  3. We forcibly stop VMs with 30 minutes' notice after the timeframe for manual migration has passed. In this case, you must start the VM manually once it has been migrated.

We describe each of these options in more detail below.

Sharing the secret with ACL


This option assumes that you trust us to read the secret with which the volume is encrypted, so that we can live migrate your VM during the maintenance. When announcing the MW we will share the user ID that needs to be granted access. This is a unique user per region that is used only during MWs, and you can revoke the ACL once the MW is finished. The list of user IDs per region can be found on the page: List of MW user UUIDs. Below we provide instructions on how to do that.


Let's assume the following encrypted volume has been created:

~$ openstack volume show ed466d1a-0eab-4213-8ac5-ddc07c75ef06
+------------------------------+--------------------------------------+
| Field                        | Value                                |
+------------------------------+--------------------------------------+
| attachments                  | []                                   |
| availability_zone            | nova                                 |
| bootable                     | false                                |
| consistencygroup_id          | None                                 |
| created_at                   | 2022-02-11T17:11:02.000000           |
| description                  | None                                 |
| encrypted                    | True                                 |
| id                           | ed466d1a-0eab-4213-8ac5-ddc07c75ef06 |
| multiattach                  | False                                |
| name                         | test                                 |
| os-vol-tenant-attr:tenant_id | 607291ca3ff546218b0701b5f5c6ede4     |
| properties                   |                                      |
| replication_status           | None                                 |
| size                         | 10                                   |
| snapshot_id                  | None                                 |
| source_volid                 | None                                 |
| status                       | available                            |
| type                         | volumes_hdd_encrypted                |
| updated_at                   | 2022-02-11T17:11:30.000000           |
| user_id                      | 966ad341f4e14920b5f589f900246ccc     |
+------------------------------+--------------------------------------+
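
For reference, such a volume could be created as follows (a sketch; it assumes the volumes_hdd_encrypted volume type shown above is available in your region):

~$ openstack volume create --type volumes_hdd_encrypted --size 10 test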


  1. Identify the key for the volume

    This requires Cinder API microversion 3.66 or higher, which is available in Wallaby and later releases.

    ~$ key_uuid=$(openstack volume show --os-volume-api-version 3.66 ed466d1a-0eab-4213-8ac5-ddc07c75ef06 -c encryption_key_id -f value)
  2. Create an ACL rule for our admin user

    ~$ openstack acl user add --user <user_id> --operation-type read https://${OS_REGION_NAME}.citycloud.com:9311/v1/secrets/$key_uuid
    +----------------+----------------+---------------+---------------------------+---------------------------+---------------------------------------------------------------------------------+
    | Operation Type | Project Access | Users         | Created                   | Updated                   | Secret ACL Ref                                                                  |
    +----------------+----------------+---------------+---------------------------+---------------------------+---------------------------------------------------------------------------------+
    | read           | True           | ['<user_id>'] | 2022-02-11T18:47:16+00:00 | 2022-02-11T18:47:16+00:00 | https://<region>.citycloud.com:9311/v1/secrets/79204ec5-1863-494a-9f84-0c88b8faed1c/acl |
    +----------------+----------------+---------------+---------------------------+---------------------------+---------------------------------------------------------------------------------+
    ~$ 
  3. Revoke the ACL once the MW is finished

    ~$ openstack acl user remove --user <user_id> --operation-type read https://${OS_REGION_NAME}.citycloud.com:9311/v1/secrets/$key_uuid
    +----------------+----------------+-------+---------------------------+---------------------------+-----------------------------------------------------------------------------------------+
    | Operation Type | Project Access | Users | Created                   | Updated                   | Secret ACL Ref                                                                          |
    +----------------+----------------+-------+---------------------------+---------------------------+-----------------------------------------------------------------------------------------+
    | read           | True           | []    | 2022-02-11T18:47:16+00:00 | 2022-02-13T09:58:44+00:00 | https://<region>.citycloud.com:9311/v1/secrets/79204ec5-1863-494a-9f84-0c88b8faed1c/acl |
    +----------------+----------------+-------+---------------------------+---------------------------+-----------------------------------------------------------------------------------------+
    ~$ 
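
At any point you can sanity-check the captured key UUID and inspect the current ACL state of the secret; openstack acl get lists the operation types and users currently granted access (a quick check, reusing $key_uuid from step 1):

~$ echo $key_uuid
79204ec5-1863-494a-9f84-0c88b8faed1c
~$ openstack acl get https://${OS_REGION_NAME}.citycloud.com:9311/v1/secrets/$key_uuid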

Moving the VM out of the affected node

When the MW for the region is announced, you will be provided with a timeframe during which you can complete VM re-scheduling to unaffected, already upgraded compute nodes. During this period you can follow the instructions below. Eventually, this results in an offline migration of the affected instance.

Be aware that for ephemeral-based instances a new Glance image will be created during shelve, and a new instance will be spawned from that image during unshelve.
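
To check whether an instance is ephemeral-based before shelving it, you can inspect its image field; an image-backed (ephemeral) instance shows its source image, while a volume-backed one shows an empty value or "N/A (booted from volume)" depending on the client version (a quick check, not part of the procedure itself):

~$ openstack server show <server-uuid> -c image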


  1. Shelve server

    ~$ openstack server shelve <server-uuid>
  2. Verify server status

    ~$ openstack server show <server-uuid> -c status
    +--------+-------------------+
    | Field  | Value             |
    +--------+-------------------+
    | status | SHELVED_OFFLOADED |
    +--------+-------------------+
    ~$
  3. Unshelve server

    ~$ openstack server unshelve <server-uuid>
  4. Verify server status

    ~$ openstack server show <server-uuid> -c status
    +--------+--------+
    | Field  | Value  |
    +--------+--------+
    | status | ACTIVE |
    +--------+--------+
    ~$
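
If you have several affected instances, the shelve/unshelve cycle above can be scripted. Below is a minimal sketch, assuming a hypothetical file servers.txt with one affected server UUID per line; it waits for each server to reach SHELVED_OFFLOADED before unshelving it:

~$ while read uuid; do
>     # shelve the server and wait until it is fully offloaded
>     openstack server shelve "$uuid"
>     until [ "$(openstack server show "$uuid" -c status -f value)" = "SHELVED_OFFLOADED" ]; do
>         sleep 10
>     done
>     # spawn the server again on an unaffected compute node
>     openstack server unshelve "$uuid"
> done < servers.txt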

Manual server boot

If neither of the options mentioned above is possible for some reason, then during the MW we will have to shut down and cold-migrate the VM from the affected node. However, since we don't have access to your encryption key, you will need to boot the VM manually afterward. It will remain powered off until you take action.

In case of a forced cold migration (if the VM was not moved and the secret was not shared), multiple VMs in the same anti-affinity group can be brought down at the same time. We highly recommend considering the previous options for HA setups that rely on anti-affinity.
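
To see which of your servers would be affected together, you can list your server groups and their members (the group UUID below is a placeholder):

~$ openstack server group list
~$ openstack server group show <group-uuid>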


We will notify you 30 minutes before powering down your VM, so that you can boot it up again as soon as possible. To power on the VM with the CLI, run:

~$ openstack server start <server-uuid>
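
If several of your VMs were cold-migrated, you can list everything left in SHUTOFF state and start the servers in one go. Note that this sketch targets all SHUTOFF servers in the project, including any you stopped on purpose, so review the list before running the second command:

~$ openstack server list --status SHUTOFF -f value -c ID
~$ openstack server list --status SHUTOFF -f value -c ID | xargs -n1 openstack server start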