- Bring up undercloud director
- Prepare Undercloud for overcloud deployment
- Phases of overcloud node deployment
- Registering an overcloud node
- Configure Virtual Bare Metal Controller (VBMC)
- Configuration Files
- Register Nodes
- Introspection of overcloud nodes
- Check the provisioning state
- Flavor Details
- Check the properties of the ironic nodes
- Tagging nodes into profiles
- Heat Templates
- Overcloud Deployment
Bring up undercloud director
Prepare Undercloud for overcloud deployment
You can also configure a cluster with more than one Ceph node.
Phases of overcloud node deployment
Registration:
- The "stack" user uploads information about the proposed overcloud nodes
- The information includes credentials for power management
- The information is saved in the ironic database and used during the introspection phase.
Introspection:
- Ironic connects to the registered nodes to gather more details about the hardware resources.
- The discovery kernel and ramdisk images are used during this process.
Deployment:
- The "stack" user deploys the overcloud nodes, allocating resources and nodes that were discovered during the introspection phase.
- Hardware profiles and Heat templates are used during this phase.
Registering an overcloud node
Registering an overcloud node consists of adding it to ironic's list of possible nodes for the overcloud. The undercloud needs the following information to register a node:
- The type of power management being used (such as IPMI or PXE over SSH). The various power management drivers supported by ironic can be listed using "ironic driver-list".
- The node's IP address on the power management network.
- The credentials to be used for the power management interface.
- The MAC address for the NIC on the PXE/provisioning network.
- The kernel and ramdisk that will be used for introspection.
All of this information can be passed in either a JSON file or a CSV file. The "openstack baremetal import" command imports this file into the ironic database.
Configure Virtual Bare Metal Controller (VBMC)
The director can use virtual machines as nodes on a KVM host. It controls their power management through emulated IPMI devices. Since our lab setup uses virtual machines on KVM, which do not have iLO or any similar power management utility, we will use VBMC to help register the nodes.
Enable the repo below on your "KVM Host" to get the vbmc package:
https://git.openstack.org/cgit/openstack/virtualbmc
Create a virtual bare metal controller (BMC) for each virtual machine using the vbmc command.
[root@openstack ~]# vbmc add overcloud-controller.example --port 6321 --username admin --password redhat
[root@openstack ~]# vbmc add overcloud-compute.example --port 6322 --username admin --password redhat
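[root@openstack ~]# vbmc add overcloud-ceph.example --port 6320 --username admin --password redhat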
To list the available domains
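[root@openstack ~]# vbmc list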
+------------------------------+--------+---------+------+
| Domain name | Status | Address | Port |
+------------------------------+--------+---------+------+
| overcloud-ceph.example | down | :: | 6320 |
| overcloud-compute.example | down | :: | 6322 |
| overcloud-controller.example | down | :: | 6321 |
+------------------------------+--------+---------+------+
Next start all the virtual BMCs:
[root@openstack ~]# vbmc start overcloud-compute.example
[root@openstack ~]# vbmc start overcloud-controller.example
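[root@openstack ~]# vbmc start overcloud-ceph.example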
Check the status again
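[root@openstack ~]# vbmc list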
+------------------------------+---------+---------+------+
| Domain name | Status | Address | Port |
+------------------------------+---------+---------+------+
| overcloud-ceph.example | running | :: | 6320 |
| overcloud-compute.example | running | :: | 6322 |
| overcloud-controller.example | running | :: | 6321 |
+------------------------------+---------+---------+------+
Now all our domains are in the running state.
To get the list of supported drivers for IPMI connections:
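[stack@undercloud-director ~]$ ironic driver-list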
+---------------------+-----------------------+
| Supported driver(s) | Active host(s) |
+---------------------+-----------------------+
| pxe_drac | localhost.localdomain |
| pxe_ilo | localhost.localdomain |
| pxe_ipmitool | localhost.localdomain |
| pxe_ssh | localhost.localdomain |
+---------------------+-----------------------+
To check the power status of all our virtual hosts to make sure they are reachable from our undercloud.
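[stack@undercloud-director ~]$ ipmitool -I lanplus -H 10.43.138.12 -L ADMINISTRATOR -p 6322 -U admin -R 3 -N 5 -P redhat power status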
Chassis Power is off
[stack@undercloud-director ~]$ ipmitool -I lanplus -H 10.43.138.12 -L ADMINISTRATOR -p 6321 -U admin -R 3 -N 5 -P redhat power status
Chassis Power is off
[stack@undercloud-director ~]$ ipmitool -I lanplus -H 10.43.138.12 -L ADMINISTRATOR -p 6320 -U admin -R 3 -N 5 -P redhat power status
Chassis Power is off
Configuration Files
Create a JSON file describing your Overcloud baremetal nodes, call it "instack-threenodes.json", and place it in your home directory. The file should contain a JSON object with a single field, "nodes", containing a list of node descriptions.
Each node description must contain the following required fields:
- pm_type - driver for Ironic nodes, see Ironic Hardware Types for details
- pm_addr - node BMC IP address (hypervisor address in case of virtual environment)
- pm_user, pm_password - node BMC credentials
These credentials will be used to control the power of the overcloud hypervisors.
Below is my input JSON file, which I will use to import the nodes into the ironic database:
{
"nodes":[
{
"mac":[
"52:54:00:87:37:1f"
],
"name":"overcloud-controller.example",
"cpu":"4",
"memory":"10240",
"disk":"50",
"arch":"x86_64",
"pm_type":"pxe_ipmitool",
"pm_user":"admin",
"pm_addr": "10.43.138.12",
"pm_password": "redhat",
"pm_port": "6321"
},
{
"mac":[
"52:54:00:64:36:c6"
],
"name":"overcloud-compute.example",
"cpu":"4",
"memory":"10240",
"disk":"50",
"arch":"x86_64",
"pm_type":"pxe_ipmitool",
"pm_user":"admin",
"pm_addr": "10.43.138.12",
"pm_password": "redhat",
"pm_port": "6322"
},
{
"mac":[
"52:54:00:6f:3f:47"
],
"name":"overcloud-ceph.example",
"cpu":"4",
"memory":"20240",
"disk":"50",
"arch":"x86_64",
"pm_type":"pxe_ipmitool",
"pm_user":"admin",
"pm_addr": "10.43.138.12",
"pm_port": "6320",
"pm_password": "redhat"
}
]
}
Register Nodes
Register and configure nodes for your deployment with Ironic:
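For example, using the newer form of the "openstack baremetal import" command mentioned earlier (the exact syntax varies between releases, so treat this invocation as a sketch; it assumes the instack-threenodes.json file created above):
[stack@undercloud-director ~]$ openstack overcloud node import --provide instack-threenodes.json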
Started Mistral Workflow. Execution ID: 3c5dc807-b797-4b09-b6c1-3f6902cd4a26
Successfully registered node UUID 7995f1f2-4af7-4c5d-9099-fc928c4c73b3
Successfully registered node UUID 7c84cdf2-c5b2-47fb-a741-30c025b54183
Started Mistral Workflow. Execution ID: 62e4c913-a15e-438f-bedd-5648e2ba1aa0
Successfully set all nodes to available.
Introspection of overcloud nodes
For the introspection/discovery of overcloud nodes, ironic uses PXE provided by the undercloud. The "dnsmasq" service provides DHCP and PXE capabilities to the ironic service, and the PXE images are delivered over HTTP. Prior to introspection, the registered nodes have a valid kernel and ramdisk assigned to them, and every node queued for introspection should have:
- Power State should be power off
- Provision State should be available
- Maintenance should be False
- Instance UUID set to None
Use the command below to make sure our nodes meet the above requirements:
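[stack@undercloud-director ~]$ openstack baremetal node list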
+--------------------------------------+------------------------------+---------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------------------------+---------------+-------------+--------------------+-------------+
| ece1651a-6adc-4826-9f77-5d47891c6c9b | overcloud-controller.example | None | power off | available | False |
| d19a1bce-3792-428e-b242-fab2bab6213d | overcloud-compute.example | None | power off | available | False |
| 74d4151f-03b5-4c6a-badc-d2cbf6bba7af | overcloud-ceph.example | None | power off | available | False |
+--------------------------------------+------------------------------+---------------+-------------+--------------------+-------------+
The "openstack baremetal introspection" command is used to start the introspection and "bulk start" can be used to proceed with introspection of all nodes. The two nodes that will be checked are the controller and compute nodes
| driver | pxe_ipmitool |
| driver_info | {u'ipmi_port': u'6321', u'ipmi_username': u'admin', u'deploy_kernel': |
| | u'e16bd471-2bab-4bef-9e5f-c2b88def647f', u'ipmi_address': |
| | u'10.43.138.12', u'deploy_ramdisk': u'fdca9579-5a68-42a4-9ebf- |
| | fe6b605e6ae5', u'ipmi_password': u'******'} |
| driver | pxe_ipmitool |
| driver_info | {u'ipmi_port': u'6322', u'ipmi_username': u'admin', u'deploy_kernel': |
| | u'e16bd471-2bab-4bef-9e5f-c2b88def647f', u'ipmi_address': |
| | u'10.43.138.12', u'deploy_ramdisk': u'fdca9579-5a68-42a4-9ebf- |
| | fe6b605e6ae5', u'ipmi_password': u'******'} |
| driver | pxe_ipmitool |
| driver_info | {u'ipmi_port': u'6320', u'ipmi_username': u'admin', u'deploy_kernel': |
| | u'e16bd471-2bab-4bef-9e5f-c2b88def647f', u'ipmi_address': |
| | u'10.43.138.12', u'deploy_ramdisk': u'fdca9579-5a68-42a4-9ebf- |
| | fe6b605e6ae5', u'ipmi_password': u'******'} |
Change the provision state of the nodes to "manageable", for example:
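[stack@undercloud-director ~]$ openstack baremetal node manage overcloud-controller.example
[stack@undercloud-director ~]$ openstack baremetal node manage overcloud-compute.example
[stack@undercloud-director ~]$ openstack baremetal node manage overcloud-ceph.example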
Check the provisioning state
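[stack@undercloud-director ~]$ openstack baremetal node list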
+--------------------------------------+------------------------------+---------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------------------------------+---------------+-------------+--------------------+-------------+
| ece1651a-6adc-4826-9f77-5d47891c6c9b | overcloud-controller.example | None | power off | manageable | False |
| d19a1bce-3792-428e-b242-fab2bab6213d | overcloud-compute.example | None | power off | manageable | False |
| 74d4151f-03b5-4c6a-badc-d2cbf6bba7af | overcloud-ceph.example | None | power off | manageable | False |
+--------------------------------------+------------------------------+---------------+-------------+--------------------+-------------+
Run the introspection for each individual node to inspect its hardware attributes. A likely per-node invocation (assuming the node UUIDs listed above; the --provide flag returns each node to the available state once introspection finishes) is:
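[stack@undercloud-director ~]$ openstack overcloud node introspect --provide ece1651a-6adc-4826-9f77-5d47891c6c9b
[stack@undercloud-director ~]$ openstack overcloud node introspect --provide d19a1bce-3792-428e-b242-fab2bab6213d
[stack@undercloud-director ~]$ openstack overcloud node introspect --provide 74d4151f-03b5-4c6a-badc-d2cbf6bba7af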
Started Mistral Workflow. Execution ID: 1fc5064a-40c9-4471-9ca4-36577474f4ae
Waiting for introspection to finish...
Successfully introspected all nodes.
Introspection completed.
Started Mistral Workflow. Execution ID: 790bd02a-514f-4781-90e5-269b250876f8
Successfully set all nodes to available.
Started Mistral Workflow. Execution ID: d3c2e58a-54ab-48d8-87d8-58b84e5f6b7e
Waiting for introspection to finish...
Successfully introspected all nodes.
Introspection completed.
Started Mistral Workflow. Execution ID: 90ec9f5b-da54-4e32-a5fc-52c9e89b65e9
Successfully set all nodes to available.
Started Mistral Workflow. Execution ID: 04b8ec73-6440-432d-bcbf-72c6ee4c554a
Waiting for introspection to finish...
Successfully introspected all nodes.
Introspection completed.
Started Mistral Workflow. Execution ID: fa832937-b473-47ff-99ca-574ae2dd0bd1
Successfully set all nodes to available.
You can check the progress of the introspection using the below command from a different terminal (tailing the ironic-inspector logs):
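[stack@undercloud-director ~]$ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -f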
Alternatively, you can monitor the console of the VM to check the progress of the introspection, and re-attempt the introspection if it fails.
To check the exit status of the introspection, for example with a loop over the node UUIDs:
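[stack@undercloud-director ~]$ for i in $(openstack baremetal node list -f value -c UUID); do echo $i; openstack baremetal introspection status $i; done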
ece1651a-6adc-4826-9f77-5d47891c6c9b
+----------+-------+
| Field | Value |
+----------+-------+
| error | None |
| finished | True |
+----------+-------+
d19a1bce-3792-428e-b242-fab2bab6213d
+----------+-------+
| Field | Value |
+----------+-------+
| error | None |
| finished | True |
+----------+-------+
74d4151f-03b5-4c6a-badc-d2cbf6bba7af
+----------+-------+
| Field | Value |
+----------+-------+
| error | None |
| finished | True |
+----------+-------+
To get the disk-related inventory from the introspection data, for example (assuming jq is installed on the undercloud):
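[stack@undercloud-director ~]$ openstack baremetal introspection data save ece1651a-6adc-4826-9f77-5d47891c6c9b | jq '.inventory.disks'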
[
{
"size": 53687091200,
"rotational": true,
"vendor": "0x1af4",
"name": "/dev/vda",
"wwn_vendor_extension": null,
"wwn_with_extension": null,
"model": "",
"wwn": null,
"serial": null
}
]
HINT: This data can be used to assign root disks if required.
Once the introspection has completed, you can check the resources calculated by nova at the introspection stage for all three hypervisors:
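[stack@undercloud-director ~]$ openstack hypervisor stats show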
+----------------------+-------+
| Field | Value |
+----------------------+-------+
| count | 3 |
| current_workload | 0 |
| disk_available_least | 117 |
| free_disk_gb | 117 |
| free_ram_mb | 30720 |
| local_gb | 117 |
| local_gb_used | 0 |
| memory_mb | 30720 |
| memory_mb_used | 0 |
| running_vms | 0 |
| vcpus | 12 |
| vcpus_used | 0 |
+----------------------+-------+
Flavor Details
The undercloud will have a number of default flavors created at install time. In most cases these flavors do not need to be modified, but they can be if desired. By default, all overcloud instances will be booted with the baremetal flavor, so all baremetal nodes must have at least as much memory, disk, and cpu as that flavor.
In addition, there are profile-specific flavors created which can be used with the profile-matching feature:
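[stack@undercloud-director ~]$ openstack flavor list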
+--------------------------------------+---------------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+---------------+-------+------+-----------+-------+-----------+
| 30f44bc0-cad4-48a3-8b99-195ac0ccdb71 | ceph-storage | 4096 | 40 | 0 | 2 | True |
| 33ca9562-a9d5-457e-b4af-98188c6eef5c | compute | 4096 | 40 | 0 | 2 | True |
| 57cb3b2c-5d07-4aba-8606-fb9a5b033351 | baremetal | 4096 | 40 | 0 | 2 | True |
| 5883f1c8-8d56-42b3-b1a1-45e3e2957314 | block-storage | 4096 | 40 | 0 | 2 | True |
| 9989fb29-5796-4597-98e1-efe981358659 | swift-storage | 4096 | 40 | 0 | 2 | True |
| ecf3a3ff-a885-4bad-a05d-cba8f0125498 | control | 4096 | 40 | 0 | 2 | True |
+--------------------------------------+---------------+-------+------+-----------+-------+-----------+
Check the properties of the ironic nodes
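For example, with a small loop over the node UUIDs:
[stack@undercloud-director ~]$ for i in $(openstack baremetal node list -f value -c UUID); do echo $i; openstack baremetal node show $i -c properties; done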
ece1651a-6adc-4826-9f77-5d47891c6c9b
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| properties | {u'memory_mb': u'10240', u'cpu_arch': u'x86_64', u'local_gb': u'49', u'cpus': u'4', u'capabilities': |
| | u'cpu_aes:true,cpu_hugepages:true,boot_option:local'} |
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
d19a1bce-3792-428e-b242-fab2bab6213d
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| properties | {u'memory_mb': u'10240', u'cpu_arch': u'x86_64', u'local_gb': u'49', u'cpus': u'4', u'capabilities': |
| | u'cpu_aes:true,cpu_hugepages:true,boot_option:local'} |
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
74d4151f-03b5-4c6a-badc-d2cbf6bba7af
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| properties | {u'memory_mb': u'10240', u'cpu_arch': u'x86_64', u'local_gb': u'19', u'cpus': u'4', u'capabilities': |
| | u'cpu_aes:true,cpu_hugepages:true,boot_option:local'} |
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
Refer to the following article for more detail:
How to fix "NoValidHost: No valid host was found. There are not enough hosts available" during overcloud deployment (openstack)
Modify your flavors to add capabilities similar to those of the ironic nodes:
[stack@undercloud-director ~]$ openstack flavor set --property "capabilities:profile"="compute" --property "capabilities:cpu_aes"="true" --property "capabilities:cpu_hugepages"="true" --property "capabilities:boot_option"="local" compute
[stack@undercloud-director ~]$ openstack flavor set --property "capabilities:profile"="ceph-storage" --property "capabilities:cpu_aes"="true" --property "capabilities:cpu_hugepages"="true" --property "capabilities:boot_option"="local" ceph-storage
Check the properties field of the flavors we plan to use with our ironic nodes:
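[stack@undercloud-director ~]$ openstack flavor show compute -c properties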
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| properties | capabilities:boot_option='local', capabilities:cpu_aes='true', capabilities:cpu_hugepages='true', capabilities:profile='compute', |
| | cpu_arch='x86_64' |
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
[stack@undercloud-director ~]$ openstack flavor show control -c properties
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| properties | capabilities:boot_option='local', capabilities:cpu_aes='true', capabilities:cpu_hugepages='true', capabilities:profile='control', |
| | cpu_arch='x86_64' |
+------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
[stack@undercloud-director ~]$ openstack flavor show ceph-storage -c properties
+------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+---------------------------------------------------------------------------------------------------------------------------------------+
| properties | capabilities:boot_option='local', capabilities:cpu_aes='true', capabilities:cpu_hugepages='true', capabilities:profile='ceph-storage' |
+------------+---------------------------------------------------------------------------------------------------------------------------------------+
Tagging nodes into profiles
After registering and inspecting the hardware of each node, you will tag them into specific profiles. These profile tags match your nodes to flavors, and in turn the flavors are assigned to a deployment role. Default profile flavors compute, control, swift-storage, ceph-storage, and block-storage are created during Undercloud installation and are usable without modification in most environments.
Currently there is no profile assigned to any of the nodes
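[stack@undercloud-director ~]$ openstack overcloud profiles list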
+--------------------------------------+------------------------------+-----------------+-----------------+-------------------+
| Node UUID | Node Name | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+------------------------------+-----------------+-----------------+-------------------+
| ece1651a-6adc-4826-9f77-5d47891c6c9b | overcloud-controller.example | available | None | |
| d19a1bce-3792-428e-b242-fab2bab6213d | overcloud-compute.example | available | None | |
| 74d4151f-03b5-4c6a-badc-d2cbf6bba7af | overcloud-ceph.example | available | None | |
+--------------------------------------+------------------------------+-----------------+-----------------+-------------------+
Make sure you modify the "properties/capabilities" field as per your node's properties and add the profile, for example:
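[stack@undercloud-director ~]$ openstack baremetal node set overcloud-controller.example --property capabilities='profile:control,cpu_aes:true,cpu_hugepages:true,boot_option:local'
[stack@undercloud-director ~]$ openstack baremetal node set overcloud-compute.example --property capabilities='profile:compute,cpu_aes:true,cpu_hugepages:true,boot_option:local'
[stack@undercloud-director ~]$ openstack baremetal node set overcloud-ceph.example --property capabilities='profile:ceph-storage,cpu_aes:true,cpu_hugepages:true,boot_option:local'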
+------------------------+-----------------------------------------------------------------------+
| Property | Value |
+------------------------+-----------------------------------------------------------------------+
| chassis_uuid | |
| clean_step | {} |
| console_enabled | False |
| created_at | 2018-08-14T05:45:36+00:00 |
| driver | pxe_ipmitool |
| driver_info | {u'ipmi_port': u'6322', u'ipmi_username': u'admin', u'deploy_kernel': |
| | u'e16bd471-2bab-4bef-9e5f-c2b88def647f', u'ipmi_address': |
| | u'10.43.138.12', u'deploy_ramdisk': u'fdca9579-5a68-42a4-9ebf- |
| | fe6b605e6ae5', u'ipmi_password': u'******'} |
| driver_internal_info | {} |
| extra | {u'hardware_swift_object': u'extra_hardware- |
| | d19a1bce-3792-428e-b242-fab2bab6213d'} |
| inspection_finished_at | None |
| inspection_started_at | None |
| instance_info | {} |
| instance_uuid | None |
| last_error | None |
| maintenance | False |
| maintenance_reason | None |
| name | overcloud-compute.example |
| network_interface | |
| power_state | power off |
| properties | {u'memory_mb': u'10240', u'cpu_arch': u'x86_64', u'local_gb': u'49', |
| | u'cpus': u'4', u'capabilities': |
| | u'profile:compute,cpu_aes:true,cpu_hugepages:true,boot_option:local'} |
| provision_state | available |
| provision_updated_at | 2018-08-14T05:55:10+00:00 |
| raid_config | |
| reservation | None |
| resource_class | |
| target_power_state | None |
| target_provision_state | None |
| target_raid_config | |
| updated_at | 2018-08-14T05:55:16+00:00 |
| uuid | d19a1bce-3792-428e-b242-fab2bab6213d |
+------------------------+-----------------------------------------------------------------------+
+------------------------+-----------------------------------------------------------------------+
| Property | Value |
+------------------------+-----------------------------------------------------------------------+
| chassis_uuid | |
| clean_step | {} |
| console_enabled | False |
| created_at | 2018-08-14T05:45:36+00:00 |
| driver | pxe_ipmitool |
| driver_info | {u'ipmi_port': u'6321', u'ipmi_username': u'admin', u'deploy_kernel': |
| | u'e16bd471-2bab-4bef-9e5f-c2b88def647f', u'ipmi_address': |
| | u'10.43.138.12', u'deploy_ramdisk': u'fdca9579-5a68-42a4-9ebf- |
| | fe6b605e6ae5', u'ipmi_password': u'******'} |
| driver_internal_info | {} |
| extra | {u'hardware_swift_object': u'extra_hardware-ece1651a- |
| | 6adc-4826-9f77-5d47891c6c9b'} |
| inspection_finished_at | None |
| inspection_started_at | None |
| instance_info | {} |
| instance_uuid | None |
| last_error | None |
| maintenance | False |
| maintenance_reason | None |
| name | overcloud-controller.example |
| network_interface | |
| power_state | power off |
| properties | {u'memory_mb': u'10240', u'cpu_arch': u'x86_64', u'local_gb': u'49', |
| | u'cpus': u'4', u'capabilities': |
| | u'profile:control,cpu_aes:true,cpu_hugepages:true,boot_option:local'} |
| provision_state | available |
| provision_updated_at | 2018-08-14T05:52:50+00:00 |
| raid_config | |
| reservation | None |
| resource_class | |
| target_power_state | None |
| target_provision_state | None |
| target_raid_config | |
| updated_at | 2018-08-14T05:52:59+00:00 |
| uuid | ece1651a-6adc-4826-9f77-5d47891c6c9b |
+------------------------+-----------------------------------------------------------------------+
+------------------------+-----------------------------------------------------------------------+
| Property | Value |
+------------------------+-----------------------------------------------------------------------+
| chassis_uuid | |
| clean_step | {} |
| console_enabled | False |
| created_at | 2018-08-14T05:45:36+00:00 |
| driver | pxe_ipmitool |
| driver_info | {u'ipmi_port': u'6320', u'ipmi_username': u'admin', u'deploy_kernel': |
| | u'e16bd471-2bab-4bef-9e5f-c2b88def647f', u'ipmi_address': |
| | u'10.43.138.12', u'deploy_ramdisk': u'fdca9579-5a68-42a4-9ebf- |
| | fe6b605e6ae5', u'ipmi_password': u'******'} |
| driver_internal_info | {} |
| extra | {u'hardware_swift_object': u'extra_hardware-74d4151f-03b5-4c6a-badc- |
| | d2cbf6bba7af'} |
| inspection_finished_at | None |
| inspection_started_at | None |
| instance_info | {} |
| instance_uuid | None |
| last_error | None |
| maintenance | False |
| maintenance_reason | None |
| name | overcloud-ceph.example |
| network_interface | |
| power_state | power off |
| properties | {u'memory_mb': u'10240', u'cpu_arch': u'x86_64', u'local_gb': u'49', |
| | u'cpus': u'4', u'capabilities': u'profile:ceph- |
| | storage,cpu_aes:true,cpu_hugepages:true,boot_option:local'} |
| provision_state | available |
| provision_updated_at | 2018-08-14T05:57:50+00:00 |
| raid_config | |
| reservation | None |
| resource_class | |
| target_power_state | None |
| target_provision_state | None |
| target_raid_config | |
| updated_at | 2018-08-14T05:57:59+00:00 |
| uuid | 74d4151f-03b5-4c6a-badc-d2cbf6bba7af |
+------------------------+-----------------------------------------------------------------------+
Once all the above commands have executed successfully, re-check the profile assignment. Here we see the respective profiles assigned to the nodes:
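[stack@undercloud-director ~]$ openstack overcloud profiles list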
+--------------------------------------+------------------------------+-----------------+-----------------+-------------------+
| Node UUID | Node Name | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+------------------------------+-----------------+-----------------+-------------------+
| ece1651a-6adc-4826-9f77-5d47891c6c9b | overcloud-controller.example | available | control | |
| d19a1bce-3792-428e-b242-fab2bab6213d | overcloud-compute.example | available | compute | |
| 74d4151f-03b5-4c6a-badc-d2cbf6bba7af | overcloud-ceph.example | available | ceph-storage | |
+--------------------------------------+------------------------------+-----------------+-----------------+-------------------+
Heat Templates
The director uses Heat Orchestration Templates (HOT) as the template format for its Overcloud deployment plan. Templates in HOT format are mostly expressed in YAML. The purpose of a template is to define and create a stack, which is a collection of resources that Heat creates, along with the configuration of each resource. Resources are objects in OpenStack and can include compute resources, network configuration, security groups, scaling rules, and custom resources.
The structure of a Heat template has three main sections:
Parameters - These are settings passed to Heat, which provide a way to customize a stack, along with default values for parameters that are not explicitly passed. These are defined in the parameters section of a template.
Resources - These are the specific objects to create and configure as part of a stack. OpenStack contains a set of core resources that span across all components. These are defined in the resources section of a template.
Outputs - These are values passed from Heat after the stack's creation. You can access these values either through the Heat API or client tools. These are defined in the outputs section of a template.
Copy the default templates into the stack user's home directory, for example:
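[stack@undercloud-director ~]$ cp -r /usr/share/openstack-tripleo-heat-templates ~/templates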
[stack@undercloud-director ~]$ cd templates/
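[stack@undercloud-director templates]$ ls -l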
total 160
-rw-r--r--. 1 stack stack 1039 Aug 13 11:15 all-nodes-validation.yaml
-rw-r--r--. 1 stack stack 583 Aug 13 11:15 bootstrap-config.yaml
-rw-r--r--. 1 stack stack 21256 Aug 13 11:15 capabilities-map.yaml
drwxr-xr-x. 5 stack stack 4096 Aug 13 11:15 ci
-rw-r--r--. 1 stack stack 681 Aug 13 11:15 default_passwords.yaml
drwxr-xr-x. 3 stack stack 4096 Aug 13 11:15 deployed-server
drwxr-xr-x. 4 stack stack 4096 Aug 13 11:15 docker
drwxr-xr-x. 4 stack stack 4096 Aug 13 11:15 environments
drwxr-xr-x. 6 stack stack 4096 Aug 13 11:15 extraconfig
drwxr-xr-x. 2 stack stack 4096 Aug 13 11:15 firstboot
-rw-r--r--. 1 stack stack 735 Aug 13 11:15 hosts-config.yaml
-rw-r--r--. 1 stack stack 325 Aug 13 11:15 j2_excludes.yaml
-rw-r--r--. 1 stack stack 2594 Aug 13 11:15 net-config-bond.yaml
-rw-r--r--. 1 stack stack 1895 Aug 13 11:15 net-config-bridge.yaml
-rw-r--r--. 1 stack stack 2298 Aug 13 11:15 net-config-linux-bridge.yaml
-rw-r--r--. 1 stack stack 1244 Aug 13 11:15 net-config-noop.yaml
-rw-r--r--. 1 stack stack 3246 Aug 13 11:15 net-config-static-bridge-with-external-dhcp.yaml
-rw-r--r--. 1 stack stack 2838 Aug 13 11:15 net-config-static-bridge.yaml
-rw-r--r--. 1 stack stack 2545 Aug 13 11:15 net-config-static.yaml
drwxr-xr-x. 5 stack stack 4096 Aug 13 11:15 network
-rw-r--r--. 1 stack stack 26967 Aug 13 11:15 overcloud.j2.yaml
-rw-r--r--. 1 stack stack 14608 Aug 13 11:15 overcloud-resource-registry-puppet.j2.yaml
drwxr-xr-x. 5 stack stack 4096 Aug 13 11:15 puppet
-rw-r--r--. 1 stack stack 6832 Aug 13 11:15 roles_data.yaml
drwxr-xr-x. 2 stack stack 4096 Aug 13 11:15 validation-scripts
The default mapping uses the root disk for Ceph Storage. However, most production environments use multiple separate disks for storage and partitions for journaling. In this situation, you define a storage map as part of the storage-environment.yaml file copied previously.
Edit the storage-environment.yaml file and add an additional section that contains the following:
resource_registry:
  OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml

parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  GnocchiBackend: rbd
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/vdc':
        journal: '/dev/vdb'
This adds extra Hiera data to the Overcloud, which Puppet uses as custom parameters during configuration. Use the ceph::profile::params::osds parameter to map the relevant disks and journal partitions. For example, a Ceph node with three disks might have the following assignments:
/dev/vda - The root disk containing the Overcloud image
/dev/vdb - The disk containing the journal partitions. This is usually a solid state disk (SSD) to aid with system performance.
/dev/vdc - The OSD disk
Overcloud Deployment
After introspection, the undercloud knows which nodes can be used for the deployment of the overcloud, but it does not yet know which overcloud node types are to be deployed; this is specified through the flavor and scale options of the deploy command.
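A likely invocation, based on the options explained further below (the tunnel type value and the storage environment file path are assumptions for this lab), is:
[stack@undercloud-director ~]$ openstack overcloud deploy --templates ~/templates \
    -e ~/templates/environments/storage-environment.yaml \
    --control-scale 1 --compute-scale 1 --ceph-storage-scale 1 \
    --control-flavor control --compute-flavor compute --ceph-storage-flavor ceph-storage \
    --neutron-tunnel-types vxlan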
********output trimmed*********
2018-08-13 14:00:31Z [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully
Stack overcloud CREATE_COMPLETE
Host 192.168.122.151 not found in /home/stack/.ssh/known_hosts
Overcloud Endpoint: http://192.168.122.151:5000/v2.0
Overcloud Deployed
HINT: The entire deployment may take ~45-60 minutes
The list below explains the options used in the above command:
--compute-scale: The number of Compute nodes to scale out
--ceph-storage-scale: The number of Ceph Storage nodes to scale out
--templates [TEMPLATES]: The directory containing the Heat templates to deploy. If blank, the command uses the default template location at /usr/share/openstack-tripleo-heat-templates/
-e [EXTRA HEAT TEMPLATE], --extra-template [EXTRA HEAT TEMPLATE]: Extra environment files to pass to the overcloud deployment. Can be specified more than once. Note that the order of environment files passed to the openstack overcloud deploy command is important.
--neutron-tunnel-types: The tunnel types for the Neutron tenant network. To specify multiple values, use a comma separated string
--compute-flavor: The flavor to use for Compute nodes
--ceph-storage-flavor: The flavor to use for Ceph Storage nodes
--control-flavor: The flavor to use for Controller nodes
To check the status of the overcloud deployment, for example:
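[stack@undercloud-director ~]$ openstack stack list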
To check the status of an individual resource, for example:
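[stack@undercloud-director ~]$ openstack stack resource list overcloud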
To delete the existing overcloud stack and re-deploy, for example:
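[stack@undercloud-director ~]$ openstack stack delete overcloud
The deployed overcloud nodes can be listed from the undercloud with nova list:
[stack@undercloud-director ~]$ nova list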
+--------------------------------------+-------------------------+--------+------------+-------------+--------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+--------------------------+
| 9651eac2-f5de-410f-a5bf-a98772cd6790 | overcloud-cephstorage-0 | ACTIVE | - | Running | ctlplane=192.168.122.153 |
| 7ba43618-c52a-4dbd-b82d-726207011e0e | overcloud-compute-0 | ACTIVE | - | Running | ctlplane=192.168.122.157 |
| 609cb38a-6f14-42f3-b392-cfa1bcee3769 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.168.122.152 |
+--------------------------------------+-------------------------+--------+------------+-------------+--------------------------+
The file below (~/overcloudrc) will be created once the overcloud deployment is complete:
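[stack@undercloud-director ~]$ cat ~/overcloudrc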
# Clear any old environment that may conflict.
for key in $( set | awk '{FS="="} /^OS_/ {print $1}' ); do unset $key ; done
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export NOVA_VERSION=1.1
export OS_PROJECT_NAME=admin
export OS_PASSWORD=fdk8YmUHE9ujb7GXpqDAxsT7g
export OS_NO_CACHE=True
export COMPUTE_API_VERSION=1.1
export no_proxy=,192.168.122.151,192.168.122.151
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://192.168.122.151:5000/v2.0
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
Now you can log in to your nodes from the undercloud, for example:
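[stack@undercloud-director ~]$ ssh heat-admin@192.168.122.157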
The authenticity of host '192.168.122.157 (192.168.122.157)' can't be established.
ECDSA key fingerprint is SHA256:yaMnJm7HTfOrMeOiYT6PR6nm7DO7SQQgX1Bh4bNwaSU.
ECDSA key fingerprint is MD5:b6:f1:9b:f5:ac:93:b7:9d:2e:a9:9c:cb:0c:6c:a7:b7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.122.157' (ECDSA) to the list of known hosts.
[heat-admin@overcloud-compute-0 ~]$ logout
Connection to 192.168.122.157 closed.
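[stack@undercloud-director ~]$ ssh heat-admin@192.168.122.153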
The authenticity of host '192.168.122.153 (192.168.122.153)' can't be established.
ECDSA key fingerprint is SHA256:goMTxDPCmNlYfmdQdZ44nw5iXWyQNtB5dbAnfVwGDqA.
ECDSA key fingerprint is MD5:e4:1d:32:b3:d0:e2:1f:44:ed:6d:0b:80:61:09:8f:4d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.122.153' (ECDSA) to the list of known hosts.
[heat-admin@overcloud-controller-0 ~]$ logout
Connection to 192.168.122.153 closed.
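[stack@undercloud-director ~]$ ssh heat-admin@192.168.122.152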
The authenticity of host '192.168.122.152 (192.168.122.152)' can't be established.
ECDSA key fingerprint is SHA256:EajVihYirMBljOnA7eNPY0lVd1TGpMQRgvmqNUqrrQk.
ECDSA key fingerprint is MD5:d0:4b:40:ab:56:6f:f8:c2:fe:ee:61:0b:10:84:2b:c8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.122.152' (ECDSA) to the list of known hosts.
[heat-admin@overcloud-cephstorage-0 ~]$
I hope the article was useful.
Please let me know your views and feedback in the comment section below.