
Openstack – Configure Telemetry Module – ceilometer – Part 18

This article demonstrates the deployment of the Telemetry module in an OpenStack environment. The Telemetry services are developed under the project name Ceilometer. Ceilometer provides a framework for monitoring, alarming, and metering OpenStack cloud resources. It efficiently polls metering data related to OpenStack services, collects event and metering data by monitoring notifications sent from OpenStack services, and publishes the collected data to various targets, including data stores and message queues. Ceilometer raises an alarm when collected data breaks the defined rules.

All the Telemetry services use the messaging bus to communicate with other OpenStack components.

Telemetry Components:

  • ceilometer-agent-compute
  • ceilometer-agent-central
  • ceilometer-agent-notification
  • ceilometer-collector
  • ceilometer-alarm-evaluator
  • ceilometer-alarm-notifier
  • ceilometer-api

Configure Controller Node for Ceilometer – Prerequisites:

 

1. Login to the OpenStack controller node.

2. Install MongoDB for telemetry services.

root@OSCTRL-UA:~# apt-get install mongodb-server mongodb-clients python-pymongo
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libboost-filesystem1.54.0 libboost-program-options1.54.0
  libgoogle-perftools4 libpcrecpp0 libsnappy1 libtcmalloc-minimal4 libunwind8
  libv8-3.14.5 python-bson python-bson-ext python-gridfs python-pymongo-ext
The following NEW packages will be installed:
  libboost-filesystem1.54.0 libboost-program-options1.54.0
  libgoogle-perftools4 libpcrecpp0 libsnappy1 libtcmalloc-minimal4 libunwind8
  libv8-3.14.5 mongodb-clients mongodb-server python-bson python-bson-ext
  python-gridfs python-pymongo python-pymongo-ext
0 upgraded, 15 newly installed, 0 to remove and 44 not upgraded.
Need to get 14.7 MB of archives.
After this operation, 114 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y

 

3. Edit /etc/mongodb.conf and update the sections below.

Set bind_ip to the controller node's IP address.

bind_ip = 192.168.203.130

Add the following key to reduce the MongoDB journal file size.

smallfiles = true
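
The two edits above can also be applied non-interactively; a minimal sed sketch, demonstrated here against a scratch copy so it can be dry-run safely (point CONF at /etc/mongodb.conf on the real node):

```shell
# Scratch copy standing in for /etc/mongodb.conf; the stock Ubuntu file
# ships with bind_ip = 127.0.0.1, which we repoint at the controller node.
CONF=$(mktemp)
printf 'bind_ip = 127.0.0.1\n' > "$CONF"

# Rewrite bind_ip to the controller node's IP.
sed -i 's/^bind_ip *=.*/bind_ip = 192.168.203.130/' "$CONF"

# Append smallfiles only if absent, so re-running stays idempotent.
grep -q '^smallfiles' "$CONF" || echo 'smallfiles = true' >> "$CONF"

cat "$CONF"
```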

 

4. Stop MongoDB and remove the journal files, if any. Then start MongoDB so the new settings take effect.

root@OSCTRL-UA:~# service mongodb stop
mongodb stop/waiting
root@OSCTRL-UA:~# rm /var/lib/mongodb/journal/prealloc.*
rm: cannot remove ‘/var/lib/mongodb/journal/prealloc.*’: No such file or directory
root@OSCTRL-UA:~# service mongodb start
mongodb start/running, process 36834
root@OSCTRL-UA:~#

 

5. Create the ceilometer database in MongoDB.

root@OSCTRL-UA:~# mongo --host OSCTRL-UA --eval 'db = db.getSiblingDB("ceilometer");db.addUser({user: "ceilometer",pwd: "ceilometerdb123",roles: [ "readWrite", "dbAdmin" ]})'
MongoDB shell version: 2.4.9
connecting to: OSCTRL-UA:27017/test
{
        "user" : "ceilometer",
        "pwd" : "4a434c760e1711668b029ab0a744b61f",
        "roles" : [
                "readWrite",
                "dbAdmin"
        ],
        "_id" : ObjectId("5628e718d34ba80568d83895")
}
root@OSCTRL-UA:~#
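
The username and password chosen here are exactly what goes into the [database] connection string in ceilometer.conf later on; assembling that URI from the same pieces keeps the two steps in sync (a sketch using this article's values):

```shell
DB_USER=ceilometer
DB_PASS=ceilometerdb123      # must match the pwd given to db.addUser above
DB_HOST=OSCTRL-UA

# MongoDB connection URI consumed by ceilometer.conf's [database] section.
DB_URI="mongodb://${DB_USER}:${DB_PASS}@${DB_HOST}:27017/ceilometer"
echo "$DB_URI"
```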

 

6. Source the admin credentials to gain access to the CLI.

root@OSCTRL-UA:~# cat admin.rc
export OS_USERNAME=admin
export OS_PASSWORD=admin123
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://OSCTRL-UA:35357/v2.0
root@OSCTRL-UA:~#

 

7. Create the ceilometer user in Keystone.

root@OSCTRL-UA:~# keystone user-create --name ceilometer --pass ceilometer123
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | a51353508ecf415fb0e7e8170300baf8 |
|   name   |            ceilometer            |
| username |            ceilometer            |
+----------+----------------------------------+
root@OSCTRL-UA:~#

 

8. Add the ceilometer user to the admin role in the service tenant.

root@OSCTRL-UA:~# keystone user-role-add --user ceilometer --tenant service --role admin
root@OSCTRL-UA:~#

 

9. Create the ceilometer service entity.

root@OSCTRL-UA:~# keystone service-create --name ceilometer --type metering --description "Telemetry"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |            Telemetry             |
|   enabled   |               True               |
|      id     | d4371a7560d243bcb48e9db4d49ce7e1 |
|     name    |            ceilometer            |
|     type    |             metering             |
+-------------+----------------------------------+
root@OSCTRL-UA:~#

 

10. Create the Ceilometer API endpoints.

root@OSCTRL-UA:~# keystone endpoint-create --service-id $(keystone service-list | awk '/ metering / {print $2}') --publicurl http://OSCTRL-UA:8777 --internalurl http://OSCTRL-UA:8777 --adminurl http://OSCTRL-UA:8777 --region regionOne
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://OSCTRL-UA:8777       |
|      id     | b4534beb489d45af8af3aa62ede17053 |
| internalurl |      http://OSCTRL-UA:8777       |
|  publicurl  |      http://OSCTRL-UA:8777       |
|    region   |            regionOne             |
|  service_id | d4371a7560d243bcb48e9db4d49ce7e1 |
+-------------+----------------------------------+
root@OSCTRL-UA:~#
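
The `--service-id` argument is filled in by piping `keystone service-list` through awk, which prints column 2 (the id) of the row whose type is metering. The extraction can be sketched against a captured listing (the heredoc below mimics the real table layout):

```shell
# Stand-in for `keystone service-list` output; same column layout.
service_list() {
cat <<'EOF'
+----------------------------------+------------+----------+-------------+
|                id                |    name    |   type   | description |
| d4371a7560d243bcb48e9db4d49ce7e1 | ceilometer | metering |  Telemetry  |
+----------------------------------+------------+----------+-------------+
EOF
}

# awk: match the row containing " metering " and print its second
# whitespace-separated field, which is the service id.
SERVICE_ID=$(service_list | awk '/ metering / {print $2}')
echo "$SERVICE_ID"
```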

Install & Configure Ceilometer:

 

1. Login to the controller node.

2. Install the Ceilometer controller node packages.

root@OSCTRL-UA:~# apt-get install ceilometer-api ceilometer-collector ceilometer-agent-central ceilometer-agent-notification ceilometer-alarm-evaluator ceilometer-alarm-notifier python-ceilometerclient
Reading package lists... Done
Building dependency tree
Reading state information... Done
python-ceilometerclient is already the newest version.
python-ceilometerclient set to manually installed.
The following extra packages will be installed:
  ceilometer-common libsmi2ldbl python-bs4 python-ceilometer python-croniter
  python-dateutil python-happybase python-jsonpath-rw python-kazoo
  python-logutils python-msgpack python-pecan python-ply python-pymemcache
  python-pysnmp4 python-pysnmp4-apps python-pysnmp4-mibs python-singledispatch
  python-thrift python-tooz python-twisted python-twisted-conch
  python-twisted-lore python-twisted-mail python-twisted-names
  python-twisted-news python-twisted-runner python-twisted-web
  python-twisted-words python-waitress python-webtest smitools
Suggested packages:
  mongodb snmp-mibs-downloader python-kazoo-doc python-ply-doc
  python-pysnmp4-doc doc-base python-twisted-runner-dbg python-waitress-doc
  python-webtest-doc python-pyquery
The following NEW packages will be installed:
  ceilometer-agent-central ceilometer-agent-notification
  ceilometer-alarm-evaluator ceilometer-alarm-notifier ceilometer-api
  ceilometer-collector ceilometer-common libsmi2ldbl python-bs4
  python-ceilometer python-croniter python-dateutil python-happybase
  python-jsonpath-rw python-kazoo python-logutils python-msgpack python-pecan
  python-ply python-pymemcache python-pysnmp4 python-pysnmp4-apps
  python-pysnmp4-mibs python-singledispatch python-thrift python-tooz
  python-twisted python-twisted-conch python-twisted-lore python-twisted-mail
  python-twisted-names python-twisted-news python-twisted-runner
  python-twisted-web python-twisted-words python-waitress python-webtest
  smitools
0 upgraded, 38 newly installed, 0 to remove and 44 not upgraded.
Need to get 4,504 kB of archives.
After this operation, 28.4 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y

 

3. Generate a random hex value to use as the Ceilometer metering secret.

root@OSCTRL-UA:~# openssl rand -hex 10
9342b8f01c16142bdeab
root@OSCTRL-UA:~#
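
`openssl rand -hex 10` emits 10 random bytes as 20 hex characters, and every invocation produces a fresh value. Generate the secret once and reuse the identical string on every node that runs a Ceilometer service:

```shell
# Generate the shared metering secret once, on the controller.
METERING_SECRET=$(openssl rand -hex 10)
echo "$METERING_SECRET"      # 20 hex characters; copy this exact value to all nodes
```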

 

4. Edit the /etc/ceilometer/ceilometer.conf file and update the following sections.

In [database] section,

[database]
connection = mongodb://ceilometer:ceilometerdb123@OSCTRL-UA:27017/ceilometer

 

In [DEFAULT] section,

[DEFAULT]
rpc_backend = rabbit
rabbit_host = OSCTRL-UA
rabbit_password = rabbit123
auth_strategy = keystone

 

In the [keystone_authtoken] section,

[keystone_authtoken]
auth_uri = http://OSCTRL-UA:5000/v2.0
identity_uri = http://OSCTRL-UA:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = ceilometer123

 

In [service_credentials] section,

[service_credentials]
os_auth_url = http://OSCTRL-UA:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = ceilometer123

 

In the [publisher] section, set the metering secret to the key generated in the previous step.

[publisher]
metering_secret = 9342b8f01c16142bdeab
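
With several sections touched, it is easy to leave a key in the wrong one. A small section-aware lookup helper makes the result checkable (a sketch; the awk assumes simple `key = value` lines as in ceilometer.conf, demonstrated here on a scratch fragment rather than the live file):

```shell
# ini_get FILE SECTION KEY -> prints the value of KEY inside [SECTION].
ini_get() {
    awk -F' *= *' -v s="[$2]" -v k="$3" '
        $0 == s         { in_s = 1; next }  # entered the wanted section
        /^\[/           { in_s = 0 }        # any other header leaves it
        in_s && $1 == k { print $2 }
    ' "$1"
}

# Scratch fragment carrying a few of the values configured above.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[DEFAULT]
rpc_backend = rabbit
rabbit_host = OSCTRL-UA

[publisher]
metering_secret = 9342b8f01c16142bdeab
EOF

ini_get "$CONF" publisher metering_secret
```

On the node itself, `ini_get /etc/ceilometer/ceilometer.conf publisher metering_secret` should print the secret generated earlier.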

 

5. Restart the Ceilometer services so the new changes take effect.

root@OSCTRL-UA:~# service ceilometer-agent-central restart
ceilometer-agent-central stop/waiting
ceilometer-agent-central start/running, process 38562
root@OSCTRL-UA:~# service ceilometer-agent-notification restart
ceilometer-agent-notification stop/waiting
ceilometer-agent-notification start/running, process 38587
root@OSCTRL-UA:~# service ceilometer-api restart
ceilometer-api stop/waiting
ceilometer-api start/running, process 38607
root@OSCTRL-UA:~# service ceilometer-collector restart
ceilometer-collector stop/waiting
ceilometer-collector start/running, process 38626
root@OSCTRL-UA:~# service ceilometer-alarm-evaluator restart
ceilometer-alarm-evaluator stop/waiting
ceilometer-alarm-evaluator start/running, process 38648
root@OSCTRL-UA:~# service ceilometer-alarm-notifier restart
ceilometer-alarm-notifier stop/waiting
ceilometer-alarm-notifier start/running, process 38667
root@OSCTRL-UA:~#

Configure the Compute service for Telemetry:

 

1. Login to the compute node.

2. Install the telemetry compute service agent packages.

root@OSCMP-UA:~# apt-get install ceilometer-agent-compute
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  ceilometer-common libsmi2ldbl python-bs4 python-ceilometer
  python-ceilometerclient python-concurrent.futures python-croniter
  python-dateutil python-happybase python-ipaddr python-jsonpath-rw
  python-kazoo python-logutils python-msgpack python-pecan python-ply
  python-pymemcache python-pysnmp4 python-pysnmp4-apps python-pysnmp4-mibs
  python-retrying python-simplegeneric python-singledispatch
  python-swiftclient python-thrift python-tooz python-twisted
  python-twisted-conch python-twisted-lore python-twisted-mail
  python-twisted-names python-twisted-news python-twisted-runner
  python-twisted-web python-twisted-words python-waitress python-webtest
  python-wsme smitools
Suggested packages:
  snmp-mibs-downloader python-kazoo-doc python-ply-doc python-pysnmp4-doc
  doc-base python-twisted-runner-dbg python-waitress-doc python-webtest-doc
  python-pyquery
The following NEW packages will be installed:
  ceilometer-agent-compute ceilometer-common libsmi2ldbl python-bs4
  python-ceilometer python-ceilometerclient python-concurrent.futures
  python-croniter python-dateutil python-happybase python-ipaddr
  python-jsonpath-rw python-kazoo python-logutils python-msgpack python-pecan
  python-ply python-pymemcache python-pysnmp4 python-pysnmp4-apps
  python-pysnmp4-mibs python-retrying python-simplegeneric
  python-singledispatch python-swiftclient python-thrift python-tooz
  python-twisted python-twisted-conch python-twisted-lore python-twisted-mail
  python-twisted-names python-twisted-news python-twisted-runner
  python-twisted-web python-twisted-words python-waitress python-webtest
  python-wsme smitools
0 upgraded, 40 newly installed, 0 to remove and 43 not upgraded.
Need to get 4,715 kB of archives.
After this operation, 29.8 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

 

3. Edit the /etc/ceilometer/ceilometer.conf file and update the following sections.

In [DEFAULT] section,

[DEFAULT]
rpc_backend = rabbit
rabbit_host = OSCTRL-UA
rabbit_password = rabbit123
auth_strategy = keystone

 

In the [keystone_authtoken] section,

[keystone_authtoken]
auth_uri = http://OSCTRL-UA:5000/v2.0
identity_uri = http://OSCTRL-UA:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = ceilometer123

 

In [service_credentials] section,

[service_credentials]
os_auth_url = http://OSCTRL-UA:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = ceilometer123
os_endpoint_type = internalURL
os_region_name = regionOne

 

In the [publisher] section, set the same metering secret key generated on the controller node.

[publisher]
metering_secret = 9342b8f01c16142bdeab

 

4. Edit /etc/nova/nova.conf and update the [DEFAULT] section.

[DEFAULT]
...
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2

 

5. Restart the ceilometer compute agent.

root@OSCMP-UA:~# service ceilometer-agent-compute restart
ceilometer-agent-compute stop/waiting
ceilometer-agent-compute start/running, process 43580
root@OSCMP-UA:~#

 

6. Restart the nova-compute service to complete the installation.

root@OSCMP-UA:~# service nova-compute restart
nova-compute stop/waiting
nova-compute start/running, process 43646
root@OSCMP-UA:~#

Configure the Image Service to use Ceilometer:

 

1. Login to the controller node (which also acts as the Glance image server).

2. Edit the /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf files and update the [DEFAULT] sections.

[DEFAULT]
...
notification_driver = messagingv2

 

3. Restart the glance services.

root@OSCTRL-UA:~# service glance-registry restart
glance-registry stop/waiting
glance-registry start/running, process 38886
root@OSCTRL-UA:~# service glance-api restart
glance-api stop/waiting
glance-api start/running, process 38902
root@OSCTRL-UA:~#

Configure the Block Storage service to use Ceilometer:

 

1. Login to the controller node and the storage node.

2. Edit the /etc/cinder/cinder.conf file and update the [DEFAULT] section on both the controller node and the storage node.

[DEFAULT]
...
control_exchange = cinder
notification_driver = messagingv2

 

3. On the controller node, restart the Block Storage services.

root@OSCTRL-UA:~# service cinder-api restart
cinder-api stop/waiting
cinder-api start/running, process 39005
root@OSCTRL-UA:~# service cinder-scheduler restart
cinder-scheduler stop/waiting
cinder-scheduler start/running, process 39026
root@OSCTRL-UA:~#

4. On the storage node, restart the volume service.

root@OSSTG-UA:~# service cinder-volume restart
cinder-volume stop/waiting
cinder-volume start/running, process 32018
root@OSSTG-UA:~#

Configure the Object Storage service to use Ceilometer:

 

1. The Telemetry service requires the ResellerAdmin role to access the Object Storage service. Source the admin credentials and create the ResellerAdmin role.

root@OSCTRL-UA:~# cat admin.rc
export OS_USERNAME=admin
export OS_PASSWORD=admin123
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://OSCTRL-UA:35357/v2.0
root@OSCTRL-UA:~# source admin.rc
root@OSCTRL-UA:~#
root@OSCTRL-UA:~# keystone role-create --name ResellerAdmin
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 443599f36fd84821877a460825144ade |
|   name   |          ResellerAdmin           |
+----------+----------------------------------+
root@OSCTRL-UA:~#

 

2. Add the ResellerAdmin role to the ceilometer user.

root@OSCTRL-UA:~# keystone user-role-add --tenant service --user ceilometer --role 443599f36fd84821877a460825144ade
root@OSCTRL-UA:~#

 

3. To configure notifications, edit the /etc/swift/proxy-server.conf file and update the following sections.

In the [filter:keystoneauth] section, add the ResellerAdmin role,

[filter:keystoneauth]
...
operator_roles = admin,_member_,ResellerAdmin

 

In the [pipeline:main] section,

[pipeline:main]
...
pipeline = authtoken cache healthcheck keystoneauth proxy-logging ceilometer proxy-server

 

Create the [filter:ceilometer] section as shown below to configure the notification filter.

[filter:ceilometer]
use = egg:ceilometer#swift
log_level = WARN
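
Order matters in the pipeline: swift passes each request through the filters left to right, and proxy-server is always last, so the ceilometer filter must appear before it to see the traffic. A quick positional check, sketched against the value configured above:

```shell
PIPELINE="authtoken cache healthcheck keystoneauth proxy-logging ceilometer proxy-server"

# For metering, the pipeline should end "... ceilometer proxy-server".
case "$PIPELINE" in
    *"ceilometer proxy-server") PIPELINE_OK=yes ;;
    *)                          PIPELINE_OK=no  ;;
esac
echo "ceilometer placed correctly: $PIPELINE_OK"
```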

 

4. Add the “swift” user to the ceilometer group.

root@OSCTRL-UA:~# usermod -a -G ceilometer swift
root@OSCTRL-UA:~# id -a swift
uid=118(swift) gid=125(swift) groups=125(swift),4(adm),128(ceilometer)
root@OSCTRL-UA:~#

 

5. Restart the Object Storage proxy service.

root@OSCTRL-UA:~# service swift-proxy restart
swift-proxy stop/waiting
swift-proxy start/running
root@OSCTRL-UA:~#

 

Verify the Telemetry Configuration:

1. Login to the controller node.

2. Source the admin credentials.

root@OSCTRL-UA:~# cat admin.rc
export OS_USERNAME=admin
export OS_PASSWORD=admin123
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://OSCTRL-UA:35357/v2.0
root@OSCTRL-UA:~# source admin.rc
root@OSCTRL-UA:~#

 

3. List the available meters.

root@OSCTRL-UA:~# ceilometer meter-list
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
| Name                            | Type       | Unit      | Resource ID                                                           | User ID                          | Project ID                       |
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
| cpu                             | cumulative | ns        | 863cc74a-49fd-4843-83c8-bc9597cae2ff                                  | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| cpu_util                        | gauge      | %         | 863cc74a-49fd-4843-83c8-bc9597cae2ff                                  | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| disk.device.read.bytes          | cumulative | B         | 863cc74a-49fd-4843-83c8-bc9597cae2ff-vda                              | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| disk.device.read.requests       | cumulative | request   | 863cc74a-49fd-4843-83c8-bc9597cae2ff-vda                              | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| disk.device.write.bytes         | cumulative | B         | 863cc74a-49fd-4843-83c8-bc9597cae2ff-vda                              | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| disk.device.write.requests      | cumulative | request   | 863cc74a-49fd-4843-83c8-bc9597cae2ff-vda                              | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| disk.read.bytes                 | cumulative | B         | 863cc74a-49fd-4843-83c8-bc9597cae2ff                                  | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| disk.read.bytes.rate            | gauge      | B/s       | 863cc74a-49fd-4843-83c8-bc9597cae2ff                                  | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| disk.read.requests              | cumulative | request   | 863cc74a-49fd-4843-83c8-bc9597cae2ff                                  | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| disk.read.requests.rate         | gauge      | request/s | 863cc74a-49fd-4843-83c8-bc9597cae2ff                                  | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| disk.write.bytes                | cumulative | B         | 863cc74a-49fd-4843-83c8-bc9597cae2ff                                  | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| disk.write.bytes.rate           | gauge      | B/s       | 863cc74a-49fd-4843-83c8-bc9597cae2ff                                  | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| disk.write.requests             | cumulative | request   | 863cc74a-49fd-4843-83c8-bc9597cae2ff                                  | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| disk.write.requests.rate        | gauge      | request/s | 863cc74a-49fd-4843-83c8-bc9597cae2ff                                  | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| image                           | gauge      | image     | 7d19b639-6950-42dc-a64d-91c6662e0613                                  | None                             | d14d6a07f862482398b3e3e4e8d581c6 |
| image                           | gauge      | image     | 95fafce7-ae0f-47e3-b1c9-5d2ebd1af885                                  | None                             | d14d6a07f862482398b3e3e4e8d581c6 |
| image.size                      | gauge      | B         | 7d19b639-6950-42dc-a64d-91c6662e0613                                  | None                             | d14d6a07f862482398b3e3e4e8d581c6 |
| image.size                      | gauge      | B         | 95fafce7-ae0f-47e3-b1c9-5d2ebd1af885                                  | None                             | d14d6a07f862482398b3e3e4e8d581c6 |
| instance                        | gauge      | instance  | 863cc74a-49fd-4843-83c8-bc9597cae2ff                                  | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| instance:m1.tiny                | gauge      | instance  | 863cc74a-49fd-4843-83c8-bc9597cae2ff                                  | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| network.incoming.bytes          | cumulative | B         | instance-00000020-863cc74a-49fd-4843-83c8-bc9597cae2ff-tapfe8c5af8-dd | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| network.incoming.bytes.rate     | gauge      | B/s       | instance-00000020-863cc74a-49fd-4843-83c8-bc9597cae2ff-tapfe8c5af8-dd | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| network.incoming.packets        | cumulative | packet    | instance-00000020-863cc74a-49fd-4843-83c8-bc9597cae2ff-tapfe8c5af8-dd | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| network.incoming.packets.rate   | gauge      | packet/s  | instance-00000020-863cc74a-49fd-4843-83c8-bc9597cae2ff-tapfe8c5af8-dd | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| network.outgoing.bytes          | cumulative | B         | instance-00000020-863cc74a-49fd-4843-83c8-bc9597cae2ff-tapfe8c5af8-dd | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| network.outgoing.bytes.rate     | gauge      | B/s       | instance-00000020-863cc74a-49fd-4843-83c8-bc9597cae2ff-tapfe8c5af8-dd | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| network.outgoing.packets        | cumulative | packet    | instance-00000020-863cc74a-49fd-4843-83c8-bc9597cae2ff-tapfe8c5af8-dd | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| network.outgoing.packets.rate   | gauge      | packet/s  | instance-00000020-863cc74a-49fd-4843-83c8-bc9597cae2ff-tapfe8c5af8-dd | 3f01d4f7aa9e477cb885334ab9c5929d | abe3af30f46b446fbae35a102457890c |
| storage.containers.objects      | gauge      | object    | abe3af30f46b446fbae35a102457890c/Lingesh-Container                    | None                             | abe3af30f46b446fbae35a102457890c |
| storage.containers.objects.size | gauge      | B         | abe3af30f46b446fbae35a102457890c/Lingesh-Container                    | None                             | abe3af30f46b446fbae35a102457890c |
| storage.objects                 | gauge      | object    | 332f6865332b45aa9cf0d79aacd1ae3b                                      | None                             | 332f6865332b45aa9cf0d79aacd1ae3b |
| storage.objects                 | gauge      | object    | abe3af30f46b446fbae35a102457890c                                      | None                             | abe3af30f46b446fbae35a102457890c |
| storage.objects                 | gauge      | object    | d14d6a07f862482398b3e3e4e8d581c6                                      | None                             | d14d6a07f862482398b3e3e4e8d581c6 |
| storage.objects.containers      | gauge      | container | 332f6865332b45aa9cf0d79aacd1ae3b                                      | None                             | 332f6865332b45aa9cf0d79aacd1ae3b |
| storage.objects.containers      | gauge      | container | abe3af30f46b446fbae35a102457890c                                      | None                             | abe3af30f46b446fbae35a102457890c |
| storage.objects.containers      | gauge      | container | d14d6a07f862482398b3e3e4e8d581c6                                      | None                             | d14d6a07f862482398b3e3e4e8d581c6 |
| storage.objects.size            | gauge      | B         | 332f6865332b45aa9cf0d79aacd1ae3b                                      | None                             | 332f6865332b45aa9cf0d79aacd1ae3b |
| storage.objects.size            | gauge      | B         | abe3af30f46b446fbae35a102457890c                                      | None                             | abe3af30f46b446fbae35a102457890c |
| storage.objects.size            | gauge      | B         | d14d6a07f862482398b3e3e4e8d581c6                                      | None                             | d14d6a07f862482398b3e3e4e8d581c6 |
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
root@OSCTRL-UA:~#

 

4. Download an OS image for testing purposes.

root@OSCTRL-UA:~# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 7d19b639-6950-42dc-a64d-91c6662e0613 | CirrOS 0.3.0        | qcow2       | bare             | 9761280  | active |
| 95fafce7-ae0f-47e3-b1c9-5d2ebd1af885 | CirrOS-0.3.4-x86_64 | qcow2       | bare             | 13287936 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
root@OSCTRL-UA:~# glance image-download "CirrOS-0.3.4-x86_64" > cirros.img_test
root@OSCTRL-UA:~#

 

5. List the available meters again to confirm that the image download was detected.

root@OSCTRL-UA:~# ceilometer meter-list |grep download
| image.download                  | delta      | B         | 95fafce7-ae0f-47e3-b1c9-5d2ebd1af885                                  | d154aa743ab4405c80055236c47ed98f | d14d6a07f862482398b3e3e4e8d581c6 |
root@OSCTRL-UA:~#

 

6. Retrieve the usage statistics.

root@OSCTRL-UA:~# ceilometer statistics -m image.download
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
| Period | Period Start               | Period End                 | Max        | Min        | Avg        | Sum        | Count | Duration | Duration Start             | Duration End               |
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
| 0      | 2015-10-22T14:46:10.946043 | 2015-10-22T14:46:10.946043 | 13287936.0 | 13287936.0 | 13287936.0 | 13287936.0 | 1     | 0.0      | 2015-10-22T14:46:10.946043 | 2015-10-22T14:46:10.946043 |
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
root@OSCTRL-UA:~#
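
Those figures follow directly from the samples: one image.download delta of 13,287,936 B was recorded (the size of the CirrOS-0.3.4-x86_64 image from step 4), so with Count = 1 the Max, Min, Avg and Sum all collapse to that single value. The aggregation, sketched in awk:

```shell
# Byte counts of the image.download samples; only one download so far.
samples="13287936"

STATS=$(echo "$samples" | awk '{ for (i = 1; i <= NF; i++) { s += $i; n++ } }
                               END { printf "count=%d sum=%d avg=%.0f", n, s, s/n }')
echo "$STATS"
```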

 

The above output confirms that the Telemetry service is working fine. Our OpenStack environment now includes the Telemetry service to measure resource usage.

 

Hope this article is informative to you. Share it! Be Sociable!!!
