The OpenStack Compute service is the heart of IaaS (Infrastructure as a Service). Compute nodes are used to host virtual instances and manage cloud computing systems. The Compute service (nova) interacts with keystone for identity, communicates with glance for server OS images, and works with Horizon to provide the dashboard for user access and administration. OpenStack Compute can scale horizontally on standard (x86) hardware by installing hypervisors (e.g. KVM, Xen, VMware ESXi, Hyper-V). Unlike most other OpenStack services, Compute consists of many modules, APIs, and services.
The Compute service relies on a hypervisor to run virtual machine instances. OpenStack can use various hypervisors, but this guide uses KVM.
Configure the controller node for Compute services:
1. Log in to the OpenStack controller node and install the Compute packages required on the controller.
root@OSCTRL-UA:~# apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libblas3 libgfortran3 libjs-swfobject liblapack3 libquadmath0 nova-common
  novnc python-amqplib python-cliff python-cliff-doc python-cmd2 python-ecdsa
  python-jinja2 python-m2crypto python-neutronclient python-nova python-novnc
  python-numpy python-oslo.rootwrap python-paramiko python-pyasn1
  python-pyparsing python-rfc3986 websockify
Suggested packages:
  python-amqplib-doc python-jinja2-doc gcc gfortran python-dev python-nose
  python-numpy-dbg python-numpy-doc doc-base
The following NEW packages will be installed:
  libblas3 libgfortran3 libjs-swfobject liblapack3 libquadmath0 nova-api
  nova-cert nova-common nova-conductor nova-consoleauth nova-novncproxy
  nova-scheduler novnc python-amqplib python-cliff python-cliff-doc
  python-cmd2 python-ecdsa python-jinja2 python-m2crypto python-neutronclient
  python-nova python-novaclient python-novnc python-numpy python-oslo.rootwrap
  python-paramiko python-pyasn1 python-pyparsing python-rfc3986 websockify
0 upgraded, 31 newly installed, 0 to remove and 17 not upgraded.
Need to get 7,045 kB of archives.
After this operation, 46.0 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://in.archive.ubuntu.com/ubuntu/ trusty-updates/main libquadmath0 amd64 4.8.4-2ubuntu1~14.04 [126 kB]
2. The Compute service stores its state in a database so data can be retrieved quickly. Configure the Compute service with the database credentials by adding the entry below to /etc/nova/nova.conf.
root@OSCTRL-UA:~# tail -4 /etc/nova/nova.conf
[database]
connection = mysql://nova:novadb123@OSCTRL-UA/nova
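If you prefer to script this step, the edit amounts to appending a [database] section to the file. A minimal sketch, run here against a scratch file rather than the live /etc/nova/nova.conf so it is safe to experiment with (the credentials are the ones used in this guide; adjust them to your own):

```shell
# Append a [database] section to a scratch copy of nova.conf.
# On a real controller you would target /etc/nova/nova.conf instead.
conf=$(mktemp)
cat >> "$conf" <<'EOF'
[database]
connection = mysql://nova:novadb123@OSCTRL-UA/nova
EOF
# Confirm the entry landed as expected
grep -A1 '^\[database\]' "$conf"
```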
3. Add the message queue configuration to nova.conf. We are using RabbitMQ as the message queue service.
Note: You need to add the lines below under the [DEFAULT] section in nova.conf.
root@OSCTRL-UA:~# grep -i rabbit -A5 /etc/nova/nova.conf
#Add Rabbit MQ config
rpc_backend = rabbit
rabbit_host = OSCTRL-UA
rabbit_password = rabbit123
4. The configuration below is required for guest VNC access. Use the controller node's IP address as shown.
Note: You need to add the lines below under the [DEFAULT] section in nova.conf.
root@OSCTRL-UA:~# grep VNC -A4 /etc/nova/nova.conf
#VNC
my_ip = 192.168.203.130
vncserver_listen = 192.168.203.130
vncserver_proxyclient_address = 192.168.203.130
root@OSCTRL-UA:~#
5. Remove the default nova SQLite database, since we are using MySQL. SQLite is the default test database created by the Ubuntu packages.
root@OSCTRL-UA:~# rm /var/lib/nova/nova.sqlite
root@OSCTRL-UA:~#
6. Create the nova database and database user in MySQL.
root@OSCTRL-UA:~# mysql -u root -pstack
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 51
Server version: 5.5.44-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'novadb123';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'novadb123';
Query OK, 0 rows affected (0.00 sec)

mysql> exit
Bye
root@OSCTRL-UA:~#
7. Populate the Compute service (nova) tables in MySQL.
root@OSCTRL-UA:~# su -s /bin/sh -c "nova-manage db sync" nova
2015-09-28 04:26:33.366 20105 INFO migrate.versioning.api [-] 215 -> 216...
2015-09-28 04:26:37.482 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.483 20105 INFO migrate.versioning.api [-] 216 -> 217...
2015-09-28 04:26:37.487 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.488 20105 INFO migrate.versioning.api [-] 217 -> 218...
2015-09-28 04:26:37.492 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.493 20105 INFO migrate.versioning.api [-] 218 -> 219...
2015-09-28 04:26:37.497 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.498 20105 INFO migrate.versioning.api [-] 219 -> 220...
2015-09-28 04:26:37.503 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.504 20105 INFO migrate.versioning.api [-] 220 -> 221...
2015-09-28 04:26:37.509 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.510 20105 INFO migrate.versioning.api [-] 221 -> 222...
2015-09-28 04:26:37.515 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.516 20105 INFO migrate.versioning.api [-] 222 -> 223...
2015-09-28 04:26:37.520 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.521 20105 INFO migrate.versioning.api [-] 223 -> 224...
2015-09-28 04:26:37.525 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.526 20105 INFO migrate.versioning.api [-] 224 -> 225...
2015-09-28 04:26:37.531 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.531 20105 INFO migrate.versioning.api [-] 225 -> 226...
2015-09-28 04:26:37.538 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.538 20105 INFO migrate.versioning.api [-] 226 -> 227...
2015-09-28 04:26:37.545 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.546 20105 INFO migrate.versioning.api [-] 227 -> 228...
2015-09-28 04:26:37.575 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.576 20105 INFO migrate.versioning.api [-] 228 -> 229...
2015-09-28 04:26:37.605 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.606 20105 INFO migrate.versioning.api [-] 229 -> 230...
2015-09-28 04:26:37.654 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.654 20105 INFO migrate.versioning.api [-] 230 -> 231...
2015-09-28 04:26:37.702 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.703 20105 INFO migrate.versioning.api [-] 231 -> 232...
2015-09-28 04:26:37.962 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:37.963 20105 INFO migrate.versioning.api [-] 232 -> 233...
2015-09-28 04:26:38.006 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.006 20105 INFO migrate.versioning.api [-] 233 -> 234...
2015-09-28 04:26:38.042 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.043 20105 INFO migrate.versioning.api [-] 234 -> 235...
2015-09-28 04:26:38.048 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.049 20105 INFO migrate.versioning.api [-] 235 -> 236...
2015-09-28 04:26:38.054 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.055 20105 INFO migrate.versioning.api [-] 236 -> 237...
2015-09-28 04:26:38.060 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.060 20105 INFO migrate.versioning.api [-] 237 -> 238...
2015-09-28 04:26:38.067 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.068 20105 INFO migrate.versioning.api [-] 238 -> 239...
2015-09-28 04:26:38.072 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.073 20105 INFO migrate.versioning.api [-] 239 -> 240...
2015-09-28 04:26:38.079 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.080 20105 INFO migrate.versioning.api [-] 240 -> 241...
2015-09-28 04:26:38.084 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.085 20105 INFO migrate.versioning.api [-] 241 -> 242...
2015-09-28 04:26:38.089 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.090 20105 INFO migrate.versioning.api [-] 242 -> 243...
2015-09-28 04:26:38.095 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.096 20105 INFO migrate.versioning.api [-] 243 -> 244...
2015-09-28 04:26:38.110 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.111 20105 INFO migrate.versioning.api [-] 244 -> 245...
2015-09-28 04:26:38.187 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.188 20105 INFO migrate.versioning.api [-] 245 -> 246...
2015-09-28 04:26:38.207 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.208 20105 INFO migrate.versioning.api [-] 246 -> 247...
2015-09-28 04:26:38.259 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.260 20105 INFO migrate.versioning.api [-] 247 -> 248...
2015-09-28 04:26:38.267 20105 INFO 248_add_expire_reservations_index [-] Skipped adding reservations_deleted_expire_idx because an equivalent index already exists.
2015-09-28 04:26:38.272 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.272 20105 INFO migrate.versioning.api [-] 248 -> 249...
2015-09-28 04:26:38.290 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.291 20105 INFO migrate.versioning.api [-] 249 -> 250...
2015-09-28 04:26:38.309 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.309 20105 INFO migrate.versioning.api [-] 250 -> 251...
2015-09-28 04:26:38.338 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.339 20105 INFO migrate.versioning.api [-] 251 -> 252...
2015-09-28 04:26:38.431 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.432 20105 INFO migrate.versioning.api [-] 252 -> 253...
2015-09-28 04:26:38.463 20105 INFO migrate.versioning.api [-] done
2015-09-28 04:26:38.464 20105 INFO migrate.versioning.api [-] 253 -> 254...
2015-09-28 04:26:38.498 20105 INFO migrate.versioning.api [-] done
root@OSCTRL-UA:~#
8. Create the nova user in keystone, which the Compute service uses to authenticate with the Identity service.
root@OSCTRL-UA:~# keystone user-create --name=nova --pass=nova123 --email=nova@unixarena.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |        nova@unixarena.com        |
| enabled  |               True               |
|    id    | 0a8ef9375329415488361b4ea7267443 |
|   name   |               nova               |
| username |               nova               |
+----------+----------------------------------+
root@OSCTRL-UA:~#
9. Grant the admin role to the nova user.
root@OSCTRL-UA:~# keystone user-role-add --user=nova --tenant=service --role=admin
root@OSCTRL-UA:~#
10. Edit nova.conf to add the keystone credentials that we just created.
Note: You need to add "auth_strategy = keystone" under the [DEFAULT] section in nova.conf.
root@OSCTRL-UA:~# grep keystone -A8 /etc/nova/nova.conf
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://OSCTRL-UA:5000
auth_host = OSCTRL-UA
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova123
root@OSCTRL-UA:~#
11. Register the Compute service with the Identity service.
root@OSCTRL-UA:~# keystone service-create --name=nova --type=compute --description="OpenStack Compute"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Compute         |
|   enabled   |               True               |
|      id     | 083b455a487647bbaa05a4a53b3a338f |
|     name    |               nova               |
|     type    |             compute              |
+-------------+----------------------------------+
root@OSCTRL-UA:~#
12. Create the service endpoint for nova.
root@OSCTRL-UA:~# keystone endpoint-create --service-id=$(keystone service-list | awk '/ compute / {print $2}') --publicurl=http://OSCTRL-UA:8774/v2/%\(tenant_id\)s --internalurl=http://OSCTRL-UA:8774/v2/%\(tenant_id\)s --adminurl=http://OSCTRL-UA:8774/v2/%\(tenant_id\)s
+-------------+----------------------------------------+
|   Property  |                 Value                  |
+-------------+----------------------------------------+
|   adminurl  | http://OSCTRL-UA:8774/v2/%(tenant_id)s |
|      id     |    4e2f418ef1eb4083a655e0a4eb60b736    |
| internalurl | http://OSCTRL-UA:8774/v2/%(tenant_id)s |
|  publicurl  | http://OSCTRL-UA:8774/v2/%(tenant_id)s |
|    region   |               regionOne                |
|  service_id |    083b455a487647bbaa05a4a53b3a338f    |
+-------------+----------------------------------------+
root@OSCTRL-UA:~#
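The backslashes in %\(tenant_id\)s only stop the shell from interpreting the parentheses; keystone stores the literal placeholder %(tenant_id)s, which nova substitutes with the tenant ID at request time. A quick sketch of what the shell actually passes through:

```shell
# The shell strips the backslashes, leaving the literal
# placeholder string that keystone stores in its catalog.
url=http://OSCTRL-UA:8774/v2/%\(tenant_id\)s
echo "$url"
```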
13. Restart the services.
root@OSCTRL-UA:~# service nova-api restart; service nova-cert restart; service nova-consoleauth restart; service nova-scheduler restart; service nova-conductor restart; service nova-novncproxy restart
nova-api stop/waiting
nova-api start/running, process 20313
nova-cert stop/waiting
nova-cert start/running, process 20330
nova-consoleauth stop/waiting
nova-consoleauth start/running, process 20347
nova-scheduler stop/waiting
nova-scheduler start/running, process 20366
nova-conductor stop/waiting
nova-conductor start/running, process 20385
nova-novncproxy stop/waiting
nova-novncproxy start/running, process 20400
root@OSCTRL-UA:~#
Verify the service status:
root@OSCTRL-UA:~# service nova-api status; service nova-cert status; service nova-consoleauth status; service nova-scheduler status; service nova-conductor status; service nova-novncproxy status
nova-api start/running, process 20313
nova-cert start/running, process 20330
nova-consoleauth start/running, process 20347
nova-scheduler start/running, process 20366
nova-conductor start/running, process 20385
nova-novncproxy start/running, process 20400
root@OSCTRL-UA:~#
14. You can verify the nova configuration by listing the glance images through the nova client.
root@OSCTRL-UA:~# nova image-list
+--------------------------------------+--------------+--------+--------+
| ID                                   | Name         | Status | Server |
+--------------------------------------+--------------+--------+--------+
| 7d19b639-6950-42dc-a64d-91c6662e0613 | CirrOS 0.3.0 | ACTIVE |        |
+--------------------------------------+--------------+--------+--------+
root@OSCTRL-UA:~#
We have successfully configured the Compute service on the controller node.
Configure a compute node:
1. Log in to the compute node and install the Compute packages.
root@OSCMP-UA:~# apt-get install nova-compute-kvm python-guestfs
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  alembic binutils btrfs-tools cryptsetup cryptsetup-bin fontconfig-config
  fonts-dejavu-core genisoimage ghostscript gsfonts icoutils ieee-data
  jfsutils kpartx ldmtool libauthen-sasl-perl libcryptsetup4 libcups2
2. Make the kernel image readable by unprivileged processes such as qemu and libguestfs. By default, it is set to 0600 for security reasons.
root@OSCMP-UA:~# dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)
To keep future kernel updates readable as well, create the hook script below and make it executable.
root@OSCMP-UA:~# cat /etc/kernel/postinst.d/statoverride
#!/bin/sh
version="$1"
# passing the kernel version is required
[ -z "${version}" ] && exit 0
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}
root@OSCMP-UA:~#
root@OSCMP-UA:~# chmod +x /etc/kernel/postinst.d/statoverride
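The hook receives the new kernel version as its first argument, and the guard simply bails out when no version is passed. Its behaviour can be sketched in isolation; here the dpkg-statoverride call is replaced by an echo so it is harmless to run anywhere, and the version string is made up for illustration:

```shell
# Stand-in for the postinst hook: echo the command it would run
# instead of executing dpkg-statoverride.
statoverride_hook() {
    version="$1"
    # passing the kernel version is required
    [ -z "${version}" ] && return 0
    echo "dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}"
}
statoverride_hook ""                    # no version: prints nothing
statoverride_hook "3.13.0-32-generic"   # hypothetical kernel version
```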
3. Edit the /etc/nova/nova.conf file as shown below.
Add the following (the first block goes under the existing [DEFAULT] section):
#Glance Node
glance_host = OSCTRL-UA

#VNC
my_ip = 192.168.203.131
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.203.131
novncproxy_base_url = http://OSCTRL-UA:6080/vnc_auto.html

#Provide the MQ details
rpc_backend = rabbit
rabbit_host = OSCTRL-UA
rabbit_password = rabbit123

#Nova service to use keystone as auth service
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://OSCTRL-UA:5000
auth_host = OSCTRL-UA
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova123

#Provide the DB location
[database]
connection = mysql://nova:novadb123@OSCTRL-UA/nova
4. Run the command below to check whether your compute node supports hardware virtualization (KVM). If the value returned is greater than zero, the node supports KVM and no configuration change is required.
root@OSCMP-UA:~# egrep -c '(vmx|svm)' /proc/cpuinfo
0
root@OSCMP-UA:~#
As per the output above, our compute node does not support KVM, so we will use QEMU.
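The same decision can be scripted: count the hardware virtualization flags (vmx for Intel VT-x, svm for AMD-V) and pick the virt_type accordingly. A sketch against a sample flags line; on a real node you would read /proc/cpuinfo instead:

```shell
# Sample cpuinfo line standing in for /proc/cpuinfo (this one has
# no vmx/svm flag, matching the compute node in this guide).
cpuinfo="flags : fpu vme de pse tsc msr pae mce cx8 apic"

if printf '%s\n' "$cpuinfo" | grep -E -q '(vmx|svm)'; then
    virt_type=kvm
else
    virt_type=qemu
fi
echo "virt_type=$virt_type"
```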
Set "virt_type = qemu" in the [libvirt] section of /etc/nova/nova-compute.conf, as shown below.
root@OSCMP-UA:~# cat /etc/nova/nova-compute.conf
[DEFAULT]
compute_driver=libvirt.LibvirtDriver
[libvirt]
virt_type=kvm
root@OSCMP-UA:~# vi /etc/nova/nova-compute.conf
root@OSCMP-UA:~# cat /etc/nova/nova-compute.conf
[DEFAULT]
compute_driver=libvirt.LibvirtDriver
[libvirt]
virt_type=qemu
root@OSCMP-UA:~#
5. Remove the default SQLite test database on the compute node.
root@OSCMP-UA:~# rm /var/lib/nova/nova.sqlite
root@OSCMP-UA:~#
6. Restart the compute service.
root@OSCMP-UA:~# service nova-compute restart
nova-compute stop/waiting
nova-compute start/running, process 14735
root@OSCMP-UA:~# tail -f /var/log/nova/nova-compute.log
2015-09-28 05:15:28.996 14696 TRACE oslo.messaging._drivers.impl_rabbit
2015-09-28 05:15:29.006 14696 INFO oslo.messaging._drivers.impl_rabbit [req-37a1f7ee-29fb-4f12-9954-0626f71105d3 ] Delaying reconnect for 1.0 seconds...
2015-09-28 05:15:30.011 14696 INFO oslo.messaging._drivers.impl_rabbit [req-37a1f7ee-29fb-4f12-9954-0626f71105d3 ] Connecting to AMQP server on OSCTRL-UA:5672
2015-09-28 05:15:30.037 14696 INFO oslo.messaging._drivers.impl_rabbit [req-37a1f7ee-29fb-4f12-9954-0626f71105d3 ] Connected to AMQP server on OSCTRL-UA:5672
2015-09-28 05:15:54.794 14735 INFO nova.virt.driver [-] Loading compute driver 'libvirt.LibvirtDriver'
2015-09-28 05:15:54.799 14735 INFO nova.openstack.common.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
2015-09-28 05:15:54.823 14735 INFO oslo.messaging._drivers.impl_rabbit [req-521a020c-9a70-4382-8f31-9d2b54f2830f ] Connecting to AMQP server on OSCTRL-UA:5672
2015-09-28 05:15:54.834 14735 INFO oslo.messaging._drivers.impl_rabbit [req-521a020c-9a70-4382-8f31-9d2b54f2830f ] Connected to AMQP server on OSCTRL-UA:5672
2015-09-28 05:15:54.838 14735 INFO oslo.messaging._drivers.impl_rabbit [req-521a020c-9a70-4382-8f31-9d2b54f2830f ] Connecting to AMQP server on OSCTRL-UA:5672
2015-09-28 05:15:54.846 14735 INFO oslo.messaging._drivers.impl_rabbit [req-521a020c-9a70-4382-8f31-9d2b54f2830f ] Connected to AMQP server on OSCTRL-UA:5672
Verify the compute configuration:
1. Log in to the controller node and check the nova services.
root@OSCTRL-UA:~# nova service-list
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host      | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | OSCTRL-UA | internal | enabled | up    | 2015-09-28T01:33:15.000000 | -               |
| 2  | nova-consoleauth | OSCTRL-UA | internal | enabled | up    | 2015-09-28T01:33:16.000000 | -               |
| 3  | nova-scheduler   | OSCTRL-UA | internal | enabled | up    | 2015-09-28T01:33:16.000000 | -               |
| 4  | nova-conductor   | OSCTRL-UA | internal | enabled | up    | 2015-09-28T01:33:16.000000 | -               |
| 5  | nova-compute     | OSCMP-UA  | nova     | enabled | up    | 2015-09-28T01:33:11.000000 | -               |
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
root@OSCTRL-UA:~#
We have successfully configured the compute service.
If you get output like the one below (State = down), there is probably a misconfiguration in nova.conf on the controller node or compute node. Re-validate the [DEFAULT] section entries.
root@OSCTRL-UA:~# nova service-list
+----+------------------+-----------+----------+---------+-------+------------+-----------------+
| Id | Binary           | Host      | Zone     | Status  | State | Updated_at | Disabled Reason |
+----+------------------+-----------+----------+---------+-------+------------+-----------------+
| 1  | nova-cert        | OSCTRL-UA | internal | enabled | down  | -          | -               |
| 2  | nova-consoleauth | OSCTRL-UA | internal | enabled | down  | -          | -               |
| 3  | nova-scheduler   | OSCTRL-UA | internal | enabled | down  | -          | -               |
| 4  | nova-conductor   | OSCTRL-UA | internal | enabled | down  | -          | -               |
+----+------------------+-----------+----------+---------+-------+------------+-----------------+
root@OSCTRL-UA:~#
On the compute node, you will see messages like the ones below:
# tail -f /var/log/nova/nova-compute.log
2015-09-28 05:15:28.993 14696 WARNING nova.conductor.api [req-37a1f7ee-29fb-4f12-9954-0626f71105d3 None] Timed out waiting for nova-conductor. Is it running? Or did this service start before nova-conductor? Reattempting establishment of nova-conductor connection...
2015-09-28 05:15:28.996 14696 ERROR oslo.messaging._drivers.impl_rabbit [req-37a1f7ee-29fb-4f12-9954-0626f71105d3 ] Failed to publish message to topic 'conductor': [Errno 104] Connection reset by peer
A working nova.conf should look like the one below ([DEFAULT], [database], and [keystone_authtoken] sections).
root@OSCMP-UA:~# egrep -i "default|database|keystone_" -A35 /etc/nova/nova.conf
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
enabled_apis=ec2,osapi_compute,metadata

#Glance Node
glance_host = OSCTRL-UA

#VNC
my_ip = 192.168.203.131
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.203.131
novncproxy_base_url = http://OSCTRL-UA:6080/vnc_auto.html

#Nova service to use keystone as auth service
auth_strategy = keystone

#Provide the MQ details
rpc_backend = rabbit
rabbit_host = OSCTRL-UA
rabbit_password = rabbit123

#Provide the DB location
[database]
connection = mysql://nova:novadb123@OSCTRL-UA/nova

[keystone_authtoken]
auth_uri = http://OSCTRL-UA:5000
auth_host = OSCTRL-UA
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova123
root@OSCMP-UA:~#
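Since a single missing key is often enough to leave a service in the down state, a small check script can save some squinting. A sketch, shown here against an inline sample config so it runs anywhere; on a real node, point conf at /etc/nova/nova.conf instead (the key list reflects the settings used in this guide):

```shell
# Build a sample config to check against; on a real node, set
# conf=/etc/nova/nova.conf instead of the temp file below.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[DEFAULT]
rpc_backend = rabbit
rabbit_host = OSCTRL-UA
auth_strategy = keystone
my_ip = 192.168.203.131
[database]
connection = mysql://nova:novadb123@OSCTRL-UA/nova
EOF

# Report any key this guide sets that is absent from the file.
missing=0
for key in rpc_backend rabbit_host auth_strategy my_ip connection; do
    grep -q "^${key}[ =]" "$conf" || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all required keys present"
```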
Hope this article is informative to you.