Thursday 7 August 2014

OpenStack Icehouse Single Node Deployment

Preparing your node

  1. Preparing Ubuntu
After installing Ubuntu 12.04 Server 64-bit, switch to root and stay there until the end of this guide:
sudo -i
1. Add Icehouse repositories:
apt-get install -y python-software-properties
add-apt-repository cloud-archive:icehouse
2. Update your system:
apt-get -y update && apt-get -y upgrade && apt-get -y dist-upgrade

You may need to reboot your system if the upgrade installed a new kernel.
Networking

Only one NIC should have internet access; the other is used for OpenStack management traffic. Configure /etc/network/interfaces as follows:

#For Exposing OpenStack API over the internet
auto eth1
iface eth1 inet static
address 10.43.1.55
netmask 255.255.255.0
gateway 10.43.1.1
dns-nameservers 172.20.25.111

#Not internet connected(used for OpenStack management)
auto eth0
iface eth0 inet static
address 192.168.10.55
netmask 255.255.255.0

Restart the networking service:

service networking restart




  2. MySQL & RabbitMQ & Others
  1. Install MySQL:

apt-get install -y mysql-server python-mysqldb

Configure MySQL to listen on all interfaces:

sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
service mysql restart

2. Install RabbitMQ:

apt-get install -y rabbitmq-server
Change the password of the guest user for rabbitmq-server:
rabbitmqctl change_password guest root123
3. Install NTP service:

apt-get install -y ntp
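To sanity-check the MySQL step above, you can confirm that MySQL is now listening on all interfaces rather than only on localhost (a quick check, assuming net-tools is installed):

```shell
# MySQL should be bound to 0.0.0.0:3306 after the bind-address change
netstat -lntp | grep 3306
# A TCP login against the management address should also work (enter the
# MySQL root password at the prompt)
mysql -u root -p -h 192.168.10.55
```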

Databases set up

Setting up Databases:

Either use the script:

wget https://raw2.github.com/Ch00k/OpenStack-Havana-Install-Guide/master/populate_database.sh
sh populate_database.sh

Or execute all of the following manually:

mysql -u root -p <your_mysql_root_password>

# Keystone
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'openstacktest';
GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'openstacktest';
GRANT ALL ON keystone.* TO 'keystone'@'192.168.10.55' IDENTIFIED BY 'openstacktest';
GRANT ALL ON keystone.* TO 'keystone'@'10.43.1.55' IDENTIFIED BY 'openstacktest';
FLUSH PRIVILEGES;
quit;
(test database access and show databases with user keystone)
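One way to run the suggested test (a sketch, reusing the password from the GRANT statements above):

```shell
# Connect as the keystone user and list the databases it can see;
# "keystone" should appear in the output
mysql -u keystone -popenstacktest -h 192.168.10.55 -e 'SHOW DATABASES;'
```

The same check works for the glance, neutron, nova and cinder users below.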

# Glance
mysql -u root -p your_mysql_root_password
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'openstacktest';
GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'openstacktest';
GRANT ALL ON glance.* TO 'glance'@'192.168.10.55' IDENTIFIED BY 'openstacktest';
GRANT ALL ON glance.* TO 'glance'@'10.43.1.55' IDENTIFIED BY 'openstacktest';
FLUSH PRIVILEGES;
quit;
(test database access and show databases with user glance)

# Neutron
mysql -u root -p your_mysql_root_password
CREATE DATABASE neutron;
GRANT ALL ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'openstacktest';
GRANT ALL ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'openstacktest';
GRANT ALL ON neutron.* TO 'neutron'@'192.168.10.55' IDENTIFIED BY 'openstacktest';
GRANT ALL ON neutron.* TO 'neutron'@'10.43.1.55' IDENTIFIED BY 'openstacktest';
FLUSH PRIVILEGES;
quit;
(test database access and show databases with user neutron)

# Nova
mysql -u root -p your_mysql_root_password
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'openstacktest';
GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'openstacktest';
GRANT ALL ON nova.* TO 'nova'@'192.168.10.55' IDENTIFIED BY 'openstacktest';
GRANT ALL ON nova.* TO 'nova'@'10.43.1.55' IDENTIFIED BY 'openstacktest';
FLUSH PRIVILEGES;
quit;
(test database access and show databases with user nova)

# Cinder
mysql -u root -p your_mysql_root_password
CREATE DATABASE cinder;
GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'openstacktest';
GRANT ALL ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'openstacktest';
GRANT ALL ON cinder.* TO 'cinder'@'192.168.10.55' IDENTIFIED BY 'openstacktest';
GRANT ALL ON cinder.* TO 'cinder'@'10.43.1.55' IDENTIFIED BY 'openstacktest';
FLUSH PRIVILEGES;
quit;
(test database access and show databases with user cinder)

Others

Install other services:

apt-get install -y vlan bridge-utils

Enable IP_Forwarding:

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf

# To save you from rebooting, perform the following
sysctl net.ipv4.ip_forward=1
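To confirm forwarding is active now and will survive a reboot:

```shell
# Runtime value; after the steps above it should print: net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward
# Persistent value; the line should now be uncommented
grep '^net.ipv4.ip_forward' /etc/sysctl.conf
```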

    3. Keystone

  1. Start by the keystone packages:
    apt-get install -y keystone
  2. Verify your keystone is running:
    service keystone status
  3. Adapt the connection attribute in the /etc/keystone/keystone.conf to the new database:
    connection = mysql://keystone:openstacktest@192.168.10.55/keystone
  4. Remove Keystone SQLite database:
    rm /var/lib/keystone/keystone.db
  5. Restart the identity service then synchronize the database:
    service keystone restart
    keystone-manage db_sync
  6. Fill up the keystone database using the two scripts below:
    #Modify the HOST_IP and EXT_HOST_IP variables before executing the scripts
    wget https://raw2.github.com/Ch00k/OpenStack-Havana-Install-Guide/master/keystone_basic.sh
    wget https://raw2.github.com/Ch00k/OpenStack-Havana-Install-Guide/master/keystone_endpoints_basic.sh
  7. sh keystone_basic.sh
  8. sh keystone_endpoints_basic.sh
  9. Create a simple credential file and load it so you won't be bothered later:

nano keystone_source   # or use vi

#Paste the following:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstacktest
export OS_AUTH_URL="http://10.43.1.55:5000/v2.0/"

# Load it:
source keystone_source

To test Keystone, just use a simple CLI command:

keystone user-list

4. Glance

  1. apt-get install -y glance
  2. Verify your glance services are running:
    service glance-api status
    service glance-registry status
  3. Update the /etc/glance/glance-api-paste.ini and /etc/glance/glance-registry-paste.ini with:
    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    auth_host = 192.168.10.55
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = glance
    admin_password = openstacktest
  4. Update /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf with:
    [DEFAULT]
    sql_connection = mysql://glance:openstacktest@192.168.10.55/glance
    [keystone_authtoken]
    auth_host = 192.168.10.55
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = glance
    admin_password = openstacktest
    [paste_deploy]
    flavor = keystone
  5. Remove Glance's SQLite database:
    rm /var/lib/glance/glance.sqlite
  6. Restart the glance-api and glance-registry services:
    service glance-api restart; service glance-registry restart
  7. Synchronize the glance database:
    glance-manage db_sync
  8. If db_sync fails with the following error:
    root@openstack-sdn:~# glance-manage db_sync
    2014-08-07 16:08:42.580 8119 CRITICAL glance [-] ValueError: Tables "migrate_version" have non utf8 collation, please make sure all tables are CHARSET=utf8
  9. Fix the table collation and run db_sync again:
    mysql> use glance
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A
    Database changed
    mysql> alter table migrate_version convert to character set utf8 collate utf8_unicode_ci;
    Query OK, 1 row affected (0.26 sec)
    Records: 1 Duplicates: 0 Warnings: 0
    mysql> flush privileges;
    Query OK, 0 rows affected (0.00 sec)
    mysql> quit;
  10. Restart the services again to take into account the new modifications:
    service glance-registry restart; service glance-api restart
  11. To test Glance, upload the CirrOS cloud image:
    glance image-create --name myFirstImage --is-public true --container-format bare --disk-format qcow2 --location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
  12. Now list the image to see what you have just uploaded:
    glance image-list

5. Neutron
  1. OpenVSwitch
    Install the Open vSwitch packages:
    apt-get install -y openvswitch-controller openvswitch-switch openvswitch-datapath-dkms
  2. Restart Open vSwitch:
    service openvswitch-switch restart
  3. Create the bridges:
    #br-int will be used for VM integration
    ovs-vsctl add-br br-int
    #br-ex is used to give VMs access to the internet
    ovs-vsctl add-br br-ex
  4. OpenVSwitch (Part2, modify network parameters)
    This step sets up the br-ex interface. Edit the eth1 stanza in /etc/network/interfaces to look like this:
    # VM internet Access
    auto eth1
    iface eth1 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
    down ifconfig $IFACE down
  5. Add the eth1 to the br-ex:
    #Internet connectivity will be lost after this step but this won't affect OpenStack's work
    ovs-vsctl add-port br-ex eth1
    If you want to get the internet connection back, assign eth1's IP address to br-ex in the /etc/network/interfaces file:
    auto br-ex
    iface br-ex inet static
    address 10.43.1.55
    netmask 255.255.255.0
    gateway 10.43.1.1
    dns-nameservers 10.43.1.1
  6. If you want your full networking features back immediately, I suggest:
    reboot
  7. source keystone_source (to get your environment variables back)
  8. Neutron-*
  9. Install the Neutron components:
    apt-get install -y neutron-server neutron-plugin-openvswitch neutron-plugin-openvswitch-agent dnsmasq neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent
  10. Verify all Neutron components are running:
    cd /etc/init.d/; for i in $( ls neutron-* ); do sudo service $i status; cd; done
  11. Edit /etc/neutron/api-paste.ini
    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    auth_host = 192.168.10.55
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = neutron
    admin_password = openstacktest
  12. Edit the OVS plugin configuration file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini with:
    #Under the database section
    [DATABASE]
    sql_connection=mysql://neutron:openstacktest@192.168.10.55/neutron
    #Under the OVS section
    [OVS]
    tenant_network_type = gre
    enable_tunneling = True
    tunnel_id_ranges = 1:1000
    integration_bridge = br-int
    tunnel_bridge = br-tun
    local_ip = 192.168.10.55
    #Firewall driver for realizing neutron security group function
    [SECURITYGROUP]
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
  13. Update /etc/neutron/metadata_agent.ini:
    # The Neutron user information for accessing the Neutron API.
    auth_url = http://192.168.10.55:35357/v2.0
    auth_region = RegionOne
    admin_tenant_name = service
    admin_user = neutron
    admin_password = openstacktest
    # IP address used by Nova metadata server
    nova_metadata_ip = 192.168.10.55
    # TCP Port used by Nova metadata server
    nova_metadata_port = 8775
    metadata_proxy_shared_secret = helloOpenStack
  14. Edit your /etc/neutron/neutron.conf:
    #RabbitMQ IP
    rabbit_host = 192.168.10.55
    rabbit_password = root123
    [keystone_authtoken]
    auth_host = 192.168.10.55
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = neutron
    admin_password = openstacktest
    signing_dir = /var/lib/neutron/keystone-signing
    [DATABASE]
    connection = mysql://neutron:openstacktest@192.168.10.55/neutron
    #Set the nova notification options (placeholders left as-is):
    openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
    openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
    openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller:8774/v2
    openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_username nova
    openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_tenant_id SERVICE_TENANT_ID
    openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_password NOVA_PASS
    openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_auth_url http://controller:35357/v2.0
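SERVICE_TENANT_ID and NOVA_PASS above are placeholders. With the admin credentials loaded from keystone_source, the service tenant's id can be looked up like this (a sketch, assuming the tenant was created by keystone_basic.sh under the name "service"):

```shell
# Print only the id column of the tenant-list row whose name is "service"
keystone tenant-list | awk '/ service / { print $2 }'
```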

  15. Edit your /etc/neutron/l3_agent.ini:
    [DEFAULT]
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    use_namespaces = True
    external_network_bridge = br-ex
    signing_dir = /var/cache/neutron
    admin_tenant_name = service
    admin_user = neutron
    admin_password = openstacktest
    auth_url = http://192.168.10.55:35357/v2.0
    l3_agent_manager = neutron.agent.l3_agent.L3NATAgentWithStateReport
    root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
  16. Edit your /etc/neutron/dhcp_agent.ini:
    [DEFAULT]
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    use_namespaces = True
    signing_dir = /var/cache/neutron
    admin_tenant_name = service
    admin_user = neutron
    admin_password = openstacktest
    auth_url = http://192.168.10.55:35357/v2.0
    dhcp_agent_manager = neutron.agent.dhcp_agent.DhcpAgentWithStateReport
    root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
    state_path = /var/lib/neutron
  17. Remove Neutron's SQLite database:
    rm /var/lib/neutron/neutron.sqlite
  18. Restart all neutron services:
    cd /etc/init.d/; for i in $( ls neutron-* ); do sudo service $i restart; cd /root/; done
    service dnsmasq restart
    and check status:
    cd /etc/init.d/; for i in $( ls neutron-* ); do sudo service $i status; cd /root/; done
    service dnsmasq status
    then check all neutron agents:
    neutron agent-list
    (hopefully you'll enjoy smiling faces :-) )

6. Nova
  1. KVM

Make sure your hardware supports virtualization:

apt-get install -y cpu-checker
kvm-ok

Finally you should get:

INFO: /dev/kvm exists
KVM acceleration can be used
  1. apt-get install -y kvm libvirt-bin pm-utils
  2. Edit the cgroup_device_acl array in the /etc/libvirt/qemu.conf file to:

cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet","/dev/net/tun"
]

  3. Delete the default virtual bridge:

virsh net-destroy default
virsh net-undefine default

  4. Enable live migration by updating the /etc/libvirt/libvirtd.conf file:

listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

  5. Edit the libvirtd_opts variable in the /etc/init/libvirt-bin.conf file:

env libvirtd_opts="-d -l"

  6. Edit the /etc/default/libvirt-bin file:

libvirtd_opts="-d -l"

  7. Restart the libvirt service and dbus to load the new values:
service dbus restart && service libvirt-bin restart
then check status:
service dbus status && service libvirt-bin status
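With listen_tcp enabled, libvirtd should now accept TCP connections; you can check that it is listening on its default unencrypted port, 16509 (assuming net-tools is installed):

```shell
# libvirtd should show up bound to 0.0.0.0:16509
netstat -lntp | grep 16509
```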

  2. Nova-*

Start by installing nova components:

  1. apt-get install -y nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy nova-doc nova-conductor nova-compute-kvm

  2. Check the status of all nova-* services:

cd /etc/init.d/; for i in $( ls nova-* ); do service $i status; cd; done

  3. Now modify the authtoken section in the /etc/nova/api-paste.ini file to this:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.168.10.55
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = openstacktest
signing_dirname = /tmp/keystone-signing-nova
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0

  4. Modify the /etc/nova/nova.conf like this:

[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
rabbit_host=192.168.10.55
nova_url=http://192.168.10.55:8774/v1.1/
sql_connection=mysql://nova:openstacktest@192.168.10.55/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

# Auth
use_deprecated_auth=false
auth_strategy=keystone

# Imaging service
glance_api_servers=192.168.10.55:9292
image_service=nova.image.glance.GlanceImageService

# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://10.43.1.55:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=192.168.10.55
vncserver_listen=0.0.0.0

# Network settings
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://192.168.10.55:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=openstacktest
neutron_admin_auth_url=http://192.168.10.55:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
#If you want Neutron + Nova Security groups
#firewall_driver=nova.virt.firewall.NoopFirewallDriver
#security_group_api=neutron
#If you want Nova Security groups only, comment the two lines above and uncomment line -1-.
#-1-firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

#Metadata
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = helloOpenStack
metadata_host = 192.168.10.55
metadata_listen = 0.0.0.0
metadata_listen_port = 8775

# Compute #
compute_driver=libvirt.LibvirtDriver

# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900
cinder_catalog_info=volume:cinder:internalURL

  5. Edit the /etc/nova/nova-compute.conf:

[DEFAULT]
libvirt_type=kvm
libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True

  6. Restart nova-* services:
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; cd /root/;done
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i status; cd /root/;done
  7. Remove Nova's SQLite database:
rm /var/lib/nova/nova.sqlite

  8. Synchronize your database:
nova-manage db sync
  9. Restart nova-* services:
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; cd /root/;done
...and check:
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i status; cd /root/;done
Hopefully you should enjoy smiling faces on nova-* services to confirm your installation:
nova-manage service list

7. Cinder

  1. Install the required packages:

apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms

  2. Configure the iscsi services:
sed -i 's/false/true/g' /etc/default/iscsitarget
  3. Start the services:
service iscsitarget start
service open-iscsi start

  4. Configure /etc/cinder/api-paste.ini like the following:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 10.43.1.55
service_port = 5000
auth_host = 192.168.10.55
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = openstacktest

  5. Edit the /etc/cinder/cinder.conf to:

[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinder:openstacktest@192.168.10.55/cinder
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper=ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
#osapi_volume_listen_port=5900

  6. Remove Cinder's SQLite database:
rm /var/lib/cinder/cinder.sqlite

  7. Then, synchronize your database:
cinder-manage db sync

  8. Finally, don't forget to create a volume group named cinder-volumes:

dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=2G
losetup /dev/loop2 cinder-volumes
fdisk /dev/loop2
#Type in the followings:
n
p
1
ENTER
ENTER
t
8e
w
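If you would rather not type the keystrokes interactively, the same fdisk session can be scripted by piping the answers in (a sketch; double-check the loop device name before running it):

```shell
# n=new partition, p=primary, 1=partition number, two blank lines accept
# the default first/last sectors, t/8e sets the type to Linux LVM, w writes
printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk /dev/loop2
```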

  9. Proceed to create the physical volume, then the volume group:

pvcreate /dev/loop2
vgcreate cinder-volumes /dev/loop2

  10. Restart the cinder services:
cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; cd /root/; done

Verify if cinder services are running:

cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i status; cd /root/; done

8. Horizon

  1. To install Horizon, proceed like this:
    apt-get -y install openstack-dashboard memcached
  2. If you don't like the OpenStack ubuntu theme, you can remove the package to disable it:
    dpkg --purge openstack-dashboard-ubuntu-theme
    Reload Apache and memcached:
    service apache2 restart; service memcached restart
  3. You can now access your OpenStack dashboard at http://10.43.1.55/horizon with the credentials admin / openstacktest.

Friday 13 June 2014

Glance As a Separate Node

In my previous post "OpenStack Multi-node Installation : Havana" I gave the steps for configuring a multi-node setup with Controller, Compute and Network nodes. In this post I will show how to separate the Glance node from the Controller.

1.1. Preparing Ubuntu
After installing Ubuntu 12.04 or 13.04 Server 64-bit, switch to root and stay there until the end of this guide:
sudo su
Add Havana repositories [Only for Ubuntu 12.04]:
apt-get install -y ubuntu-cloud-keyring
echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main >> /etc/apt/sources.list.d/havana.list
Update your system:
apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y

MySQL
apt-get install python-mysqldb 

1.2. Networking
Only one NIC should have internet access. Configure /etc/network/interfaces as follows:
#For Exposing OpenStack API over the internet
auto eth1
iface eth1 inet static
address 172.24.1.4
netmask 255.255.0.0
gateway 172.24.0.1
dns-nameservers 148.147.161.2
#Not internet connected(used for OpenStack management)
auto eth0
iface eth0 inet static
address 172.21.1.4
netmask 255.255.0.0
Restart the networking service:
service networking restart   # or: /etc/init.d/networking restart
Preferably, reboot the machine :)
1.3. Glance
We now move to the Glance installation:
apt-get install -y glance
Update /etc/glance/glance-api-paste.ini with:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
delay_auth_decision = true
auth_host = 172.21.1.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = root123
Update the /etc/glance/glance-registry-paste.ini with:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 172.21.1.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = root123
Update /etc/glance/glance-api.conf with:
sql_connection = mysql://glanceUser:glancePass@172.21.1.11/glance
notifier_strategy = rabbit
rabbit_host = 172.21.1.11
rabbit_userid = guest
rabbit_password = root123

[keystone_authtoken]
auth_host = 172.21.1.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = root123
And:
[paste_deploy]
flavor = keystone
Update the /etc/glance/glance-registry.conf with:
sql_connection = mysql://glanceUser:glancePass@172.21.1.11/glance
notifier_strategy = rabbit
rabbit_host = 172.21.1.11
rabbit_userid = guest
rabbit_password = root123

[keystone_authtoken]
auth_host = 172.21.1.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = root123
And:
[paste_deploy]
flavor = keystone
Remember that while installing Keystone on the controller node you ran keystone-basic.sh and keystone-endpoints.sh (or similar scripts) to fill the keystone tables. In keystone-endpoints.sh, the address of the Glance endpoint by default points to the controller node. You can verify this by logging into MySQL as root, selecting the keystone database, and inspecting the endpoint table, where the entries for the Glance service endpoint still refer to the controller:
mysql> select url from endpoint;
But the actual Glance service endpoint is now the Glance node we separated from the controller, i.e. 172.21.1.4. The URL for Glance in the endpoint table still contains the IP address of the controller node, so you need to modify it to point to your Glance host with the following SQL command:
update endpoint set url='http://172.21.1.4:9292/v2' where service_id='<service-id>';
Once this command is entered, you will see that three rows are affected; they now point to the Glance host.
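If you are unsure which <service-id> to use in the UPDATE above, it can be read from the keystone database first (assuming the MySQL root credentials from the controller setup):

```shell
# The image-type service row holds Glance's service id
mysql -u root -p keystone -e "SELECT id FROM service WHERE type='image';"
```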
Next, you need to update the address of the glance host in the nova.conf file on the controller node as well as on all compute nodes. Make the change below on every host (controller + compute).
Open /etc/nova/nova.conf and modify the flag:

# Imaging service
glance_api_servers=172.21.1.4:9292
image_service=nova.image.glance.GlanceImageService
  
Now restart the glance-api and glance-registry services:
service glance-api restart; service glance-registry restart
Synchronize the glance database:
glance-manage db_sync

To test Glance, upload the cirros cloud image directly from the internet:
glance image-create --name myFirstImage --is-public true --container-format bare --disk-format qcow2 --location http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
Now list the image to see what you have just uploaded:
glance image-list

And you are Done !!

Cheers,
Shalmali :)