1. Requirements
Node Role     | NICs
Control Node: | eth0 (172.21.1.11), eth1 (10.133.43.2) / 172.24.1.2 (in fig.)
Network Node: | eth0 (172.21.1.12), eth1 (172.22.1.12), eth2 (10.133.43.3) / 172.24.1.3 (in fig.)
Compute Node: | eth0 (172.21.1.101), eth1 (172.22.1.101)
Note 1: Always use dpkg -s <packagename> to make sure you are using Havana packages.
Note 2: This is my current network architecture; you can add as many compute nodes as you wish.
After you install Ubuntu 12.04 or 13.04 Server 64-bit, go into sudo mode and don't leave it until the end of this guide:
sudo su
Add the Havana repositories [only for Ubuntu 12.04]:
apt-get install -y ubuntu-cloud-keyring
echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main >> /etc/apt/sources.list.d/havana.list
Update your system:
apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y
Only one NIC should have internet access, so configure /etc/network/interfaces as follows:
# For exposing the OpenStack APIs over the internet
auto eth1
iface eth1 inet static
    address 10.133.43.2
    netmask 255.255.0.0
    gateway 172.24.0.1
    dns-nameservers 148.147.161.2
# Not internet-connected (used for OpenStack management)
auto eth0
iface eth0 inet static
    address 172.21.1.11
    netmask 255.255.0.0
Restart the networking service:
service networking restart (# /etc/init.d/networking restart)
Preferably, reboot the machine.
Install MySQL:
apt-get install -y mysql-server python-mysqldb
Configure MySQL to accept all incoming requests:
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
service mysql restart
Install RabbitMQ:
apt-get install -y rabbitmq-server
Change the password of the guest user for rabbitmq-server:
rabbitmqctl change_password guest root123
Install the NTP service:
apt-get install -y ntp
Create these databases:
mysql -u root -p
# Keystone
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystoneUser'@'%' IDENTIFIED BY 'keystonePass';
# Glance
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glanceUser'@'%' IDENTIFIED BY 'glancePass';
# Neutron
CREATE DATABASE neutron;
GRANT ALL ON neutron.* TO 'neutronUser'@'%' IDENTIFIED BY 'neutronPass';
# Nova
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'novaUser'@'%' IDENTIFIED BY 'novaPass';
# Cinder
CREATE DATABASE cinder;
GRANT ALL ON cinder.* TO 'cinderUser'@'%' IDENTIFIED BY 'cinderPass';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinderPass';
quit;
Install other services:
apt-get install -y vlan bridge-utils
Enable IP forwarding:
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
# To avoid rebooting, apply it immediately:
sysctl net.ipv4.ip_forward=1
Start with the Keystone packages:
apt-get install -y keystone
Adapt the connection attribute in /etc/keystone/keystone.conf to the new database:
connection = mysql://keystoneUser:keystonePass@172.21.1.11/keystone
Generate an admin token, set it in the configuration, then restart the identity service and synchronize the database:
openssl rand -hex 10
# Suppose the generated number is fa46c0c6c50452627599: uncomment admin_token in /etc/keystone/keystone.conf and set its value to fa46c0c6c50452627599.
service keystone restart
keystone-manage db_sync
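The token change described above can be sketched as the following keystone.conf fragment; the token shown is just the example value generated earlier, and yours will differ:

```shell
# /etc/keystone/keystone.conf -- sketch of the relevant lines only.
# Substitute the token you generated with "openssl rand -hex 10".
[DEFAULT]
admin_token = fa46c0c6c50452627599
```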
Fill the keystone database using the two scripts available in the Scripts folder of this git repository:
# Modify the HOST_IP and EXT_HOST_IP variables before executing the scripts.
# Also modify SERVICE_TOKEN and the passwords.
# Replace QUANTUM and quantum with NEUTRON and neutron respectively.
chmod +x keystone_basic.sh
chmod +x keystone_endpoints_basic.sh
./keystone_basic.sh
./keystone_endpoints_basic.sh
Create a simple credential file and load it so you won't be bothered later:
nano creds
# Paste the following (modify the password as needed):
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=root123
export OS_AUTH_URL="http://10.133.43.2:5000/v2.0/"
# Load it:
source creds
To test Keystone, we use a simple CLI command:
keystone user-list
We now move on to the Glance installation:
apt-get install -y glance
Update /etc/glance/glance-api-paste.ini with:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
delay_auth_decision = true
auth_host = 172.21.1.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = root123
Update /etc/glance/glance-registry-paste.ini with:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 172.21.1.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = root123
Update /etc/glance/glance-api.conf with:
sql_connection = mysql://glanceUser:glancePass@172.21.1.11/glance
And:
[paste_deploy]
flavor = keystone
Update /etc/glance/glance-registry.conf with:
sql_connection = mysql://glanceUser:glancePass@172.21.1.11/glance
And:
[paste_deploy]
flavor = keystone
Restart the glance-api and glance-registry services:
service glance-api restart; service glance-registry restart
Synchronize the glance database:
glance-manage db_sync
To test Glance, upload the cirros cloud image directly from the internet:
glance image-create --name myFirstImage --is-public true --container-format bare --disk-format qcow2 --location http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
Now list the images to see what you have just uploaded:
glance image-list
Install the Neutron server and the OpenVSwitch package collection:
apt-get install -y neutron-server
Edit the OVS plugin configuration file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini with:
# Under the database section
[DATABASE]
sql_connection = mysql://neutronUser:neutronPass@172.21.1.11/neutron
# Under the OVS section
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
# Firewall driver for realizing the neutron security group function
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
Edit /etc/neutron/api-paste.ini:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 172.21.1.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = root123
Update /etc/neutron/neutron.conf:
auth_strategy = keystone
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = 172.21.1.11
rabbit_password = root123
[keystone_authtoken]
auth_host = 172.21.1.11
auth_port = 35357
auth_protocol = http
auth_uri = http://172.21.1.11:5000
auth_url = http://172.21.1.11:35357/v2.0
admin_tenant_name = service
admin_user = neutron
admin_password = root123
signing_dir = /var/lib/neutron/keystone-signing
Restart the neutron server:
service neutron-server restart
Start by installing the nova components:
apt-get install -y nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy nova-doc nova-conductor
Now modify the authtoken section in the /etc/nova/api-paste.ini file to this:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 172.21.1.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = root123
signing_dirname = /tmp/keystone-signing-nova
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0
Modify /etc/nova/nova.conf like this:
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
rabbit_host=172.21.1.11
rabbit_password=root123
nova_url=http://172.21.1.11:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@172.21.1.11/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
# Auth
use_deprecated_auth=false
auth_strategy=keystone
# Imaging service
glance_api_servers=172.21.1.11:9292
image_service=nova.image.glance.GlanceImageService
# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://10.133.43.2:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=172.21.1.11
# Network settings
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://172.21.1.11:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=root123
neutron_admin_auth_url=http://172.21.1.11:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
# If you want Neutron + Nova security groups:
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
# If you want Nova security groups only, comment the two lines above and uncomment line -1-.
#-1-firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Metadata
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = helloOpenStack
# Compute #
# For KVM:
compute_driver=libvirt.LibvirtDriver
# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900
Synchronize your database:
nova-manage db sync
Restart the nova-* services:
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
Check for the smiling faces on the nova-* services to confirm your installation:
nova-manage service list
Output should look like:
Binary            Host           Zone      Status   State  Updated_At
nova-consoleauth  os-controller  internal  enabled  :-)    2013-10-28 05:52:10
nova-conductor    os-controller  internal  enabled  :-)    2013-10-28 05:52:10
nova-cert         os-controller  internal  enabled  :-)    2013-10-28 05:52:10
nova-scheduler    os-controller  internal  enabled  :-)    2013-10-28 05:52:10
Install the required packages:
apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms
Configure the iscsi services:
sed -i 's/false/true/g' /etc/default/iscsitarget
Restart the services:
service iscsitarget start
service open-iscsi start
Note: You may also need to build the iscsi target module if you get "FATAL: Module iscsi_trgt not found":
apt-get install module-assistant
m-a a-i iscsitarget
modprobe iscsi_trgt
If this does not remove the FATAL error, try this:
apt-get install --reinstall iscsitarget-dkms
Configure /etc/cinder/api-paste.ini like the following:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 10.133.43.2
service_port = 5000
auth_host = 172.21.1.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = root123
signing_dir = /var/lib/cinder
Edit /etc/cinder/cinder.conf to:
[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinderUser:cinderPass@172.21.1.11/cinder
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper=ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
iscsi_ip_address=172.21.1.11
Then, synchronize your database:
cinder-manage db sync
Finally, don't forget to create a volume group named cinder-volumes:
dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=2G
losetup /dev/loop2 cinder-volumes
fdisk /dev/loop2
# Type in the following:
n
p
1
ENTER
ENTER
t
8e
w
Proceed to create the physical volume, then the volume group:
pvcreate /dev/loop2
vgcreate cinder-volumes /dev/loop2
Note: Beware that this volume group gets lost after a system reboot; you will need to re-create the loop device and reactivate the group at boot.
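One way to bring the volume group back after a reboot is to re-run losetup and reactivate the group at boot, for example from /etc/rc.local. This is only a sketch: it assumes the cinder-volumes file was created in /root, so adjust the path to wherever you ran the dd command above.

```shell
# Sketch for /etc/rc.local (place before the final "exit 0").
# Re-attach the loopback device backing the volume group, then
# re-activate cinder-volumes so cinder-volume can use it again.
losetup /dev/loop2 /root/cinder-volumes
vgchange -a y cinder-volumes
service cinder-volume restart
```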
Restart the cinder services:
cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done
Verify that the cinder services are running:
cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i status; done
To install Horizon, proceed like this:
apt-get install -y openstack-dashboard memcached
If you don't like the OpenStack Ubuntu theme, you can remove the package to disable it:
dpkg --purge openstack-dashboard-ubuntu-theme
Reload Apache and memcached:
service apache2 restart; service memcached restart
Check the OpenStack Dashboard at http://10.133.43.2/horizon. You can log in with admin / root123.
Edit /etc/openstack-dashboard/local_settings.py:
OPENSTACK_HOST = "10.133.43.2"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"
And then restart Apache:
service apache2 restart; service memcached restart
On the network node: after you install Ubuntu 12.04 or 13.04 Server 64-bit, go into sudo mode:
sudo su
Add the Havana repositories [only for Ubuntu 12.04]:
apt-get install -y ubuntu-cloud-keyring
echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main >> /etc/apt/sources.list.d/havana.list
Update your system:
apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y
Install the NTP service:
apt-get install -y ntp
Configure the NTP server to follow the controller node:
# Comment out the Ubuntu NTP servers
sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf
# Set the network node to follow your controller node
sed -i 's/server ntp.ubuntu.com/server 172.21.1.11/g' /etc/ntp.conf
service ntp restart
Install other services:
apt-get install -y vlan bridge-utils
Enable IP forwarding:
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
# To avoid rebooting, apply it immediately:
sysctl net.ipv4.ip_forward=1
Edit the /etc/sysctl.conf file as follows:
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
# Apply the changes:
sysctl -p
Three NICs must be present; configure /etc/network/interfaces as follows:
# OpenStack management
auto eth0
iface eth0 inet static
    address 172.21.1.12
    netmask 255.255.0.0
# VM configuration
auto eth1
iface eth1 inet static
    address 172.22.1.12
    netmask 255.255.0.0
# VM internet access
auto eth2
iface eth2 inet static
    address 10.133.43.3
    netmask 255.255.0.0
    gateway 172.24.0.1
    dns-nameservers 148.147.161.2
Restart the networking service:
service networking restart (# /etc/init.d/networking restart)
Preferably, reboot the machine.
Install OpenVSwitch:
apt-get install -y openvswitch-switch openvswitch-datapath-dkms
Start Open vSwitch:
service openvswitch-switch restart
Create the bridges:
# br-int will be used for VM integration
ovs-vsctl add-br br-int
# br-ex is used to make the VMs accessible from the internet
ovs-vsctl add-br br-ex
Install the Neutron openvswitch agent, l3 agent and dhcp agent:
apt-get -y install neutron-plugin-openvswitch-agent neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent
Edit /etc/neutron/api-paste.ini:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 172.21.1.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = root123
Edit the OVS plugin configuration file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini with:
# Under the database section
[DATABASE]
sql_connection = mysql://neutronUser:neutronPass@172.21.1.11/neutron
# Under the OVS section
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 172.22.1.12
enable_tunneling = True
# Firewall driver for realizing the neutron security group function
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
Update /etc/neutron/metadata_agent.ini:
# The Neutron user information for accessing the Neutron API
auth_url = http://172.21.1.11:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = root123
# IP address used by the Nova metadata server
nova_metadata_ip = 172.21.1.11
# TCP port used by the Nova metadata server
nova_metadata_port = 8775
metadata_proxy_shared_secret = helloOpenStack
Make sure that the rabbitMQ IP in /etc/neutron/neutron.conf points to the controller node:
rabbit_host = 172.21.1.11
rabbit_password = root123
auth_strategy = keystone
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
[database]
connection = mysql://neutronUser:neutronPass@172.21.1.11/neutron
# And update the keystone_authtoken section:
[keystone_authtoken]
auth_host = 172.21.1.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = root123
signing_dir = /var/lib/neutron/keystone-signing
Again, the default sqlite connection should be disabled.
Edit /etc/sudoers.d/neutron_sudoers to give the neutron user full access (this is unfortunately mandatory):
nano /etc/sudoers.d/neutron_sudoers
# Modify the neutron user line:
neutron ALL=NOPASSWD: ALL
Edit /etc/neutron/l3_agent.ini:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
Edit /etc/neutron/dhcp_agent.ini:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = False
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
Restart all the services:
cd /etc/init.d/; for i in $( ls neutron-* ); do sudo service $i restart; done
Edit the eth2 stanza in /etc/network/interfaces to become like this:
# VM internet access
auto eth2
iface eth2 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
    down ifconfig $IFACE down
Add eth2 to br-ex:
# Internet connectivity will be lost after this step, but this won't affect OpenStack's operation
ovs-vsctl add-port br-ex eth2
# If you want to get the internet connection back, assign eth2's IP address to br-ex in the /etc/network/interfaces file.
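If you do want the node to keep internet access through br-ex, the /etc/network/interfaces stanza could look like the following sketch, which simply moves eth2's former address from the requirements table onto the bridge:

```shell
# Sketch: give br-ex the IP that eth2 used to hold, while eth2 itself
# stays in manual mode as configured above.
auto br-ex
iface br-ex inet static
    address 10.133.43.3
    netmask 255.255.0.0
    gateway 172.24.0.1
    dns-nameservers 148.147.161.2
```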
On the compute node: after you install Ubuntu 12.04 or 13.04 Server 64-bit, go into sudo mode:
sudo su
Add the Havana repositories [only for Ubuntu 12.04]:
apt-get install -y ubuntu-cloud-keyring
echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main >> /etc/apt/sources.list.d/havana.list
Update your system:
apt-get update -y
apt-get upgrade -y
apt-get dist-upgrade -y
Reboot (you might have a new kernel).
Install the NTP service:
apt-get install -y ntp
Configure the NTP server to follow the controller node:
# Comment out the Ubuntu NTP servers
sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf
sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf
# Set the compute node to follow your controller node
sed -i 's/server ntp.ubuntu.com/server 172.21.1.11/g' /etc/ntp.conf
service ntp restart
Install other services:
apt-get install -y vlan bridge-utils
Enable IP forwarding:
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
# To avoid rebooting, apply it immediately:
sysctl net.ipv4.ip_forward=1
Edit the /etc/sysctl.conf file:
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
# Apply the changes:
sysctl -p
Configure /etc/network/interfaces as follows:
# OpenStack management
auto eth0
iface eth0 inet static
    address 172.21.1.101
    netmask 255.255.0.0
    gateway 172.21.1.0
    dns-nameservers 148.147.161.2
# VM configuration
auto eth1
iface eth1 inet static
    address 172.22.1.101
    netmask 255.255.0.0
Restart the networking service:
service networking restart (# /etc/init.d/networking restart)
Preferably, reboot the machine.
Make sure that your hardware supports virtualization:
apt-get install -y cpu-checker
kvm-ok
Normally you should get a positive response. Now install KVM and configure it:
apt-get install -y kvm libvirt-bin pm-utils
Edit the cgroup_device_acl array in the /etc/libvirt/qemu.conf file to:
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun"
]
Delete the default virtual bridge:
virsh net-destroy default
virsh net-undefine default
Enable live migration by updating the /etc/libvirt/libvirtd.conf file:
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
Edit the libvirtd_opts variable in the /etc/init/libvirt-bin.conf file:
env libvirtd_opts="-d -l"
Edit the /etc/default/libvirt-bin file:
libvirtd_opts="-d -l"
Restart the libvirt service and dbus to load the new values:
service dbus restart && service libvirt-bin restart
Install OpenVSwitch:
apt-get install -y openvswitch-switch openvswitch-datapath-dkms
Note: If this fails or shows errors, do:
apt-get install linux-headers-3.2.0-23-generic
apt-get install --reinstall -y openvswitch-switch openvswitch-datapath-dkms
Create the bridge:
# br-int will be used for VM integration
ovs-vsctl add-br br-int
Install the Neutron openvswitch agent:
apt-get -y install neutron-plugin-openvswitch-agent
Edit the OVS plugin configuration file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini with:
# Under the database section
[DATABASE]
sql_connection = mysql://neutronUser:neutronPass@172.21.1.11/neutron
# Under the OVS section
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 172.22.1.101
enable_tunneling = True
# Firewall driver for realizing the neutron security group function
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
Make sure that the rabbitMQ IP in /etc/neutron/neutron.conf points to the controller node:
rabbit_host = 172.21.1.11
rabbit_password = root123
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_kombu
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
[database]
connection = mysql://neutronUser:neutronPass@172.21.1.11/neutron
# And update the keystone_authtoken section:
[keystone_authtoken]
auth_host = 172.21.1.11
auth_port = 35357
auth_protocol = http
auth_url = http://172.21.1.11:35357/v2.0
auth_uri = http://172.21.1.11:5000
admin_tenant_name = service
admin_user = neutron
admin_password = root123
signing_dir = /var/lib/neutron/keystone-signing
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 172.21.1.11
admin_tenant_name = service
admin_user = neutron
admin_password = root123
Restart all the services:
service neutron-plugin-openvswitch-agent restart
service openvswitch-switch restart
4.6. Nova
Install nova's required components for the compute node:
apt-get install -y nova-compute-kvm
Now modify the authtoken section in the /etc/nova/api-paste.ini file to this:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 172.21.1.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = root123
signing_dirname = /tmp/keystone-signing-nova
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0
More information on when to use nova-compute.conf vs. nova.conf, excerpted from the bug report (Anne Gentle, 2012-11-30):
"The file where this text is found is doc/src/docbkx/openstack-install/compute-minimum-configuration.xml. The best fix for this doc bug is to change the sentence 'The hypervisor is set either by editing /etc/nova/nova.conf or referring to nova-compute.conf in the nova.conf file.' to 'The hypervisor is set by editing /etc/nova/nova.conf.', then adding a note: 'You can also configure the nova-compute service (and configure a hypervisor-per-compute-node) with a separate nova-compute.conf file and then refer to nova-compute.conf in the nova.conf file.'"
Edit the /etc/nova/nova-compute.conf file:
[DEFAULT]
libvirt_type=kvm
libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True
Modify /etc/nova/nova.conf like this:
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
rabbit_host=172.21.1.11
rabbit_password=root123
nova_url=http://172.21.1.11:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@172.21.1.11/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
# Auth
use_deprecated_auth=false
auth_strategy=keystone
# Imaging service
glance_api_servers=172.21.1.11:9292
image_service=nova.image.glance.GlanceImageService
# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://10.133.43.2:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=172.21.1.101
vncserver_listen=0.0.0.0
# Network settings
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://172.21.1.11:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=root123
neutron_admin_auth_url=http://172.21.1.11:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
# If you want Neutron + Nova security groups:
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
# If you want Nova security groups only, comment the two lines above and uncomment line -1-.
#-1-firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Metadata
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = helloOpenStack
# Compute #
# For KVM use:
compute_driver=libvirt.LibvirtDriver
# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900
cinder_catalog_info=volume:cinder:internalURL
Restart the nova-* services:
cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
Check for the smiling faces on the nova-* services to confirm your installation:
nova-manage service list
If the VMs can't be pinged, add these security group rules to make your VMs pingable and reachable over SSH:
nova --no-cache secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova --no-cache secgroup-add-rule default tcp 22 22 0.0.0.0/0