Nova is the compute-management stack of OpenStack. Its role is to talk to the various hypervisors so that the instances requested by the user can be run. For more details, see the project itself on openstack.org.
Context of the example
For this POC we will mostly use x86 virtual machines. In this particular test we also add a physical compute node, so that the compute nodes have different configurations.
+-------------------------+          +-------------------------+
|                         |          |                         |
|      controller01       |          |        compute01        |
|                         |          |                         |
|   eth0   eth2   eth3    |          |    eth3   eth2   eth0   |
+----+------+------+------+          +------+------+------+----+
     |      |      |                        |      |      |
     |      |      +--- "private" network --+      |      |
     |      +---------- "public" network ----------+      |
     +----------------- "admin" network  -----------------+
We will look at how Nova behaves at instantiation time depending on the parameters passed, as a way to introduce the concepts of zones and allocation policies. We will then look at a live migration from one compute node to another.
Prerequisites
Since OpenStack deployment methods vary widely, we will not detail here how to set the service up. We assume the service is up and running, with the following stacks: keystone, glance, nova, horizon, cinder and neutron. We also assume that administrator credentials are loaded into the shell environment.
[root@hostnamedab ~(keystone_admin)]# keystone service-list
+----------------------------------+------------+----------------+----------------------------+
|                id                |    name    |      type      |        description         |
+----------------------------------+------------+----------------+----------------------------+
| b0bee0b0e9f34f8bafd4ba7d54ba3d6e | ceilometer | metering       | Openstack Metering Service |
| 2a06e498c2b84cb48ebd578f6fa48297 | cinder     | volume         | Cinder Service             |
| 14fa9ec07e34443bba5daac33266671f | cinder_v2  | volumev2       | Cinder Service v2          |
| 1f4e441ee6d5489281d3aa8d64e2a746 | glance     | image          | Openstack Image Service    |
| d189a66300e04e9b8ac8cacad3eca3a1 | heat       | orchestration  | Heat API                   |
| f96774576d8846d7bdd04ec9ccefabb5 | heat-cfn   | cloudformation | Heat CloudFormation API    |
| 9365681a0e3945e2806e83d85b8319bf | keystone   | identity       | OpenStack Identity Service |
| f13396b4b11c45baa59f9de685f25020 | neutron    | network        | Neutron Networking Service |
| 6cf6626654b04b89a988483fb566508d | nova       | compute        | Openstack Compute Service  |
| f783eff435804e449d529ef6d03745bf | nova_ec2   | ec2            | EC2 Service                |
+----------------------------------+------------+----------------+----------------------------+
[root@hostnamedab nova(keystone_admin)]# nova service-list
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host        | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | hostnamedab | internal | enabled | up    | 2014-04-15T10:05:12.000000 | -               |
| nova-scheduler   | hostnamedab | internal | enabled | up    | 2014-04-15T10:05:10.000000 | -               |
| nova-conductor   | hostnamedab | internal | enabled | up    | 2014-04-15T10:05:15.000000 | -               |
| nova-cert        | hostnamedab | internal | enabled | up    | 2014-04-15T10:05:14.000000 | -               |
| nova-compute     | hostnamedbj | nova     | enabled | up    | 2014-04-15T10:05:14.000000 | -               |
| nova-console     | hostnamedab | internal | enabled | down  | 2014-02-26T09:30:20.000000 | -               |
| nova-compute     | hostnamecup | nova     | enabled | up    | 2014-04-15T10:05:13.000000 | -               |
| nova-compute     | hostnamedbu | nova     | enabled | up    | 2014-04-15T10:05:07.000000 | -               |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
Composition of the Nova stack
Nova has a modular architecture:
- nova-api: this service exposes the API
- nova-cert: this service manages certificates
- nova-conductor: a database abstraction layer for the nova-compute processes
- nova-console: this service manages console access
- nova-consoleauth: it handles authentication for console access
- nova-metadata-api: this service provides the information used by instances, for example when running cloud-init
- nova-novncproxy: handles VNC connections through an HTML5 websocket
- nova-scheduler: decides how instances are distributed across hosts at boot time
- nova-spicehtml5proxy: SPICE console proxy for HTML5 clients
- nova-xvpvncproxy: another VNC proxy, based on Java
- nova-compute: the most important service; it translates Nova instructions into hypervisor instructions
Depending on a node's role, some of these components will need to be present and others not.
Configuration
This is not the place to discuss configuration in depth, but here is an extract of the most important settings, to set the context.
[root@hostnamedab ~]# cat /etc/nova/nova.conf | grep -v "^#" | grep -v "^$"
[DEFAULT]
state_path=/var/lib/nova
enabled_apis=ec2,osapi_compute,metadata
ec2_listen=0.0.0.0
osapi_compute_listen=0.0.0.0
osapi_compute_workers=2
metadata_listen=0.0.0.0
service_down_time=60
rootwrap_config=/etc/nova/rootwrap.conf
auth_strategy=keystone
use_forwarded_for=False
service_neutron_metadata_proxy=True
neutron_metadata_proxy_shared_secret=patapouf
neutron_default_tenant_id=default
novncproxy_host=0.0.0.0
novncproxy_port=6080
glance_api_servers=192.168.41.129:9292
network_api_class=nova.network.neutronv2.api.API
metadata_host=192.168.41.129
neutron_url=http://192.168.41.129:9696
neutron_url_timeout=30
neutron_admin_username=neutron
neutron_admin_password=patapouf
neutron_admin_tenant_name=services
neutron_region_name=RegionOne
neutron_admin_auth_url=http://192.168.41.129:35357/v2.0
neutron_auth_strategy=keystone
neutron_ovs_bridge=br-int
neutron_extension_sync_interval=600
security_group_api=neutron
lock_path=/var/lib/nova/tmp
debug=True
verbose=True
use_syslog=False
rpc_backend=nova.openstack.common.rpc.impl_qpid
qpid_hostname=192.168.41.129
qpid_port=5672
qpid_username=guest
qpid_password=guest
qpid_heartbeat=60
qpid_protocol=tcp
qpid_tcp_nodelay=True
cpu_allocation_ratio=16.0
ram_allocation_ratio=1.5
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter
firewall_driver=nova.virt.firewall.NoopFirewallDriver
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True
vnc_keymap=en-us
volume_api_class=nova.volume.cinder.API
qpid_reconnect_interval=0
qpid_reconnect_interval_min=0
qpid_reconnect=True
sql_connection=mysql://nova:patapouf@192.168.41.129/nova
qpid_reconnect_timeout=0
image_service=nova.image.glance.GlanceImageService
logdir=/var/log/nova
qpid_reconnect_interval_max=0
qpid_reconnect_limit=0
osapi_volume_listen=0.0.0.0
[hyperv]
[zookeeper]
[osapi_v3]
[conductor]
[keymgr]
[cells]
[database]
[image_file_url]
[baremetal]
[rpc_notifier2]
[matchmaker_redis]
[ssl]
[trusted_computing]
[upgrade_levels]
[matchmaker_ring]
[vmware]
[spice]
[keystone_authtoken]
admin_tenant_name=services
admin_user=nova
admin_password=patapouf
auth_host=192.168.41.129
auth_port=35357
auth_protocol=http
auth_uri=http://192.168.41.129:5000/
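Two settings from this extract, cpu_allocation_ratio=16.0 and ram_allocation_ratio=1.5, directly drive overcommit. A minimal sketch (not Nova source code; the node sizes below are hypothetical) of how they translate into the capacity the scheduler considers available:

```python
# Sketch: how the allocation ratios from nova.conf above turn physical
# resources into schedulable capacity. Node sizes are hypothetical.

def effective_capacity(physical_cpus, physical_ram_mb,
                       cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5):
    """Return the vCPU and RAM totals the scheduler treats as available."""
    return (physical_cpus * cpu_allocation_ratio,
            physical_ram_mb * ram_allocation_ratio)

# Example: a compute node with 8 cores and 32 GB of RAM.
vcpus, ram_mb = effective_capacity(8, 32 * 1024)
print(vcpus)   # 128.0 schedulable vCPUs
print(ram_mb)  # 49152.0 MB of schedulable RAM
```

With these ratios, CoreFilter and RamFilter reject a host only once the overcommitted totals are exhausted, not the physical ones.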
Using “Filters”
As a reminder, the enabled filters are:
[root@hostnamedab nova(keystone_admin)]# grep "^scheduler_default_filters" /etc/nova/nova.conf
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter
First, we will create 4 instances without specifying any filter hint. nova-scheduler will therefore apply the filters it knows about, then apply the weights.
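The two-step decision described above can be sketched as follows (hypothetical host data, not the real nova-scheduler code; by default Nova spreads load, so the host with the most free RAM wins the weighing step):

```python
# Sketch of the scheduler's two-step decision: filter, then weigh.

def schedule(hosts, flavor):
    # Step 1: filters -- keep only hosts with enough free RAM and vCPUs.
    candidates = [h for h in hosts
                  if h["free_ram_mb"] >= flavor["ram_mb"]
                  and h["free_vcpus"] >= flavor["vcpus"]]
    if not candidates:
        raise RuntimeError("No valid host was found.")
    # Step 2: weights -- spread policy: the host with the most free RAM wins.
    return max(candidates, key=lambda h: h["free_ram_mb"])

hosts = [
    {"name": "hostnamedbj", "free_ram_mb": 4096,  "free_vcpus": 4},
    {"name": "hostnamedbu", "free_ram_mb": 8192,  "free_vcpus": 8},
    {"name": "hostnamecup", "free_ram_mb": 65536, "free_vcpus": 32},
]
print(schedule(hosts, {"ram_mb": 2048, "vcpus": 1})["name"])  # hostnamecup
```

This explains the behaviour we are about to observe: as long as the biggest host has the most free RAM, it keeps winning.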
[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e inst001
+--------------------------------------+---------------------------------------------------+
| Property                             | Value                                             |
+--------------------------------------+---------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                            |
| OS-EXT-AZ:availability_zone          | nova                                              |
| OS-EXT-SRV-ATTR:host                 | -                                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000049                                 |
| OS-EXT-STS:power_state               | 0                                                 |
| OS-EXT-STS:task_state                | scheduling                                        |
| OS-EXT-STS:vm_state                  | building                                          |
| OS-SRV-USG:launched_at               | -                                                 |
| OS-SRV-USG:terminated_at             | -                                                 |
| accessIPv4                           |                                                   |
| accessIPv6                           |                                                   |
| adminPass                            | Hc3tpFHWQHJB                                      |
| config_drive                         |                                                   |
| created                              | 2014-04-16T14:53:17Z                              |
| flavor                               | m1.small (2)                                      |
| hostId                               |                                                   |
| id                                   | f0b85d27-240c-4bfd-a748-8930d693595b              |
| image                                | cirros-3.2 (38de0608-74fd-47c3-8839-e0d839711171) |
| key_name                             | -                                                 |
| metadata                             | {}                                                |
| name                                 | inst001                                           |
| os-extended-volumes:volumes_attached | []                                                |
| progress                             | 0                                                 |
| security_groups                      | default                                           |
| status                               | BUILD                                             |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936                  |
| updated                              | 2014-04-16T14:53:18Z                              |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d                  |
+--------------------------------------+---------------------------------------------------+
[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e inst002
...
[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e inst003
...
[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e inst004
...
[root@hostnamedab nova(keystone_admin)]# nova list --fields name,host
+--------------------------------------+---------+-------------+
| ID                                   | Name    | Host        |
+--------------------------------------+---------+-------------+
| f0b85d27-240c-4bfd-a748-8930d693595b | inst001 | hostnamecup |
| 72b9ae91-bbd8-4c73-9f05-a7f2a169adea | inst002 | hostnamecup |
| d7fe44bb-c2b0-48a0-8498-2a328528e7b2 | inst003 | hostnamecup |
| 7b632fb6-bc96-42fb-9b16-355348ae66a4 | inst004 | hostnamecup |
+--------------------------------------+---------+-------------+
We indeed get 4 instances on the same hypervisor, which is the most powerful one. Before going on, we clean up and delete all the instances.
We are now going to change the filter handling and specify a few constraints. The list of available filters can be found here, and the documentation here.
Let us add the JsonFilter filter.
[root@hostnamedab nova(keystone_admin)]# grep "^scheduler_default_filters" /etc/nova/nova.conf
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,JsonFilter
[root@hostnamedab nova(keystone_admin)]# /etc/init.d/openstack-nova-scheduler restart
Stopping openstack-nova-scheduler:                         [  OK  ]
Starting openstack-nova-scheduler:                         [  OK  ]
We repeat the creation of 4 instances, this time applying the following constraint: select the hosts with 0 vCPUs used.
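To make the hint concrete, here is a simplified sketch of how a JsonFilter-style query such as '["=", "$vcpus_used", 0]' can be evaluated against a host's state (only =, < and > with $-variable substitution; the real filter supports more operators and nesting):

```python
import json

# Simplified JsonFilter-style query evaluation against a host's state.
OPS = {"=": lambda a, b: a == b,
       "<": lambda a, b: a < b,
       ">": lambda a, b: a > b}

def match(query, host):
    op, *args = query
    # Substitute "$name" with the host's attribute of that name.
    resolved = [host[a[1:]] if isinstance(a, str) and a.startswith("$") else a
                for a in args]
    return OPS[op](*resolved)

query = json.loads('["=", "$vcpus_used", 0]')
print(match(query, {"vcpus_used": 0}))  # True  -> host passes the filter
print(match(query, {"vcpus_used": 4}))  # False -> host is filtered out
```

Since each successful boot raises vcpus_used above 0 on its host, a host can satisfy this query at most once, which is exactly what we observe below.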
[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e --hint query='["=","$vcpus_used",0]' inst001
...
[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e --hint query='["=","$vcpus_used",0]' inst002
...
[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e --hint query='["=","$vcpus_used",0]' inst003
...
[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e --hint query='["=","$vcpus_used",0]' inst004
...
[root@hostnamedab nova(keystone_admin)]# nova list --fields name,host
+--------------------------------------+---------+-------------+
| ID                                   | Name    | Host        |
+--------------------------------------+---------+-------------+
| 87b3d4af-d116-4090-8a51-09b83adf57ec | inst001 | hostnamecup |
| 0559bbbd-ed5e-40d8-95cb-bd91012bd90d | inst002 | hostnamedbu |
| 7183cacf-53f6-4d05-8a77-acc6293b5dc8 | inst003 | hostnamedbj |
| 9cffcce7-9a10-48ea-8613-68bc984a31f5 | inst004 | None        |
+--------------------------------------+---------+-------------+
[root@hostnamedab nova(keystone_admin)]# nova show inst004
+--------------------------------------+------------------------------------------------------------------------------------------+
| Property                             | Value                                                                                    |
+--------------------------------------+------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                                                     |
| OS-EXT-SRV-ATTR:host                 | -                                                                                        |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                                                        |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000050                                                                        |
| OS-EXT-STS:power_state               | 0                                                                                        |
| OS-EXT-STS:task_state                | -                                                                                        |
| OS-EXT-STS:vm_state                  | error                                                                                    |
| OS-SRV-USG:launched_at               | -                                                                                        |
| OS-SRV-USG:terminated_at             | -                                                                                        |
| accessIPv4                           |                                                                                          |
| accessIPv6                           |                                                                                          |
| config_drive                         |                                                                                          |
| created                              | 2014-04-16T15:14:54Z                                                                     |
| fault                                | {"message": "No valid host was found. ", "code": 500, "created": "2014-04-16T15:14:55Z"} |
| flavor                               | m1.small (2)                                                                             |
| hostId                               |                                                                                          |
| id                                   | 9cffcce7-9a10-48ea-8613-68bc984a31f5                                                     |
| image                                | cirros-3.2 (38de0608-74fd-47c3-8839-e0d839711171)                                        |
| key_name                             | -                                                                                        |
| metadata                             | {}                                                                                       |
| name                                 | inst004                                                                                  |
| os-extended-volumes:volumes_attached | []                                                                                       |
| status                               | ERROR                                                                                    |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936                                                         |
| updated                              | 2014-04-16T15:14:55Z                                                                     |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d                                                         |
+--------------------------------------+------------------------------------------------------------------------------------------+
We indeed get 3 instances on 3 different hosts, and the 4th instance could not be started because "No valid host was found", which matches our constraint.
Using “Availability Zones”
Preliminary remark: since Grizzly, an AZ (availability zone) is a particular host aggregate whose metadata contains availability_zone=maZone.
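That remark can be modelled in a few lines (a sketch with hypothetical data, not Nova code): a host's AZ is simply looked up through the aggregates it belongs to, and hosts outside any AZ aggregate fall back to the default zone, nova.

```python
# Sketch: an AZ is just a host aggregate carrying availability_zone metadata.
DEFAULT_AZ = "nova"

def availability_zone(host, aggregates):
    for agg in aggregates:
        if host in agg["hosts"] and "availability_zone" in agg["metadata"]:
            return agg["metadata"]["availability_zone"]
    return DEFAULT_AZ  # hosts outside any AZ aggregate fall back to 'nova'

aggregates = [
    {"name": "hypervisor-vm", "hosts": ["hostnamedbj", "hostnamedbu"],
     "metadata": {"availability_zone": "Lyon"}},
    {"name": "hypervisor-ph", "hosts": ["hostnamecup"],
     "metadata": {"availability_zone": "Lille"}},
]
print(availability_zone("hostnamecup", aggregates))  # Lille
print(availability_zone("hostnamedab", aggregates))  # nova
```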
Let us start by creating two AZs.
[root@hostnamedab ~(keystone_admin)]# nova aggregate-create hypervisor-ph Lyon
+----+---------------+-------------------+-------+--------------------------+
| Id | Name          | Availability Zone | Hosts | Metadata                 |
+----+---------------+-------------------+-------+--------------------------+
| 2  | hypervisor-ph | Lyon              |       | 'availability_zone=Lyon' |
+----+---------------+-------------------+-------+--------------------------+
[root@hostnamedab ~(keystone_admin)]# nova aggregate-create hypervisor-ph Lille
+----+---------------+-------------------+-------+---------------------------+
| Id | Name          | Availability Zone | Hosts | Metadata                  |
+----+---------------+-------------------+-------+---------------------------+
| 3  | hypervisor-ph | Lille             |       | 'availability_zone=Lille' |
+----+---------------+-------------------+-------+---------------------------+
We will now add the hosts to the AZs we created.
[root@hostnamedab ~(keystone_admin)]# nova aggregate-add-host hypervisor-ph hostnamecup
Host hostnamecup has been successfully added for aggregate 3
+----+---------------+-------------------+---------------+---------------------------+
| Id | Name          | Availability Zone | Hosts         | Metadata                  |
+----+---------------+-------------------+---------------+---------------------------+
| 3  | hypervisor-ph | Lille             | 'hostnamecup' | 'availability_zone=Lille' |
+----+---------------+-------------------+---------------+---------------------------+
[root@hostnamedab ~(keystone_admin)]# nova aggregate-add-host hypervisor-vm hostnamedbj
Host hostnamedbj has been successfully added for aggregate 1
+----+---------------+-------------------+---------------+--------------------------+
| Id | Name          | Availability Zone | Hosts         | Metadata                 |
+----+---------------+-------------------+---------------+--------------------------+
| 1  | hypervisor-vm | Lyon              | 'hostnamedbj' | 'availability_zone=Lyon' |
+----+---------------+-------------------+---------------+--------------------------+
[root@hostnamedab ~(keystone_admin)]# nova aggregate-add-host hypervisor-vm hostnamedbu
Host hostnamedbu has been successfully added for aggregate 1
+----+---------------+-------------------+------------------------------+--------------------------+
| Id | Name          | Availability Zone | Hosts                        | Metadata                 |
+----+---------------+-------------------+------------------------------+--------------------------+
| 1  | hypervisor-vm | Lyon              | 'hostnamedbj', 'hostnamedbu' | 'availability_zone=Lyon' |
+----+---------------+-------------------+------------------------------+--------------------------+
[root@hostnamedab ~(keystone_admin)]# nova availability-zone-list
+-----------------------+----------------------------------------+
| Name                  | Status                                 |
+-----------------------+----------------------------------------+
| internal              | available                              |
| |- hostnamedab        |                                        |
| | |- nova-conductor   | enabled :-) 2014-04-17T14:17:46.000000 |
| | |- nova-consoleauth | enabled :-) 2014-04-17T14:17:46.000000 |
| | |- nova-scheduler   | enabled :-) 2014-04-17T14:17:46.000000 |
| | |- nova-cert        | enabled :-) 2014-04-17T14:17:46.000000 |
| | |- nova-console     | enabled XXX 2014-02-26T09:30:20.000000 |
| Lyon                  | available                              |
| |- hostnamedbj        |                                        |
| | |- nova-compute     | enabled :-) 2014-04-17T14:17:46.000000 |
| |- hostnamedbu        |                                        |
| | |- nova-compute     | enabled :-) 2014-04-17T14:17:50.000000 |
| Lille                 | available                              |
| |- hostnamecup        |                                        |
| | |- nova-compute     | enabled :-) 2014-04-17T14:17:43.000000 |
+-----------------------+----------------------------------------+
To exercise these AZs, we will boot one instance in each AZ.
[root@hostnamedab ~(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e --availability-zone Lyon inst001
+--------------------------------------+---------------------------------------------------+
| Property                             | Value                                             |
+--------------------------------------+---------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                            |
| OS-EXT-AZ:availability_zone          | nova                                              |
| OS-EXT-SRV-ATTR:host                 | -                                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000051                                 |
| OS-EXT-STS:power_state               | 0                                                 |
| OS-EXT-STS:task_state                | scheduling                                        |
| OS-EXT-STS:vm_state                  | building                                          |
| OS-SRV-USG:launched_at               | -                                                 |
| OS-SRV-USG:terminated_at             | -                                                 |
| accessIPv4                           |                                                   |
| accessIPv6                           |                                                   |
| adminPass                            | LSD6AHn5CjJf                                      |
| config_drive                         |                                                   |
| created                              | 2014-04-17T14:39:04Z                              |
| flavor                               | m1.small (2)                                      |
| hostId                               |                                                   |
| id                                   | 6941371d-e2da-4b6c-961e-6be5c7b67f88              |
| image                                | cirros-3.2 (38de0608-74fd-47c3-8839-e0d839711171) |
| key_name                             | -                                                 |
| metadata                             | {}                                                |
| name                                 | inst001                                           |
| os-extended-volumes:volumes_attached | []                                                |
| progress                             | 0                                                 |
| security_groups                      | default                                           |
| status                               | BUILD                                             |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936                  |
| updated                              | 2014-04-17T14:39:05Z                              |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d                  |
+--------------------------------------+---------------------------------------------------+
Despite the OS-EXT-AZ:availability_zone field showing nova, the instance is indeed placed on one of the two hosts of our Lyon AZ.
[root@hostnamedab ~(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e --availability-zone Lille inst003
...
[root@hostnamedab ~(keystone_admin)]# nova list --fields name,host
+--------------------------------------+---------+-------------+
| ID                                   | Name    | Host        |
+--------------------------------------+---------+-------------+
| cd97e80e-4916-40d1-bc04-bc4ba206b2c9 | inst001 | hostnamedbu |
| da064738-66b1-49c1-9066-580443fda554 | inst003 | hostnamecup |
+--------------------------------------+---------+-------------+
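The placement we just observed is the work of the AvailabilityZoneFilter, whose core decision can be sketched as follows (hypothetical data, not Nova code): keep only the hosts whose zone matches the one requested with --availability-zone.

```python
# Sketch of the AvailabilityZoneFilter: restrict candidates to the
# requested zone before the other filters and weights run.

def az_filter(hosts, requested_az):
    return [h for h in hosts if h["az"] == requested_az]

hosts = [
    {"name": "hostnamedbj", "az": "Lyon"},
    {"name": "hostnamedbu", "az": "Lyon"},
    {"name": "hostnamecup", "az": "Lille"},
]
print([h["name"] for h in az_filter(hosts, "Lyon")])
# ['hostnamedbj', 'hostnamedbu']
```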
Using “Aggregates”
Before starting, we add the corresponding functionality to the scheduler filters.
[root@hostnamedab ~]# grep "^scheduler_default_filters" /etc/nova/nova.conf
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ImagePropertiesFilter,CoreFilter,JsonFilter
[root@hostnamedab ~]# /etc/init.d/openstack-nova-scheduler restart
Stopping openstack-nova-scheduler:                         [  OK  ]
Starting openstack-nova-scheduler:                         [  OK  ]
We will create a host aggregate based on a totally arbitrary criterion; it could be a hardware configuration or a specific KVM base, but in our case it will be "contains a u in its name". We give it the metadata u=true.
[root@hostnamedab ~(keystone_admin)]# nova aggregate-create contient-u
+----+------------+-------------------+-------+----------+
| Id | Name       | Availability Zone | Hosts | Metadata |
+----+------------+-------------------+-------+----------+
| 4  | contient-u | -                 |       |          |
+----+------------+-------------------+-------+----------+
[root@hostnamedab ~(keystone_admin)]# nova aggregate-add-host contient-u hostnamedbu
Host hostnamedbu has been successfully added for aggregate 4
+----+------------+-------------------+---------------+----------+
| Id | Name       | Availability Zone | Hosts         | Metadata |
+----+------------+-------------------+---------------+----------+
| 4  | contient-u | -                 | 'hostnamedbu' |          |
+----+------------+-------------------+---------------+----------+
[root@hostnamedab ~(keystone_admin)]# nova aggregate-add-host contient-u hostnamecup
Host hostnamecup has been successfully added for aggregate 4
+----+------------+-------------------+------------------------------+----------+
| Id | Name       | Availability Zone | Hosts                        | Metadata |
+----+------------+-------------------+------------------------------+----------+
| 4  | contient-u | -                 | 'hostnamedbu', 'hostnamecup' |          |
+----+------------+-------------------+------------------------------+----------+
[root@hostnamedab ~(keystone_admin)]# nova aggregate-set-metadata contient-u u=true
Metadata has been successfully updated for aggregate 4.
+----+------------+-------------------+------------------------------+----------+
| Id | Name       | Availability Zone | Hosts                        | Metadata |
+----+------------+-------------------+------------------------------+----------+
| 4  | contient-u | -                 | 'hostnamedbu', 'hostnamecup' | 'u=true' |
+----+------------+-------------------+------------------------------+----------+
We now create a flavor matching this aggregate.
[root@hostnamedab ~(keystone_admin)]# nova flavor-create m1.u auto 2048 20 1 --is-public true
+--------------------------------------+------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+------+-----------+------+-----------+------+-------+-------------+-----------+
| ad174fd6-10f3-40aa-8554-fff612101df8 | m1.u | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
+--------------------------------------+------+-----------+------+-----------+------+-------+-------------+-----------+
[root@hostnamedab ~(keystone_admin)]# nova flavor-key m1.u set u=true
[root@hostnamedab ~(keystone_admin)]# nova flavor-show m1.u
+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 20                                   |
| extra_specs                | {"u": "true"}                        |
| id                         | ad174fd6-10f3-40aa-8554-fff612101df8 |
| name                       | m1.u                                 |
| os-flavor-access:is_public | True                                 |
| ram                        | 2048                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 1                                    |
+----------------------------+--------------------------------------+
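The matching the AggregateInstanceExtraSpecsFilter performs between the flavor's extra_specs and the aggregate metadata can be sketched like this (simplified, with hypothetical data; the real filter also supports scoped keys and operators):

```python
# Sketch of the AggregateInstanceExtraSpecsFilter idea: a host passes only
# if every extra_spec of the requested flavor is satisfied by the metadata
# of the aggregates the host belongs to.

def passes(host, flavor_extra_specs, aggregates):
    merged = {}
    for agg in aggregates:
        if host in agg["hosts"]:
            merged.update(agg["metadata"])
    return all(merged.get(k) == v for k, v in flavor_extra_specs.items())

aggregates = [{"name": "contient-u",
               "hosts": ["hostnamedbu", "hostnamecup"],
               "metadata": {"u": "true"}}]
extra_specs = {"u": "true"}          # from: nova flavor-key m1.u set u=true

print(passes("hostnamecup", extra_specs, aggregates))  # True
print(passes("hostnamedbj", extra_specs, aggregates))  # False
```

Booting with the m1.u flavor therefore confines the scheduler to hostnamedbu and hostnamecup.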
Now we can create instances on this aggregate. To observe the distribution and let the scheduler do its job, we will create many instances.
[root@hostnamedab ~(keystone_admin)]# for i in $(seq 1 15); do nova --os-tenant-name admin boot --image cirros-3.2 --flavor m1.u --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e inst0$i; done
...
[root@hostnamedab ~(keystone_admin)]# nova list --fields name,host
+--------------------------------------+---------+-------------+
| ID                                   | Name    | Host        |
+--------------------------------------+---------+-------------+
| 74b4d0f0-a313-40a8-aeef-4f1697465941 | inst01  | hostnamecup |
| 533681b5-c306-4dc3-a999-627aa6e68f68 | inst010 | hostnamecup |
| 4facabf6-9d66-4c34-88fb-297fd94880d7 | inst011 | hostnamecup |
| 82501c30-d247-45fe-9b18-0836b9b74c67 | inst012 | hostnamecup |
| 59db7ee2-4289-41be-9aaf-8fe6a1e15726 | inst013 | hostnamecup |
| a436b8b9-c6af-4197-bc87-02280cf19e84 | inst014 | None        |
| 6ec4cdbb-db70-4571-b493-7b51d35d1cda | inst015 | None        |
| 37d887ef-d8ba-4b6b-ba05-969e124dd60c | inst02  | hostnamecup |
| 3beb3f09-6599-4f53-898c-13bdd9530f55 | inst03  | hostnamecup |
| 79f31bc6-215a-4851-84aa-145f3594ff21 | inst04  | hostnamecup |
| 5ef33631-79be-4d8b-afea-8551afc68fa3 | inst05  | hostnamecup |
| cbe35aac-6b05-44ec-ae89-544587835916 | inst06  | hostnamecup |
| 947ca7ed-1666-4780-ade9-5a1877aa3ebb | inst07  | hostnamedbu |
| 6f148b9e-3b7a-4cf0-8bdf-4461e65d2881 | inst08  | hostnamecup |
| 24fa7af6-1687-41cd-866f-144aae1b60e2 | inst09  | hostnamedbu |
+--------------------------------------+---------+-------------+
We can see that the constraint of staying within the "contient-u" aggregate was respected, even past capacity: 2 instances ended up in error once the resources no longer allowed new instances to start.
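The arithmetic behind those two errors can be sketched as follows. With RamFilter and CoreFilter active, each host accepts m1.u instances only while it still has 2048 MB of RAM and 1 vCPU of schedulable capacity; the remaining-capacity figures below are purely hypothetical, chosen to illustrate the mechanism.

```python
# Sketch: how many m1.u instances fit in the aggregate before the
# scheduler starts answering "No valid host was found".

def max_instances(free_ram_mb, free_vcpus, flavor_ram_mb=2048, flavor_vcpus=1):
    # The binding resource (RAM or vCPUs) caps the instance count.
    return min(free_ram_mb // flavor_ram_mb, free_vcpus // flavor_vcpus)

# Hypothetical remaining capacity of the two "contient-u" hosts:
capacity = max_instances(22528, 11) + max_instances(4096, 2)
print(capacity)       # 13 instances fit in the aggregate
print(15 - capacity)  # 2 requests fail with "No valid host was found."
```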
Using “live migration”
As a reminder, here are our AZs.
[root@hostnamedab ~(keystone_admin)]# nova availability-zone-list
+-----------------------+----------------------------------------+
| Name                  | Status                                 |
+-----------------------+----------------------------------------+
| internal              | available                              |
| |- hostnamedab        |                                        |
| | |- nova-conductor   | enabled :-) 2014-04-18T15:23:06.000000 |
| | |- nova-consoleauth | enabled :-) 2014-04-18T15:23:05.000000 |
| | |- nova-scheduler   | enabled :-) 2014-04-18T15:23:06.000000 |
| | |- nova-cert        | enabled :-) 2014-04-18T15:23:14.000000 |
| | |- nova-console     | enabled XXX 2014-02-26T09:30:20.000000 |
| Lyon                  | available                              |
| |- hostnamedbj        |                                        |
| | |- nova-compute     | enabled :-) 2014-04-18T15:23:12.000000 |
| |- hostnamedbu        |                                        |
| | |- nova-compute     | enabled :-) 2014-04-18T15:23:13.000000 |
| Lille                 | available                              |
| |- hostnamecup        |                                        |
| | |- nova-compute     | enabled :-) 2014-04-18T15:23:07.000000 |
+-----------------------+----------------------------------------+
Live migration requires some specific settings. The details are in the official documentation: http://docs.openstack.org/trunk/config-reference/content/section_configuring-compute-migrations.html
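As an illustration only (check the documentation linked above against your release; the option names below come from the libvirt live-migration section of the configuration reference of that era, and the values must be adapted to your deployment), the relevant extract of /etc/nova/nova.conf on each compute node looks like:

```
# Illustrative /etc/nova/nova.conf extract for KVM/libvirt live migration
# (option names from the era's config reference; adapt before use)
vncserver_listen=0.0.0.0
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
```

In addition, libvirtd must accept remote connections between compute nodes, and without shared instance storage a block migration is required instead.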
[root@hostnamedab ~(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e --availability-zone Lyon inst001
...
[root@hostnamedab ~(keystone_admin)]# nova show inst001
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | Lyon                                                     |
| OS-EXT-SRV-ATTR:host                 | hostnamedbj                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hostnamedbj.dsit.sncf.fr                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000083                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2014-04-23T13:13:09.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2014-04-23T13:12:19Z                                     |
| flavor                               | m1.small (2)                                             |
| hostId                               | 67a93b4953c7cf7ac992a4c27f8551f70aa7e113df364523a225460f |
| id                                   | d18d7ff4-6bea-493f-a515-88e932f47757                     |
| image                                | cirros-3.2 (38de0608-74fd-47c3-8839-e0d839711171)        |
| key_name                             | -                                                        |
| metadata                             | {}                                                       |
| mynettenant network                  | 192.168.165.2                                            |
| name                                 | inst001                                                  |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936                         |
| updated                              | 2014-04-23T13:13:09Z                                     |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d                         |
+--------------------------------------+----------------------------------------------------------+
[root@hostnamedab ~(keystone_admin)]# nova live-migration inst001 hostnamedbu
[root@hostnamedab ~(keystone_admin)]# nova show inst001
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | Lyon                                                     |
| OS-EXT-SRV-ATTR:host                 | hostnamedbu                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hostnamedbu.dsit.sncf.fr                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000083                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2014-04-23T13:13:09.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2014-04-23T13:12:19Z                                     |
| flavor                               | m1.small (2)                                             |
| hostId                               | ab7ffc296b9a364faf21b5b602d61d819da34a4713c81eca9741d5a6 |
| id                                   | d18d7ff4-6bea-493f-a515-88e932f47757                     |
| image                                | cirros-3.2 (38de0608-74fd-47c3-8839-e0d839711171)        |
| key_name                             | -                                                        |
| metadata                             | {}                                                       |
| mynettenant network                  | 192.168.165.2                                            |
| name                                 | inst001                                                  |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936                         |
| updated                              | 2014-04-23T13:15:07Z                                     |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d                         |
+--------------------------------------+----------------------------------------------------------+