Extracting the lines of one file that are not present in another

fgrep -x -v -f file2 file1   # whole-line matches: lines of file1 that do not appear in file2

Another solution:

comm -23 file1 file2
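
Note that comm requires both inputs to be sorted. If they are not, process substitution can do the sorting inline; a small sketch:

comm -23 <(sort file1) <(sort file2)   # lines of file1 absent from file2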

Sources:

http://stackoverflow.com/questions/14473090/find-lines-from-a-file-which-are-not-present-in-another-file

http://stackoverflow.com/questions/5812756/print-lines-from-one-file-that-are-not-contained-in-another-file


OpenStack: a usage example to introduce Nova


Nova is the compute management stack in OpenStack. Its role is to talk to the various hypervisors so that the instances requested by the user can run. For more details, see the project itself on openstack.org.

Context of the example

For this POC we will mostly use x86 virtual machines. In this particular test we add a physical compute node, so that the compute nodes have different configurations.

                                            +--------------------------+
                                          +--------------------------+ |
  +-------------------------+           +--------------------------+ | |
  |                         |           |                          | | |
  |                         |           |                          | | |
  |     controller01        |           |       compute01          | | |
  |                         |           |                          | | |
  |                         |           |                          | |-+
  |  eth0     eth2    eth3  |           |  eth3    eth2      eth0  |-+
  +-------------------------+           +--------------------------+
      |        |       |                    |       |         |
      |        |       |                    |       |         |
      |        |       |   Réseau "privé"   |       |         |
      |        |       +--------------------+       |         |
      |        |           Réseau "public"          |         |
      |        +------------------------------------+         |
      |                    Réseau "admin"                     |
      +-------------------------------------------------------+

We will look at how Nova behaves at instantiation time depending on the parameters passed, as a way to approach the concepts of zones and allocation policies. We will then walk through a live migration from one compute node to another.

Prerequisites

Since OpenStack deployment methods vary a lot, we will not detail here how to set the service up. We assume the service is “up and running”, with the following stacks: keystone, glance, nova, horizon, cinder and neutron. We also have admin credentials for the service loaded into the command-line environment, as sketched below.
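
Loading those credentials usually just means sourcing an RC file. A minimal sketch, assuming a packstack-style keystonerc_admin (the values are placeholders matching this setup):

# ~/keystonerc_admin -- source this file before using the CLI clients
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=patapouf
export OS_AUTH_URL=http://192.168.41.129:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '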

[root@hostnamedab ~(keystone_admin)]# keystone service-list
+----------------------------------+------------+----------------+----------------------------+
|                id                |    name    |      type      |        description         |
+----------------------------------+------------+----------------+----------------------------+
| b0bee0b0e9f34f8bafd4ba7d54ba3d6e | ceilometer |    metering    | Openstack Metering Service |
| 2a06e498c2b84cb48ebd578f6fa48297 |   cinder   |     volume     |       Cinder Service       |
| 14fa9ec07e34443bba5daac33266671f | cinder_v2  |    volumev2    |     Cinder Service v2      |
| 1f4e441ee6d5489281d3aa8d64e2a746 |   glance   |     image      |  Openstack Image Service   |
| d189a66300e04e9b8ac8cacad3eca3a1 |    heat    | orchestration  |          Heat API          |
| f96774576d8846d7bdd04ec9ccefabb5 |  heat-cfn  | cloudformation |  Heat CloudFormation API   |
| 9365681a0e3945e2806e83d85b8319bf |  keystone  |    identity    | OpenStack Identity Service |
| f13396b4b11c45baa59f9de685f25020 |  neutron   |    network     | Neutron Networking Service |
| 6cf6626654b04b89a988483fb566508d |    nova    |    compute     | Openstack Compute Service  |
| f783eff435804e449d529ef6d03745bf |  nova_ec2  |      ec2       |        EC2 Service         |
+----------------------------------+------------+----------------+----------------------------+
[root@hostnamedab nova(keystone_admin)]# nova service-list
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host        | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | hostnamedab | internal | enabled | up    | 2014-04-15T10:05:12.000000 | -               |
| nova-scheduler   | hostnamedab | internal | enabled | up    | 2014-04-15T10:05:10.000000 | -               |
| nova-conductor   | hostnamedab | internal | enabled | up    | 2014-04-15T10:05:15.000000 | -               |
| nova-cert        | hostnamedab | internal | enabled | up    | 2014-04-15T10:05:14.000000 | -               |
| nova-compute     | hostnamedbj | nova     | enabled | up    | 2014-04-15T10:05:14.000000 | -               |
| nova-console     | hostnamedab | internal | enabled | down  | 2014-02-26T09:30:20.000000 | -               |
| nova-compute     | hostnamecup | nova     | enabled | up    | 2014-04-15T10:05:13.000000 | -               |
| nova-compute     | hostnamedbu | nova     | enabled | up    | 2014-04-15T10:05:07.000000 | -               |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

Composition of the Nova stack

Nova has a modular architecture:

  • nova-api: this service exposes the API
  • nova-cert: this service manages certificates
  • nova-conductor: a database abstraction layer for the nova-compute processes
  • nova-console: this service manages console access
  • nova-consoleauth: it handles authentication for console access
  • nova-metadata-api: this service provides the information consumed by instances, for example when cloud-init runs
  • nova-novncproxy: VNC connection handling through an HTML5 websocket
  • nova-scheduler: the decision-making service that places instances at boot time
  • nova-spicehtml5proxy: the SPICE counterpart of the noVNC proxy, for HTML5 clients
  • nova-xvpvncproxy: another VNC proxy, Java-based
  • nova-compute: the most important service; it translates Nova instructions into instructions for the hypervisor

Depending on a node's role, some of these components will need to be present and others not; a quick way to check is sketched below.
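
A simple way to see which components actually run on a node is to query the init system. A minimal sketch for the RHEL-style hosts used here (service names assumed to follow the standard openstack-nova packaging):

# on the controller node: API, scheduler, conductor, etc.
for s in api scheduler conductor cert consoleauth; do
  service openstack-nova-$s status
done
# on a compute node, only the compute agent is required
service openstack-nova-compute status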

Configuration

This is not the place to discuss configuration in depth, but here is an extract of the most important settings, to set the scene.

[root@hostnamedab ~]# cat /etc/nova/nova.conf | grep -v "^#" |grep -v "^$"
[DEFAULT]
state_path=/var/lib/nova
enabled_apis=ec2,osapi_compute,metadata
ec2_listen=0.0.0.0
osapi_compute_listen=0.0.0.0
osapi_compute_workers=2
metadata_listen=0.0.0.0
service_down_time=60
rootwrap_config=/etc/nova/rootwrap.conf
auth_strategy=keystone
use_forwarded_for=False
service_neutron_metadata_proxy=True
neutron_metadata_proxy_shared_secret=patapouf
neutron_default_tenant_id=default
novncproxy_host=0.0.0.0
novncproxy_port=6080
glance_api_servers=192.168.41.129:9292
network_api_class=nova.network.neutronv2.api.API
metadata_host=192.168.41.129
neutron_url=http://192.168.41.129:9696
neutron_url_timeout=30
neutron_admin_username=neutron
neutron_admin_password=patapouf
neutron_admin_tenant_name=services
neutron_region_name=RegionOne
neutron_admin_auth_url=http://192.168.41.129:35357/v2.0
neutron_auth_strategy=keystone
neutron_ovs_bridge=br-int
neutron_extension_sync_interval=600
security_group_api=neutron
lock_path=/var/lib/nova/tmp
debug=True
verbose=True
use_syslog=False
rpc_backend=nova.openstack.common.rpc.impl_qpid
qpid_hostname=192.168.41.129
qpid_port=5672
qpid_username=guest
qpid_password=guest
qpid_heartbeat=60
qpid_protocol=tcp
qpid_tcp_nodelay=True
cpu_allocation_ratio=16.0
ram_allocation_ratio=1.5
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter
firewall_driver=nova.virt.firewall.NoopFirewallDriver
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True
vnc_keymap=en-us
volume_api_class=nova.volume.cinder.API
qpid_reconnect_interval=0
qpid_reconnect_interval_min=0
qpid_reconnect=True
sql_connection=mysql://nova:patapouf@192.168.41.129/nova
qpid_reconnect_timeout=0
image_service=nova.image.glance.GlanceImageService
logdir=/var/log/nova
qpid_reconnect_interval_max=0
qpid_reconnect_limit=0
osapi_volume_listen=0.0.0.0
[hyperv]
[zookeeper]
[osapi_v3]
[conductor]
[keymgr]
[cells]
[database]
[image_file_url]
[baremetal]
[rpc_notifier2]
[matchmaker_redis]
[ssl]
[trusted_computing]
[upgrade_levels]
[matchmaker_ring]
[vmware]
[spice]
[keystone_authtoken]
admin_tenant_name=services
admin_user=nova
admin_password=patapouf
auth_host=192.168.41.129
auth_port=35357
auth_protocol=http
auth_uri=http://192.168.41.129:5000/
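
Two of these settings matter for the scheduling experiments below: cpu_allocation_ratio and ram_allocation_ratio control overcommit. For example, with cpu_allocation_ratio=16.0 a host with 4 physical cores is advertised to the scheduler as 4 x 16 = 64 schedulable vCPUs, and with ram_allocation_ratio=1.5 a host with 16 GB of RAM can accept instances totalling 24 GB.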

Using “Filters”

As a reminder, the enabled filters are:

[root@hostnamedab nova(keystone_admin)]# grep "^scheduler_default_filters" /etc/nova/nova.conf
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter

To begin with, we create 4 instances without specifying any constraint. nova-scheduler will therefore apply the filters it knows about, then apply the weights.

[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e inst001
+--------------------------------------+---------------------------------------------------+
| Property                             | Value                                             |
+--------------------------------------+---------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                            |
| OS-EXT-AZ:availability_zone          | nova                                              |
| OS-EXT-SRV-ATTR:host                 | -                                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000049                                 |
| OS-EXT-STS:power_state               | 0                                                 |
| OS-EXT-STS:task_state                | scheduling                                        |
| OS-EXT-STS:vm_state                  | building                                          |
| OS-SRV-USG:launched_at               | -                                                 |
| OS-SRV-USG:terminated_at             | -                                                 |
| accessIPv4                           |                                                   |
| accessIPv6                           |                                                   |
| adminPass                            | Hc3tpFHWQHJB                                      |
| config_drive                         |                                                   |
| created                              | 2014-04-16T14:53:17Z                              |
| flavor                               | m1.small (2)                                      |
| hostId                               |                                                   |
| id                                   | f0b85d27-240c-4bfd-a748-8930d693595b              |
| image                                | cirros-3.2 (38de0608-74fd-47c3-8839-e0d839711171) |
| key_name                             | -                                                 |
| metadata                             | {}                                                |
| name                                 | inst001                                           |
| os-extended-volumes:volumes_attached | []                                                |
| progress                             | 0                                                 |
| security_groups                      | default                                           |
| status                               | BUILD                                             |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936                  |
| updated                              | 2014-04-16T14:53:18Z                              |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d                  |
+--------------------------------------+---------------------------------------------------+
[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e inst002
...
[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e inst003
...
[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e inst004
...
[root@hostnamedab nova(keystone_admin)]# nova list --fields name,host
+--------------------------------------+---------+-------------+
| ID                                   | Name    | Host        |
+--------------------------------------+---------+-------------+
| f0b85d27-240c-4bfd-a748-8930d693595b | inst001 | hostnamecup |
| 72b9ae91-bbd8-4c73-9f05-a7f2a169adea | inst002 | hostnamecup |
| d7fe44bb-c2b0-48a0-8498-2a328528e7b2 | inst003 | hostnamecup |
| 7b632fb6-bc96-42fb-9b16-355348ae66a4 | inst004 | hostnamecup |
+--------------------------------------+---------+-------------+

We do indeed get 4 instances on the same hypervisor, which happens to be the beefiest one. Before moving on, we clean up and delete all the instances; a one-liner for that is sketched below.
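
One way to do that cleanup in a single line (the instance names are the ones created above):

for i in $(seq 1 4); do nova delete inst00$i; done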

We will now change the filter setup and specify a few constraints. The list of available filters, together with their documentation, can be found in the official OpenStack scheduler documentation.
Let's add the JsonFilter filter.

[root@hostnamedab nova(keystone_admin)]# grep "^scheduler_default_filters" /etc/nova/nova.conf
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,JsonFilter
[root@hostnamedab nova(keystone_admin)]# /etc/init.d/openstack-nova-scheduler restart
Stopping openstack-nova-scheduler:                         [  OK  ]
Starting openstack-nova-scheduler:                         [  OK  ]

We repeat the creation of 4 instances, but this time applying the following constraint: select only hosts with 0 vCPUs in use.

[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e --hint query='["=","$vcpus_used",0]' inst001
...
[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e --hint query='["=","$vcpus_used",0]' inst002
...
[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e --hint query='["=","$vcpus_used",0]' inst003
...
[root@hostnamedab nova(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e --hint query='["=","$vcpus_used",0]' inst004
...
[root@hostnamedab nova(keystone_admin)]# nova list --fields name,host
+--------------------------------------+---------+-------------+
| ID                                   | Name    | Host        |
+--------------------------------------+---------+-------------+
| 87b3d4af-d116-4090-8a51-09b83adf57ec | inst001 | hostnamecup |
| 0559bbbd-ed5e-40d8-95cb-bd91012bd90d | inst002 | hostnamedbu |
| 7183cacf-53f6-4d05-8a77-acc6293b5dc8 | inst003 | hostnamedbj |
| 9cffcce7-9a10-48ea-8613-68bc984a31f5 | inst004 | None        |
+--------------------------------------+---------+-------------+
[root@hostnamedab nova(keystone_admin)]# nova show inst004
+--------------------------------------+------------------------------------------------------------------------------------------+
| Property                             | Value                                                                                    |
+--------------------------------------+------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                                                     |
| OS-EXT-SRV-ATTR:host                 | -                                                                                        |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                                                        |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000050                                                                        |
| OS-EXT-STS:power_state               | 0                                                                                        |
| OS-EXT-STS:task_state                | -                                                                                        |
| OS-EXT-STS:vm_state                  | error                                                                                    |
| OS-SRV-USG:launched_at               | -                                                                                        |
| OS-SRV-USG:terminated_at             | -                                                                                        |
| accessIPv4                           |                                                                                          |
| accessIPv6                           |                                                                                          |
| config_drive                         |                                                                                          |
| created                              | 2014-04-16T15:14:54Z                                                                     |
| fault                                | {"message": "No valid host was found. ", "code": 500, "created": "2014-04-16T15:14:55Z"} |
| flavor                               | m1.small (2)                                                                             |
| hostId                               |                                                                                          |
| id                                   | 9cffcce7-9a10-48ea-8613-68bc984a31f5                                                     |
| image                                | cirros-3.2 (38de0608-74fd-47c3-8839-e0d839711171)                                        |
| key_name                             | -                                                                                        |
| metadata                             | {}                                                                                       |
| name                                 | inst004                                                                                  |
| os-extended-volumes:volumes_attached | []                                                                                       |
| status                               | ERROR                                                                                    |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936                                                         |
| updated                              | 2014-04-16T15:14:55Z                                                                     |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d                                                         |
+--------------------------------------+------------------------------------------------------------------------------------------+

We do get 3 instances on 3 different hosts, but the 4th instance could not be started because “No valid host was found”, which is exactly what our constraint dictates.
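
The JsonFilter grammar goes beyond simple equality: it supports =, <, >, <=, >=, in, not, or and and, and clauses can be nested. A hedged example (the thresholds are illustrative) that only accepts hosts with at most 2 vCPUs in use and at least 4 GB of free RAM:

nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 \
  --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e \
  --hint query='["and", ["<=", "$vcpus_used", 2], [">=", "$free_ram_mb", 4096]]' inst005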

Using “Availability Zones”

Preliminary remark: since Grizzly, an AZ (availability zone) is simply a particular host aggregate whose metadata contains availability_zone=maZone.

Let's start by creating two AZs.

[root@hostnamedab ~(keystone_admin)]# nova aggregate-create hypervisor-vm Lyon
+----+---------------+-------------------+-------+--------------------------+
| Id | Name          | Availability Zone | Hosts | Metadata                 |
+----+---------------+-------------------+-------+--------------------------+
| 1  | hypervisor-vm | Lyon              |       | 'availability_zone=Lyon' |
+----+---------------+-------------------+-------+--------------------------+
[root@hostnamedab ~(keystone_admin)]# nova aggregate-create hypervisor-ph Lille
+----+---------------+-------------------+-------+---------------------------+
| Id | Name          | Availability Zone | Hosts | Metadata                  |
+----+---------------+-------------------+-------+---------------------------+
| 3  | hypervisor-ph | Lille             |       | 'availability_zone=Lille' |
+----+---------------+-------------------+-------+---------------------------+

We now add the hosts to the AZs we just created.

[root@hostnamedab ~(keystone_admin)]# nova aggregate-add-host hypervisor-ph hostnamecup
Host hostnamecup has been successfully added for aggregate 3
+----+---------------+-------------------+---------------+---------------------------+
| Id | Name          | Availability Zone | Hosts         | Metadata                  |
+----+---------------+-------------------+---------------+---------------------------+
| 3  | hypervisor-ph | Lille             | 'hostnamecup' | 'availability_zone=Lille' |
+----+---------------+-------------------+---------------+---------------------------+
[root@hostnamedab ~(keystone_admin)]# nova aggregate-add-host hypervisor-vm hostnamedbj
Host hostnamedbj has been successfully added for aggregate 1
+----+---------------+-------------------+---------------+--------------------------+
| Id | Name          | Availability Zone | Hosts         | Metadata                 |
+----+---------------+-------------------+---------------+--------------------------+
| 1  | hypervisor-vm | Lyon              | 'hostnamedbj' | 'availability_zone=Lyon' |
+----+---------------+-------------------+---------------+--------------------------+
[root@hostnamedab ~(keystone_admin)]# nova aggregate-add-host hypervisor-vm hostnamedbu
Host hostnamedbu has been successfully added for aggregate 1
+----+---------------+-------------------+------------------------------+--------------------------+
| Id | Name          | Availability Zone | Hosts                        | Metadata                 |
+----+---------------+-------------------+------------------------------+--------------------------+
| 1  | hypervisor-vm | Lyon              | 'hostnamedbj', 'hostnamedbu' | 'availability_zone=Lyon' |
+----+---------------+-------------------+------------------------------+--------------------------+
[root@hostnamedab ~(keystone_admin)]# nova availability-zone-list
+-----------------------+----------------------------------------+
| Name                  | Status                                 |
+-----------------------+----------------------------------------+
| internal              | available                              |
| |- hostnamedab        |                                        |
| | |- nova-conductor   | enabled :-) 2014-04-17T14:17:46.000000 |
| | |- nova-consoleauth | enabled :-) 2014-04-17T14:17:46.000000 |
| | |- nova-scheduler   | enabled :-) 2014-04-17T14:17:46.000000 |
| | |- nova-cert        | enabled :-) 2014-04-17T14:17:46.000000 |
| | |- nova-console     | enabled XXX 2014-02-26T09:30:20.000000 |
| Lyon                  | available                              |
| |- hostnamedbj        |                                        |
| | |- nova-compute     | enabled :-) 2014-04-17T14:17:46.000000 |
| |- hostnamedbu        |                                        |
| | |- nova-compute     | enabled :-) 2014-04-17T14:17:50.000000 |
| Lille                 | available                              |
| |- hostnamecup        |                                        |
| | |- nova-compute     | enabled :-) 2014-04-17T14:17:43.000000 |
+-----------------------+----------------------------------------+

To exercise these AZs, we boot one instance in each of them.

[root@hostnamedab ~(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e --availability-zone Lyon inst001
+--------------------------------------+---------------------------------------------------+
| Property                             | Value                                             |
+--------------------------------------+---------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                            |
| OS-EXT-AZ:availability_zone          | nova                                              |
| OS-EXT-SRV-ATTR:host                 | -                                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000051                                 |
| OS-EXT-STS:power_state               | 0                                                 |
| OS-EXT-STS:task_state                | scheduling                                        |
| OS-EXT-STS:vm_state                  | building                                          |
| OS-SRV-USG:launched_at               | -                                                 |
| OS-SRV-USG:terminated_at             | -                                                 |
| accessIPv4                           |                                                   |
| accessIPv6                           |                                                   |
| adminPass                            | LSD6AHn5CjJf                                      |
| config_drive                         |                                                   |
| created                              | 2014-04-17T14:39:04Z                              |
| flavor                               | m1.small (2)                                      |
| hostId                               |                                                   |
| id                                   | 6941371d-e2da-4b6c-961e-6be5c7b67f88              |
| image                                | cirros-3.2 (38de0608-74fd-47c3-8839-e0d839711171) |
| key_name                             | -                                                 |
| metadata                             | {}                                                |
| name                                 | inst001                                           |
| os-extended-volumes:volumes_attached | []                                                |
| progress                             | 0                                                 |
| security_groups                      | default                                           |
| status                               | BUILD                                             |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936                  |
| updated                              | 2014-04-17T14:39:05Z                              |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d                  |
+--------------------------------------+---------------------------------------------------+

Even though the output shows OS-EXT-AZ:availability_zone as nova, the instance does land on one of the two hosts of the Lyon AZ.

[root@hostnamedab ~(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e --availability-zone Lille inst003
...
[root@hostnamedab ~(keystone_admin)]# nova list --fields name,host
+--------------------------------------+---------+-------------+
| ID                                   | Name    | Host        |
+--------------------------------------+---------+-------------+
| cd97e80e-4916-40d1-bc04-bc4ba206b2c9 | inst001 | hostnamedbu |
| da064738-66b1-49c1-9066-580443fda554 | inst003 | hostnamecup |
+--------------------------------------+---------+-------------+

Using “Aggregates”

Before starting, we add the corresponding filter to the scheduler's filter list.

[root@hostnamedab ~]# grep "^scheduler_default_filters" /etc/nova/nova.conf
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ImagePropertiesFilter,CoreFilter,JsonFilter
[root@hostnamedab ~]# /etc/init.d/openstack-nova-scheduler restart
Stopping openstack-nova-scheduler:                         [  OK  ]
Starting openstack-nova-scheduler:                         [  OK  ]

We will create a host aggregate based on a completely arbitrary criterion; it could be a hardware configuration or a particular KVM base, but in our case it will be “has a u in its hostname”. We give it the metadata u=true.

[root@hostnamedab ~(keystone_admin)]# nova aggregate-create contient-u
+----+------------+-------------------+-------+----------+
| Id | Name       | Availability Zone | Hosts | Metadata |
+----+------------+-------------------+-------+----------+
| 4  | contient-u | -                 |       |          |
+----+------------+-------------------+-------+----------+
[root@hostnamedab ~(keystone_admin)]# nova aggregate-add-host contient-u hostnamedbu
Host hostnamedbu has been successfully added for aggregate 4
+----+------------+-------------------+---------------+----------+
| Id | Name       | Availability Zone | Hosts         | Metadata |
+----+------------+-------------------+---------------+----------+
| 4  | contient-u | -                 | 'hostnamedbu' |          |
+----+------------+-------------------+---------------+----------+
[root@hostnamedab ~(keystone_admin)]# nova aggregate-add-host contient-u hostnamecup
Host hostnamecup has been successfully added for aggregate 4
+----+------------+-------------------+------------------------------+----------+
| Id | Name       | Availability Zone | Hosts                        | Metadata |
+----+------------+-------------------+------------------------------+----------+
| 4  | contient-u | -                 | 'hostnamedbu', 'hostnamecup' |          |
+----+------------+-------------------+------------------------------+----------+
[root@hostnamedab ~(keystone_admin)]# nova aggregate-set-metadata contient-u u=true
Metadata has been successfully updated for aggregate 4.
+----+------------+-------------------+------------------------------+----------+
| Id | Name       | Availability Zone | Hosts                        | Metadata |
+----+------------+-------------------+------------------------------+----------+
| 4  | contient-u | -                 | 'hostnamedbu', 'hostnamecup' | 'u=true' |
+----+------------+-------------------+------------------------------+----------+

We now create a flavor tied to this aggregate.

[root@hostnamedab ~(keystone_admin)]# nova flavor-create m1.u auto 2048 20 1 --is-public true
+--------------------------------------+------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+------+-----------+------+-----------+------+-------+-------------+-----------+
| ad174fd6-10f3-40aa-8554-fff612101df8 | m1.u | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
+--------------------------------------+------+-----------+------+-----------+------+-------+-------------+-----------+
[root@hostnamedab ~(keystone_admin)]# nova flavor-key m1.u set u=true
[root@hostnamedab ~(keystone_admin)]# nova flavor-show m1.u
+----------------------------+--------------------------------------+
| Property                   | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 20                                   |
| extra_specs                | {"u": "true"}                        |
| id                         | ad174fd6-10f3-40aa-8554-fff612101df8 |
| name                       | m1.u                                 |
| os-flavor-access:is_public | True                                 |
| ram                        | 2048                                 |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 1                                    |
+----------------------------+--------------------------------------+

We can now boot instances on this aggregate. To observe the distribution and let the scheduler do its job, we create a lot of instances.

[root@hostnamedab ~(keystone_admin)]# for i in $(seq 1 15); do nova --os-tenant-name admin boot --image cirros-3.2 --flavor m1.u --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e inst0$i; done
...
[root@hostnamedab ~(keystone_admin)]# nova list --fields name,host
+--------------------------------------+---------+-------------+
| ID                                   | Name    | Host        |
+--------------------------------------+---------+-------------+
| 74b4d0f0-a313-40a8-aeef-4f1697465941 | inst01  | hostnamecup |
| 533681b5-c306-4dc3-a999-627aa6e68f68 | inst010 | hostnamecup |
| 4facabf6-9d66-4c34-88fb-297fd94880d7 | inst011 | hostnamecup |
| 82501c30-d247-45fe-9b18-0836b9b74c67 | inst012 | hostnamecup |
| 59db7ee2-4289-41be-9aaf-8fe6a1e15726 | inst013 | hostnamecup |
| a436b8b9-c6af-4197-bc87-02280cf19e84 | inst014 | None        |
| 6ec4cdbb-db70-4571-b493-7b51d35d1cda | inst015 | None        |
| 37d887ef-d8ba-4b6b-ba05-969e124dd60c | inst02  | hostnamecup |
| 3beb3f09-6599-4f53-898c-13bdd9530f55 | inst03  | hostnamecup |
| 79f31bc6-215a-4851-84aa-145f3594ff21 | inst04  | hostnamecup |
| 5ef33631-79be-4d8b-afea-8551afc68fa3 | inst05  | hostnamecup |
| cbe35aac-6b05-44ec-ae89-544587835916 | inst06  | hostnamecup |
| 947ca7ed-1666-4780-ade9-5a1877aa3ebb | inst07  | hostnamedbu |
| 6f148b9e-3b7a-4cf0-8bdf-4461e65d2881 | inst08  | hostnamecup |
| 24fa7af6-1687-41cd-866f-144aae1b60e2 | inst09  | hostnamedbu |
+--------------------------------------+---------+-------------+

We can see that the constraint of staying inside the “contient-u” aggregate was respected even past capacity: 2 instances were put in error once the remaining resources no longer allowed new instances to start. A quick way to confirm the saturation is sketched below.
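
The hypervisor statistics make the saturation visible; a hedged example (the exact fields depend on the client version):

nova hypervisor-stats            # aggregated vcpus/memory across all compute nodes
nova hypervisor-show hostnamecup # per-host vcpus_used, free_ram_mb, ...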

Using “live migration”

As a reminder, here are our AZs.

[root@hostnamedab ~(keystone_admin)]# nova availability-zone-list
+-----------------------+----------------------------------------+
| Name                  | Status                                 |
+-----------------------+----------------------------------------+
| internal              | available                              |
| |- hostnamedab        |                                        |
| | |- nova-conductor   | enabled :-) 2014-04-18T15:23:06.000000 |
| | |- nova-consoleauth | enabled :-) 2014-04-18T15:23:05.000000 |
| | |- nova-scheduler   | enabled :-) 2014-04-18T15:23:06.000000 |
| | |- nova-cert        | enabled :-) 2014-04-18T15:23:14.000000 |
| | |- nova-console     | enabled XXX 2014-02-26T09:30:20.000000 |
| Lyon                  | available                              |
| |- hostnamedbj        |                                        |
| | |- nova-compute     | enabled :-) 2014-04-18T15:23:12.000000 |
| |- hostnamedbu        |                                        |
| | |- nova-compute     | enabled :-) 2014-04-18T15:23:13.000000 |
| Lille                 | available                              |
| |- hostnamecup        |                                        |
| | |- nova-compute     | enabled :-) 2014-04-18T15:23:07.000000 |
+-----------------------+----------------------------------------+

Live migration requires some specific configuration. The details are in the official documentation: http://docs.openstack.org/trunk/config-reference/content/section_configuring-compute-migrations.html
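
A minimal sketch of the usual prerequisites for KVM/libvirt compute nodes, assuming shared storage for /var/lib/nova/instances (exact flags vary by release; the page above is authoritative):

# /etc/libvirt/libvirtd.conf on every compute (lab setting only; use TLS in production)
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

# /etc/sysconfig/libvirtd: make libvirtd actually listen on TCP
LIBVIRTD_ARGS="--listen"

# /etc/nova/nova.conf on every compute
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE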

[root@hostnamedab ~(keystone_admin)]# nova --os-tenant-name admin boot --image cirros-3.2 --flavor 2 --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e --availability-zone Lyon inst001
...
[root@hostnamedab ~(keystone_admin)]# nova show inst001
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | Lyon                                                     |
| OS-EXT-SRV-ATTR:host                 | hostnamedbj                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hostnamedbj.dsit.sncf.fr                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000083                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2014-04-23T13:13:09.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2014-04-23T13:12:19Z                                     |
| flavor                               | m1.small (2)                                             |
| hostId                               | 67a93b4953c7cf7ac992a4c27f8551f70aa7e113df364523a225460f |
| id                                   | d18d7ff4-6bea-493f-a515-88e932f47757                     |
| image                                | cirros-3.2 (38de0608-74fd-47c3-8839-e0d839711171)        |
| key_name                             | -                                                        |
| metadata                             | {}                                                       |
| mynettenant network                  | 192.168.165.2                                            |
| name                                 | inst001                                                  |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936                         |
| updated                              | 2014-04-23T13:13:09Z                                     |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d                         |
+--------------------------------------+----------------------------------------------------------+
[root@hostnamedab ~(keystone_admin)]# nova live-migration inst001 hostnamedbu
[root@hostnamedab ~(keystone_admin)]# nova show inst001
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | Lyon                                                     |
| OS-EXT-SRV-ATTR:host                 | hostnamedbu                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hostnamedbu.dsit.sncf.fr                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000083                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2014-04-23T13:13:09.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2014-04-23T13:12:19Z                                     |
| flavor                               | m1.small (2)                                             |
| hostId                               | ab7ffc296b9a364faf21b5b602d61d819da34a4713c81eca9741d5a6 |
| id                                   | d18d7ff4-6bea-493f-a515-88e932f47757                     |
| image                                | cirros-3.2 (38de0608-74fd-47c3-8839-e0d839711171)        |
| key_name                             | -                                                        |
| metadata                             | {}                                                       |
| mynettenant network                  | 192.168.165.2                                            |
| name                                 | inst001                                                  |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936                         |
| updated                              | 2014-04-23T13:15:07Z                                     |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d                         |
+--------------------------------------+----------------------------------------------------------+

OpenStack: a usage example to introduce Heat


Heat is the OpenStack orchestration stack: the tool that automates bringing up a whole set of elements in the open source IaaS project.
Its role is therefore to talk to the various building blocks to create networks, volumes and instances, configure them and start them. For more details, see the project itself on openstack.org.

Context of the example

For this POC we will mostly use x86 virtual machines.

  +-------------------------+           +--------------------------+
  |                         |           |                          |
  |                         |           |                          |
  |     controller01        |           |       compute01          |
  |                         |           |                          |
  |                         |           |                          |
  |  eth0     eth2    eth3  |           |  eth3    eth2      eth0  |
  +-------------------------+           +--------------------------+
      |        |       |                    |       |         |
      |        |       |                    |       |         |
      |        |       |   Réseau "privé"   |       |         |
      |        |       +--------------------+       |         |
      |        |           Réseau "public"          |         |
      |        +------------------------------------+         |
      |                    Réseau "admin"                     |
      +-------------------------------------------------------+

We will go through some very simple examples to show the basic principles of orchestration.

To keep things simple, every operation is performed with the admin account, even though some of them could be done as a plain user of the “admin” tenant.

Prerequisites

Since OpenStack deployment methods vary a lot, we will not detail here how to set the service up. We assume the service is “up and running”, with the following stacks: keystone, glance, nova, horizon, neutron and, of course, heat.
We also have admin credentials for the service loaded into the command-line environment.

[root@hostnamedab ~(keystone_admin)]# keystone service-list
+----------------------------------+------------+----------------+----------------------------+
|                id                |    name    |      type      |        description         |
+----------------------------------+------------+----------------+----------------------------+
| b0bee0b0e9f34f8bafd4ba7d54ba3d6e | ceilometer |    metering    | Openstack Metering Service |
| 2a06e498c2b84cb48ebd578f6fa48297 |   cinder   |     volume     |       Cinder Service       |
| 14fa9ec07e34443bba5daac33266671f | cinder_v2  |    volumev2    |     Cinder Service v2      |
| 1f4e441ee6d5489281d3aa8d64e2a746 |   glance   |     image      |  Openstack Image Service   |
| d189a66300e04e9b8ac8cacad3eca3a1 |    heat    | orchestration  |          Heat API          |
| f96774576d8846d7bdd04ec9ccefabb5 |  heat-cfn  | cloudformation |  Heat CloudFormation API   |
| 9365681a0e3945e2806e83d85b8319bf |  keystone  |    identity    | OpenStack Identity Service |
| f13396b4b11c45baa59f9de685f25020 |  neutron   |    network     | Neutron Networking Service |
| 6cf6626654b04b89a988483fb566508d |    nova    |    compute     | Openstack Compute Service  |
| f783eff435804e449d529ef6d03745bf |  nova_ec2  |      ec2       |        EC2 Service         |
+----------------------------------+------------+----------------+----------------------------+
[root@hostnamedab ~(keystone_admin)]# nova service-list
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host        | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-scheduler   | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-conductor   | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:24.000000 | None            |
| nova-cert        | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-compute     | hostnamedbj | nova     | enabled | up    | 2014-02-26T14:29:28.000000 | None            |
| nova-console     | hostnamedab | internal | enabled | down  | 2014-02-26T09:30:20.000000 | None            |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

Composition of the Heat stack

The Heat stack is composed only of the API services and the engine; a quick check is sketched below.
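
On this kind of installation that maps to a handful of daemons; a hedged check (service names assumed from the standard openstack-heat packaging):

for s in api api-cfn api-cloudwatch engine; do
  service openstack-heat-$s status
done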

Configuration

This is not the place to discuss configuration in depth, but here is an extract of the most important settings, to set the scene.

[root@hostnamedab ~]# cat /etc/heat/heat.conf | grep -v "^#" |grep -v "^$"
[DEFAULT]
sql_connection=mysql://heat:patapouf@192.168.41.129/heat
heat_metadata_server_url=http://192.168.41.129:8000
heat_waitcondition_server_url=http://192.168.41.129:8000/v1/waitcondition
heat_watch_server_url=http://192.168.41.129:8003
heat_stack_user_role=heat_stack_user
auth_encryption_key=6028f4e9d45cdbbe65d87f545166416e
debug=False
verbose=True
log_dir=/var/log/heat
rpc_backend=heat.openstack.common.rpc.impl_qpid
qpid_hostname=192.168.41.129
qpid_port=5672
qpid_username=guest
qpid_password=guest
qpid_heartbeat=60
qpid_protocol=tcp
qpid_tcp_nodelay=True
qpid_reconnect_limit=0
qpid_reconnect_interval_min=0
qpid_reconnect_interval=0
qpid_reconnect_timeout=0
qpid_reconnect=True
qpid_reconnect_interval_max=0
[ssl]
[database]
[paste_deploy]
[rpc_notifier2]
[ec2authtoken]
auth_uri=http://192.168.41.129:35357/v2.0
keystone_ec2_uri=http://127.0.0.1:5000/v2.0/ec2tokens
[heat_api_cloudwatch]
[heat_api]
bind_host=0.0.0.0
bind_port=8004
[heat_api_cfn]
[auth_password]
[matchmaker_ring]
[matchmaker_redis]
[keystone_authtoken]
admin_tenant_name=services
admin_user=heat
admin_password=patapouf
auth_host=192.168.41.129
auth_port=35357
auth_protocol=http
auth_uri=http://192.168.41.129:35357/v2.0

Creating a simple stack

Here are a few documentation links about templates:
HOT specification
Stack creation
Getting started guide

We start by gathering the basic pieces of information needed to create an instance: the image, the flavor, the network and the subnet.

[root@hostnamedab ~(keystone_admin)]# glance image-list
+--------------------------------------+------------+-------------+------------------+----------+--------+
| ID                                   | Name       | Disk Format | Container Format | Size     | Status |
+--------------------------------------+------------+-------------+------------------+----------+--------+
| 38de0608-74fd-47c3-8839-e0d839711171 | cirros-3.2 | qcow2       | bare             | 13167616 | active |
+--------------------------------------+------------+-------------+------------------+----------+--------+
[root@hostnamedab ~(keystone_admin)]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
[root@hostnamedab ~(keystone_admin)]# neutron net-list
+--------------------------------------+-------------+-------------------------------------------------------+
| id                                   | name        | subnets                                               |
+--------------------------------------+-------------+-------------------------------------------------------+
| 00bcfcc4-236e-40bd-ba54-74c85ae0d05e | mynettenant | efab7729-96ca-4b04-9ab7-3fd6d7c1d22b 192.168.165.0/24 |
| 8cce6638-d41f-4b58-8549-2a10f3bf7b06 | public      | 67ddd6df-b592-4d9e-9906-34e93563eb2c 10.6.27.0/24     |
+--------------------------------------+-------------+-------------------------------------------------------+
[root@hostnamedab ~(keystone_admin)]# neutron subnet-list
+--------------------------------------+------+------------------+------------------------------------------------------+
| id                                   | name | cidr             | allocation_pools                                     |
+--------------------------------------+------+------------------+------------------------------------------------------+
| 67ddd6df-b592-4d9e-9906-34e93563eb2c |      | 10.6.27.0/24     | {"start": "10.6.27.150", "end": "10.6.27.249"}       |
| efab7729-96ca-4b04-9ab7-3fd6d7c1d22b |      | 192.168.165.0/24 | {"start": "192.168.165.2", "end": "192.168.165.254"} |
+--------------------------------------+------+------------------+------------------------------------------------------+

So we have a cirros image, a mynettenant network, a 192.168.165.0/24 subnet, and we will use the m1.small flavor.

A HOT template uses the following syntax:

[root@hostnamedab ~(keystone_admin)]# cat heat_example.hot
heat_template_version: 2014-03-28

description: Simple template to deploy a single compute instance

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: cirros-3.2
      flavor: m1.small
      networks:
        - port: { get_resource: my_port }

  my_port:
    type: OS::Neutron::Port
    properties:
      network_id: 00bcfcc4-236e-40bd-ba54-74c85ae0d05e
      fixed_ips:
        - subnet_id: efab7729-96ca-4b04-9ab7-3fd6d7c1d22b
[root@hostnamedab ~(keystone_admin)]# heat stack-create my_stack01 --template-file=heat_example.hot
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| a4b488b5-0bed-4bd5-8b92-bce97d61ef19 | my_stack01 | CREATE_IN_PROGRESS | 2014-03-28T15:33:53Z |
+--------------------------------------+------------+--------------------+----------------------+
[root@hostnamedab ~(keystone_admin)]# heat stack-show my_stack01
+----------------------+-----------------------------------------------------------------------------------------------------------------------------------+
| Property             | Value                                                                                                                             |
+----------------------+-----------------------------------------------------------------------------------------------------------------------------------+
| capabilities         | []                                                                                                                                |
| creation_time        | 2014-03-28T16:18:43Z                                                                                                              |
| description          | Simple template to deploy a single compute instance                                                                               |
| disable_rollback     | True                                                                                                                              |
| id                   | 9141a42c-e1bb-4c05-aa67-fde5658f9400                                                                                              |
| links                | http://192.168.41.129:8004/v1/5f8ffb039ce844bc94ba031be85e0936/stacks/my_stack01/9141a42c-e1bb-4c05-aa67-fde5658f9400             |
| notification_topics  | []                                                                                                                                |
| outputs              | []                                                                                                                                |
| parameters           | {                                                                                                                                 |
|                      |   "AWS::StackId": "arn:openstack:heat::5f8ffb039ce844bc94ba031be85e0936:stacks/my_stack01/9141a42c-e1bb-4c05-aa67-fde5658f9400",  |
|                      |   "AWS::Region": "ap-southeast-1",                                                                                                |
|                      |   "AWS::StackName": "my_stack01"                                                                                                  |
|                      | }                                                                                                                                 |
| stack_name           | my_stack01                                                                                                                        |
| stack_status         | CREATE_COMPLETE                                                                                                                   |
| stack_status_reason  | Stack create completed successfully                                                                                               |
| template_description | Simple template to deploy a single compute instance                                                                               |
| timeout_mins         | 60                                                                                                                                |
| updated_time         | 2014-03-28T16:19:29Z                                                                                                              |
+----------------------+-----------------------------------------------------------------------------------------------------------------------------------+
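
To see what Heat actually created behind the stack, the resources and events can be listed. A minimal sketch (output omitted, as it depends on the deployment):

[root@hostnamedab ~(keystone_admin)]# heat stack-list
[root@hostnamedab ~(keystone_admin)]# heat resource-list my_stack01
[root@hostnamedab ~(keystone_admin)]# heat event-list my_stack01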

A slightly more complex example with configuration inside the instance

TODO: Not finished

We start by generating a key pair that will be used later on.

[root@hostnamedab ~(keystone_admin)]# nova keypair-add my_keypair > my_keypair.pem
[root@hostnamedab ~(keystone_admin)]# nova keypair-list
+------------+-------------------------------------------------+
| Name       | Fingerprint                                     |
+------------+-------------------------------------------------+
| my_keypair | 5d:41:e8:9f:62:70:52:6b:17:bd:06:31:fb:e7:bf:4d |
+------------+-------------------------------------------------+
[root@hostnamedab ~(keystone_admin)]# cat my_keypair.pem
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAwLzO5xVJVTQHxE4L6Z9dE8ZBNJrQ0EiQtW6Ggy82qKnIl0Jp
gODlaXPOMcL9u4pdVaEMB4MVQnEce4noVSGBAC/UhcRkhpzpMqTkdDsXji3u3Q4j
ZZP82JHRWF+MIwB87ahUhUR5D4kzZ1dN01CrhjhTXFIp9FX7HT4Ukgmqs3T1ssqq
muqelc37OW4sqRhmUtHwog7oaCsCOrk6kPaOFLA8WtQQhccTu/4OabNgFpzeOLf9
GjWt6SCcbqK+lC8MGwsAgv1hQZu7sWxGNf3DfwThGfQlxrNDsgZ9O+by/FyifFgL
dfoUaJL9kmHHLwyF8krYM8pLqgEY1+M1MfgC6wIBIwKCAQEAsDeYmMMBN/UOalX8
Q0/gEhRY36N9FjO3gU8b5aeCblIWe6pvFr79oj3+WWHSCo0iImdbdJUawGdqf5QH
nkqEknTfkD6Hy4glqqVjKHCKkJ8GRnNTkBJGQllu8bZxfxZjl3VUlxoISLf0e72I
+7d4QDN/alm/9VXs54k3YPLlNlq3XcivXfkc+cUajgA2RVIMonCXcWTJyVILmSFE
IjBkiXWqrclKLATAjCt8e8SSkFj/lh3QzM/EDjYXAaSVCumBhh6d38Buh53v9csy
GFKN8ZA/yBaBECULmI99FBpW3+V7YKwoIAXcHxNro5kIcUe9/0m21ZgZIpXtFcgz
1kJFiwKBgQDsmom98Npgra5+ca5DQ1OmyKTFevUmpqpCGsgmqBQitTl1u88m5W7G
bZzjol3oSgwyRzUD41FpUZStYlSfxJp3Cg/+3uYrHXOwwMoJz9zFZP+00FSLg+LQ
O/Czt0+PHiAXucBtYyZGbca6H3wG9mtYnoVZjZ/3OPwDEQX0Z0uzyQKBgQDQia1s
20fGNu/nvG+uCfMBuIIFgxOFDU8DcHSxY319aR4T8PDAgDMatRGan/Z3fq9UZVJc
1uEGKADF4oIby6UvZkmE91y9P9NMe8v9m4ay2c9mhsb9GSfuxzjYtpqU2VmL0NtS
L81lOM8Ft/Ze/Lq1hIzerIv48zEadwpIGFPTEwKBgA2FLHFA9okufv/pPShqP0tb
7CiKruxECbqicdZSv1JwwXRxIce1VsmCm0A1KfAEOzYSspKCBKbuuAnoXJtqUfgs
dfFOkM9DgvQoRg8wcwP3JI39Rqji9wSVtfukEyy/5JOkNuGt5O4U5sjOmV97y56p
+P3NlBzBbXyEoUEqeVq7AoGAcTTL3AICZEm1byvHmP4WNCJVNjE2e22f6+yXHnfW
aKbHL2WKAhm1V6Qff9MuBlq280zyMmYFIJlfgV20WDtZrARTy9a9UMrga1klmE0d
PITeTZmfOPBmIoloAJ2kX26s9CDsFqw8TORTIFyNDv45eHsn5pguulgh+GyB7J99
9j0CgYEAyGZMjSWopO6+Ul7BBSHVvB8qW9nGdea4cAhGS+k5e3gRygudciaMu8ed
WiSQiF1CYBmzjCErE5jgyp/INhZFyBsw3MgLwhxpS3hpHcmaqLTh6SlN0ok0eIMC
5vmqvzEfrzWugBLH/Ju2caFyPyGo662Ef7fXVx0TulRSqYL9Cz0=
-----END RSA PRIVATE KEY-----
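
The .pem file contains the private key in clear text, so it is worth protecting it before using it to reach an instance. A minimal sketch, assuming the instance ends up reachable on a floating IP (the <floating-ip> placeholder and the cirros user are assumptions, to be adapted to the image):

[root@hostnamedab ~(keystone_admin)]# chmod 600 my_keypair.pem
[root@hostnamedab ~(keystone_admin)]# ssh -i my_keypair.pem cirros@<floating-ip>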

Here is the template in HOT format. It adds the association of a floating IP, security groups, an SSH key, and the modification of a file inside the instance. Some parameters can be chosen when the stack is launched.

[root@hostnamedab ~(keystone_admin)]# cat heat_example.hot
heat_template_version: 2013-05-23

description: Simple template to deploy a single compute instance and set motd

parameters:
  key_name:
    type: string
    label: Key Name
    description: Name of key-pair to be used for compute instance
  image:
    type: string
    label: Image
    description: Image to be used for compute instance
  instance_type:
    type: string
    label: Instance Type
    description: Type of instance (flavor) to be used
    default: m1.small
    constraints:
      - allowed_values: [m1.small, m1.medium, m1.large]
        description: instance_type must be one of m1.small, m1.medium or m1.large
  motd:
    type: string
    description: Message of the day

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      image: { get_param: image }
      flavor: { get_param: instance_type }
      networks:
        - port: { get_resource: my_port }
      user_data:
        str_replace:
          template: |
            #!/bin/sh
            echo 'File initialized from Heat template' > /etc/motd
            echo '$motd_from_param' >> /etc/motd
          params:
            $motd_from_param: { get_param: motd }

  my_port:
    type: OS::Neutron::Port
    properties:
      network_id: 00bcfcc4-236e-40bd-ba54-74c85ae0d05e
      fixed_ips:
        - subnet_id: efab7729-96ca-4b04-9ab7-3fd6d7c1d22b
      security_groups: [ 9aa64316-fb7b-4449-8d75-f3246e15bcb9, 8dab352c-55c9-4a34-a482-5cdab5bdb743 ]

  my_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: 8cce6638-d41f-4b58-8549-2a10f3bf7b06
      port_id: { get_resource: my_port }
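
Before launching, the template can be checked against the Heat engine. A quick sketch, assuming the template-validate subcommand is available in the installed python-heatclient:

[root@hostnamedab ~(keystone_admin)]# heat template-validate --template-file=heat_example.hot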

We can now launch the stack, passing the parameters to Heat.

[root@hostnamedab ~(keystone_admin)]# heat stack-create my_stack1 --template-file=heat_example.hot --parameters="key_name=my_keypair;instance_type=m1.small;image=fedora20;motd=Patapouf"
+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| 10ade449-82e1-4074-ba29-01dcb09dbf2b | my_stack2  | CREATE_IN_PROGRESS | 2014-04-03T15:43:45Z |
+--------------------------------------+------------+--------------------+----------------------+
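
Once the stack reaches CREATE_COMPLETE, the floating IP allocated by the stack can be read from the resource, and the whole stack can be deleted when the test is over. A sketch (using the stack name passed on the command line and the resource names defined in the template):

[root@hostnamedab ~(keystone_admin)]# heat resource-show my_stack1 my_floating_ip
[root@hostnamedab ~(keystone_admin)]# heat stack-delete my_stack1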
No Comments on OpenStack: A usage example to introduce Heat

OpenStack: A usage example to introduce Glance

Written by admin on  Categories: OpenStack Tags: , , , , , ,

Glance is the image management stack of OpenStack (the equivalent of templates in VMware). Its role is to store, organize and serve system images to the instances.

A Glance image is an operating system image that can come in different formats and be stored in different ways. You can thus find Windows Server images in RAW format, RHEL images in QCOW2 format, or a live Linux in ISO format. Alongside the system itself, some minimum requirements for the surrounding envelope (such as minimum disk and RAM) can be attached to the image.
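
In practice, images often have to be converted to the target format before being uploaded. A minimal sketch with qemu-img (the file names are placeholders):

[root@hostnamedab ~(keystone_admin)]# qemu-img convert -f raw -O qcow2 win2008.raw win2008.qcow2
[root@hostnamedab ~(keystone_admin)]# qemu-img info win2008.qcow2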

Glance supports several types of storage backends, for example: file systems, Object Storage (Swift), S3, HTTP (read-only), RBD, GridFS.

For more details, see the project itself on openstack.org.

Context of the example

For this POC, we will mostly use x86 virtual machines.

  +-------------------------+           +--------------------------+
  |                         |           |                          |
  |                         |           |                          |
  |     controller01        |           |       compute01          |
  |                         |           |                          |
  |                         |           |                          |
  |  eth0     eth2    eth3  |           |  eth3    eth2      eth0  |
  +-------------------------+           +--------------------------+
      |        |       |                    |       |         |
      |        |       |                    |       |         |
      |        |       |   Réseau "privé"   |       |         |
      |        |       +--------------------+       |         |
      |        |           Réseau "public"          |         |
      |        +------------------------------------+         |
      |                    Réseau "admin"                     |
      +-------------------------------------------------------+

The example we are going to walk through is very simple: it consists in creating an image in Glance and using it to boot an instance. We will also see how to create an image from a volume using Cinder's volume-to-image feature.

To keep things simple, all operations are performed with the admin account, even though some of them could be done as a plain user of the “admin” tenant.

Prerequisites

Since OpenStack deployment methods vary a lot, we will not detail here how to set up the service. We assume the service is “up and running”: a working deployment with the following stacks: keystone, glance, nova, horizon and neutron.
We also have the service's administration rights loaded in the command-line environment.

[root@hostnamedab ~(keystone_admin)]# keystone service-list
+----------------------------------+------------+----------------+----------------------------+
|                id                |    name    |      type      |        description         |
+----------------------------------+------------+----------------+----------------------------+
| b0bee0b0e9f34f8bafd4ba7d54ba3d6e | ceilometer |    metering    | Openstack Metering Service |
| 2a06e498c2b84cb48ebd578f6fa48297 |   cinder   |     volume     |       Cinder Service       |
| 14fa9ec07e34443bba5daac33266671f | cinder_v2  |    volumev2    |     Cinder Service v2      |
| 1f4e441ee6d5489281d3aa8d64e2a746 |   glance   |     image      |  Openstack Image Service   |
| d189a66300e04e9b8ac8cacad3eca3a1 |    heat    | orchestration  |          Heat API          |
| f96774576d8846d7bdd04ec9ccefabb5 |  heat-cfn  | cloudformation |  Heat CloudFormation API   |
| 9365681a0e3945e2806e83d85b8319bf |  keystone  |    identity    | OpenStack Identity Service |
| f13396b4b11c45baa59f9de685f25020 |  neutron   |    network     | Neutron Networking Service |
| 6cf6626654b04b89a988483fb566508d |    nova    |    compute     | Openstack Compute Service  |
| f783eff435804e449d529ef6d03745bf |  nova_ec2  |      ec2       |        EC2 Service         |
+----------------------------------+------------+----------------+----------------------------+
[root@hostnamedab ~(keystone_admin)]# nova service-list
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host        | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-scheduler   | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-conductor   | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:24.000000 | None            |
| nova-cert        | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-compute     | hostnamedbj | nova     | enabled | up    | 2014-02-26T14:29:28.000000 | None            |
| nova-console     | hostnamedab | internal | enabled | down  | 2014-02-26T09:30:20.000000 | None            |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

Composition of the Glance stack

Glance is one of the simplest OpenStack services. It consists of an API service (glance-api) and a catalog service (glance-registry).
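
Both are plain HTTP services. A quick sketch to check that they are listening on their default ports (9292 for glance-api, 9191 for glance-registry, as seen in the configuration below):

[root@hostnamedab ~(keystone_admin)]# netstat -ltnp | grep -E '9292|9191'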

Configuration

This is not the place to discuss the configuration in depth, but here is nevertheless an extract of the most important settings, to set the context.

[root@hostnamedab ~(keystone_admin)]# cat /etc/glance/glance-registry.conf | grep -v "^#" |grep -v "^$"
[DEFAULT]
verbose=False
debug=False
bind_host = 0.0.0.0
bind_port = 9191
backlog = 4096
sql_connection=mysql://glance:patapouf@192.168.41.129/glance
sql_idle_timeout = 3600
api_limit_max = 1000
limit_param_default = 25
use_syslog = False
[keystone_authtoken]
auth_host=192.168.41.129
auth_port=35357
auth_protocol=http
admin_tenant_name=services
admin_user=glance
admin_password=patapouf
auth_uri=http://192.168.41.129:5000/
[paste_deploy]
flavor=keystone
[root@hostnamedab ~(keystone_admin)]# cat /etc/glance/glance-cache.conf | grep -v "^#" |grep -v "^$"
[DEFAULT]
verbose=False
debug=False
image_cache_stall_time = 86400
image_cache_invalid_entry_grace_period = 3600
image_cache_max_size = 10737418240
registry_host = 0.0.0.0
registry_port = 9191
auth_url = http://localhost:5000/v2.0
admin_tenant_name = services
admin_user = glance
admin_password = patapouf
filesystem_store_datadir = /var/lib/glance/images/
swift_store_auth_version = 2
swift_store_auth_address = 127.0.0.1:5000/v2.0/
swift_store_user = jdoe:jdoe
swift_store_key = a86850deb2742ec3cb41518e26aa2d89
swift_store_container = glance
swift_store_create_container_on_put = False
swift_store_large_object_size = 5120
swift_store_large_object_chunk_size = 200
swift_enable_snet = False
s3_store_host = 127.0.0.1:8080/v1.0/
s3_store_access_key = 
s3_store_secret_key = 
s3_store_bucket = glance
s3_store_create_bucket_on_put = False

Example 1: Creating an image by simple upload

Fetching the cirros image. This image is often used for OpenStack tests because it is very small and provides the basic features needed for testing.

[root@hostnamedab ~(keystone_admin)]# wget -q http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

Uploading the image.

[root@hostnamedab ~(keystone_admin)]# glance image-create --name cirros-3.2 --disk-format qcow2 --container-format bare --is-public True --file cirros-0.3.2-x86_64-disk.img
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 64d7c1cd2b6f60c92c14662941cb7913     |
| container_format | bare                                 |
| created_at       | 2014-03-24T15:11:13                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 38de0608-74fd-47c3-8839-e0d839711171 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-3.2                           |
| owner            | 5f8ffb039ce844bc94ba031be85e0936     |
| protected        | False                                |
| size             | 13167616                             |
| status           | active                               |
| updated_at       | 2014-03-24T15:11:14                  |
+------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# glance image-list
+--------------------------------------+------------+-------------+------------------+----------+--------+
| ID                                   | Name       | Disk Format | Container Format | Size     | Status |
+--------------------------------------+------------+-------------+------------------+----------+--------+
| 38de0608-74fd-47c3-8839-e0d839711171 | cirros-3.2 | qcow2       | bare             | 13167616 | active |
+--------------------------------------+------------+-------------+------------------+----------+--------+
[root@hostnamedab ~(keystone_admin)]# glance image-show cirros-3.2
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 64d7c1cd2b6f60c92c14662941cb7913     |
| container_format | bare                                 |
| created_at       | 2014-03-24T15:11:13                  |
| deleted          | False                                |
| disk_format      | qcow2                                |
| id               | 38de0608-74fd-47c3-8839-e0d839711171 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-3.2                           |
| owner            | 5f8ffb039ce844bc94ba031be85e0936     |
| protected        | False                                |
| size             | 13167616                             |
| status           | active                               |
| updated_at       | 2014-03-24T15:11:14                  |
+------------------+--------------------------------------+
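
The stored image can also be fetched back from Glance, for example to verify its checksum (the checksum column above is an MD5 sum). A sketch, assuming image-download is available in the installed python-glanceclient:

[root@hostnamedab ~(keystone_admin)]# glance image-download --file /tmp/cirros-copy.img cirros-3.2
[root@hostnamedab ~(keystone_admin)]# md5sum /tmp/cirros-copy.img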

The properties can be updated afterwards.

[root@hostnamedab ~(keystone_admin)]# glance image-update --min-disk 5 --min-ram 1024 cirros-3.2
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 64d7c1cd2b6f60c92c14662941cb7913     |
| container_format | bare                                 |
| created_at       | 2014-03-24T15:11:13                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 38de0608-74fd-47c3-8839-e0d839711171 |
| is_public        | True                                 |
| min_disk         | 5                                    |
| min_ram          | 1024                                 |
| name             | cirros-3.2                           |
| owner            | 5f8ffb039ce844bc94ba031be85e0936     |
| protected        | False                                |
| size             | 13167616                             |
| status           | active                               |
| updated_at       | 2014-03-24T15:29:20                  |
+------------------+--------------------------------------+

The same image is visible from nova.

[root@hostnamedab ~(keystone_admin)]# nova image-list
+--------------------------------------+------------+--------+--------+
| ID                                   | Name       | Status | Server |
+--------------------------------------+------------+--------+--------+
| 38de0608-74fd-47c3-8839-e0d839711171 | cirros-3.2 | ACTIVE |        |
+--------------------------------------+------------+--------+--------+
[root@hostnamedab ~(keystone_admin)]# nova image-show cirros-3.2
+----------------------+--------------------------------------+
| Property             | Value                                |
+----------------------+--------------------------------------+
| status               | ACTIVE                               |
| updated              | 2014-03-24T15:29:20Z                 |
| name                 | cirros-3.2                           |
| created              | 2014-03-24T15:11:13Z                 |
| minDisk              | 5                                    |
| progress             | 100                                  |
| minRam               | 1024                                 |
| OS-EXT-IMG-SIZE:size | 13167616                             |
| id                   | 38de0608-74fd-47c3-8839-e0d839711171 |
+----------------------+--------------------------------------+

Now that everything is ready, we can boot an instance from this image.

[root@hostnamedab ~(keystone_admin)]# nova boot --flavor m1.small --image cirros-3.2 --security-groups allowall --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e instance2
+--------------------------------------+--------------------------------------+
| Property                             | Value                                |
+--------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | cirros-3.2                           |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000010                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.small                             |
| id                                   | d49d6328-289d-4a17-a3d6-7847dfe3fdec |
| security_groups                      | [{u'name': u'allowall'}]             |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-03-24T16:07:51Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | instance2                            |
| adminPass                            | TL6U3BvSo8PQ                         |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936     |
| created                              | 2014-03-24T16:07:51Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+--------------------------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# nova show instance2
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| status                               | ACTIVE                                                   |
| updated                              | 2014-03-24T16:08:58Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | hostnamedbj                                              |
| key_name                             | None                                                     |
| image                                | cirros-3.2 (38de0608-74fd-47c3-8839-e0d839711171)        |
| hostId                               | 67a93b4953c7cf7ac992a4c27f8551f70aa7e113df364523a225460f |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000010                                        |
| OS-SRV-USG:launched_at               | 2014-03-24T16:08:58.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hostnamedbj.dsit.sncf.fr                                 |
| flavor                               | m1.small (2)                                             |
| id                                   | d49d6328-289d-4a17-a3d6-7847dfe3fdec                     |
| security_groups                      | [{u'name': u'allowall'}]                                 |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d                         |
| name                                 | instance2                                                |
| created                              | 2014-03-24T16:07:51Z                                     |
| mynettenant network                  | 192.168.165.2                                            |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | []                                                       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+--------------------------------------+----------------------------------------------------------+

Example 2: Creating an image from a volume

Quite often, the easiest way to build a Glance image is to install a system the traditional way, configure it with all the prerequisites, and then convert the disk into an image. Cinder can convert a volume directly into a Glance image once everything is ready on the guest side.

[root@hostnamedab ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| a3ac8866-2af5-4372-914e-e0546f8212d6 | available |   my_disk    |  1   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@hostnamedab ~(keystone_admin)]# cinder upload-to-image --disk-format qcow2 a3ac8866-2af5-4372-914e-e0546f8212d6 my_image
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|   container_format  |                 bare                 |
|     disk_format     |                qcow2                 |
| display_description |                 None                 |
|          id         | a3ac8866-2af5-4372-914e-e0546f8212d6 |
|       image_id      | d9fd5a31-9756-4172-9736-5ab088b81b25 |
|      image_name     |               my_image               |
|         size        |                  1                   |
|        status       |              uploading               |
|      updated_at     |      2014-03-25T09:22:29.000000      |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# glance image-show my_image
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 739eb10663a518363f6d007e2adcfacc     |
| container_format | bare                                 |
| created_at       | 2014-03-25T14:54:22                  |
| deleted          | False                                |
| disk_format      | qcow2                                |
| id               | d9fd5a31-9756-4172-9736-5ab088b81b25 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | my_image                             |
| owner            | 5f8ffb039ce844bc94ba031be85e0936     |
| protected        | False                                |
| size             | 393216                               |
| status           | active                               |
| updated_at       | 2014-03-25T14:54:30                  |
+------------------+--------------------------------------+
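
The image produced from the volume behaves like any other Glance image. A sketch of booting from it, reusing the network and security group of Example 1 (the instance name is illustrative):

[root@hostnamedab ~(keystone_admin)]# nova boot --flavor m1.small --image my_image --security-groups allowall --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e instance3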
No Comments on OpenStack: A usage example to introduce Glance

OpenStack: A usage example to introduce Cinder

Written by admin on  Categories: OpenStack Tags: , , , , , ,

Cinder is the block storage management stack of OpenStack. It is the implementation of SDS (Software Defined Storage) in the open source IaaS project.
Its role is thus to talk to various storage components in order to assemble them and make them controllable through software. For more details, see the project itself on openstack.org.

Context of the example

For this POC, we will mostly use x86 virtual machines. We will not use the drivers for EMC, NetApp or other hardware arrays.

  +-------------------------+           +--------------------------+
  |                         |           |                          |
  |                         |           |                          |
  |     controller01        |           |       compute01          |
  |                         |           |                          |
  |                         |           |                          |
  |  eth0     eth2    eth3  |           |  eth3    eth2      eth0  |
  +-------------------------+           +--------------------------+
      |        |       |                    |       |         |
      |        |       |                    |       |         |
      |        |       |   Réseau "privé"   |       |         |
      |        |       +--------------------+       |         |
      |        |           Réseau "public"          |         |
      |        +------------------------------------+         |
      |                    Réseau "admin"                     |
      +-------------------------------------------------------+

The example we are going to walk through is very simple: it consists in attaching a volume to an instance and taking a snapshot.

To keep things simple, all operations are performed with the admin account, even though some of them could be done as a plain user of the “admin” tenant.

Prerequisites

Since OpenStack deployment methods vary a lot, we will not detail here how to set up the service. We assume the service is “up and running”: a working deployment with the following stacks: keystone, glance, nova, horizon, cinder and neutron.
We also have the service's administration rights loaded in the command-line environment.

[root@hostnamedab ~(keystone_admin)]# keystone service-list
+----------------------------------+------------+----------------+----------------------------+
|                id                |    name    |      type      |        description         |
+----------------------------------+------------+----------------+----------------------------+
| b0bee0b0e9f34f8bafd4ba7d54ba3d6e | ceilometer |    metering    | Openstack Metering Service |
| 2a06e498c2b84cb48ebd578f6fa48297 |   cinder   |     volume     |       Cinder Service       |
| 14fa9ec07e34443bba5daac33266671f | cinder_v2  |    volumev2    |     Cinder Service v2      |
| 1f4e441ee6d5489281d3aa8d64e2a746 |   glance   |     image      |  Openstack Image Service   |
| d189a66300e04e9b8ac8cacad3eca3a1 |    heat    | orchestration  |          Heat API          |
| f96774576d8846d7bdd04ec9ccefabb5 |  heat-cfn  | cloudformation |  Heat CloudFormation API   |
| 9365681a0e3945e2806e83d85b8319bf |  keystone  |    identity    | OpenStack Identity Service |
| f13396b4b11c45baa59f9de685f25020 |  neutron   |    network     | Neutron Networking Service |
| 6cf6626654b04b89a988483fb566508d |    nova    |    compute     | Openstack Compute Service  |
| f783eff435804e449d529ef6d03745bf |  nova_ec2  |      ec2       |        EC2 Service         |
+----------------------------------+------------+----------------+----------------------------+
[root@hostnamedab ~(keystone_admin)]# nova service-list
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host        | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-scheduler   | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-conductor   | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:24.000000 | None            |
| nova-cert        | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-compute     | hostnamedbj | nova     | enabled | up    | 2014-02-26T14:29:28.000000 | None            |
| nova-console     | hostnamedab | internal | enabled | down  | 2014-02-26T09:30:20.000000 | None            |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

Composition of the Cinder stack

Cinder is made up of several parts (a quick health check is sketched after this list):

  •   cinder-api exposes the RESTful interface
  •   cinder-scheduler dispatches requests according to a set of rules
  •   cinder-volume is the service that drives the block devices; it is therefore in charge of the connection to the various backends
  •   cinder-backup provides a backup service that saves a volume to a suitable backend, often Swift
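
A minimal sketch of such a health check, assuming the installed cinder client exposes the service-list subcommand:

[root@hostnamedab ~(keystone_admin)]# cinder service-list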

Configuration

This is not the place to discuss the configuration in depth, but here is nevertheless an extract of the most important settings, to set the context.

[root@hostnamedab ~(keystone_admin)]# cat /etc/cinder/cinder.conf | grep -v "^#" |grep -v "^$"
[DEFAULT]
osapi_volume_listen=0.0.0.0
api_paste_config=/etc/cinder/api-paste.ini
glance_host=192.168.41.129
auth_strategy=keystone
debug=False
verbose=False
use_syslog=False
rpc_backend=cinder.openstack.common.rpc.impl_qpid
control_exchange=cinder
qpid_hostname=192.168.41.129
qpid_port=5672
qpid_username=guest
qpid_password=guest
qpid_heartbeat=60
qpid_protocol=tcp
qpid_tcp_nodelay=True
iscsi_ip_address=192.168.41.129
iscsi_helper=tgtadm
volume_group=cinder-volumes
sql_connection=mysql://cinder:patapouf@192.168.41.129/cinder
qpid_reconnect_timeout=0
qpid_reconnect_limit=0
qpid_reconnect=True
qpid_reconnect_interval_max=0
qpid_reconnect_interval_min=0
sql_idle_timeout=3600
qpid_reconnect_interval=0
notification_driver=cinder.openstack.common.notifier.rpc_notifier
[root@hostnamedab ~(keystone_admin)]# cat /etc/tgt/targets.conf | grep -v "^#" |grep -v "^$"
include /etc/cinder/volumes/*
default-driver iscsi
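
Since this setup uses the default LVM backend driven through tgtadm (volume_group=cinder-volumes, iscsi_helper=tgtadm), the backing volume group can be inspected directly on the node running cinder-volume. A quick sketch:

[root@hostnamedab ~(keystone_admin)]# vgs cinder-volumes
[root@hostnamedab ~(keystone_admin)]# lvs cinder-volumes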

Example 1: Attaching a volume to an instance

Creating a 1 GB volume.

[root@hostnamedab ~(keystone_admin)]# cinder create --display_name my_disk 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-03-24T16:31:31.006458      |
| display_description |                 None                 |
|     display_name    |               my_disk                |
|          id         | a3ac8866-2af5-4372-914e-e0546f8212d6 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# cinder show my_disk
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|           created_at           |      2014-03-24T16:31:31.000000      |
|      display_description       |                 None                 |
|          display_name          |               my_disk                |
|               id               | a3ac8866-2af5-4372-914e-e0546f8212d6 |
|            metadata            |                  {}                  |
|     os-vol-host-attr:host      |             hostnamedab              |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   5f8ffb039ce844bc94ba031be85e0936   |
|              size              |                  1                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |              available               |
|          volume_type           |                 None                 |
+--------------------------------+--------------------------------------+

We list the instances to find where to attach our volume.

[root@hostnamedab ~(keystone_admin)]# nova list
+--------------------------------------+-----------+--------+------------+-------------+---------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                  |
+--------------------------------------+-----------+--------+------------+-------------+---------------------------+
| d49d6328-289d-4a17-a3d6-7847dfe3fdec | instance2 | ACTIVE | None       | Running     | mynettenant=192.168.165.2 |
+--------------------------------------+-----------+--------+------------+-------------+---------------------------+

We attach the volume to the instance “instance2”.

[root@hostnamedab ~(keystone_admin)]# nova volume-attach instance2 a3ac8866-2af5-4372-914e-e0546f8212d6 auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| serverId | d49d6328-289d-4a17-a3d6-7847dfe3fdec |
| id       | a3ac8866-2af5-4372-914e-e0546f8212d6 |
| volumeId | a3ac8866-2af5-4372-914e-e0546f8212d6 |
+----------+--------------------------------------+

We check that the volume is indeed attached to the instance.

[root@hostnamedab ~(keystone_admin)]# nova show instance2
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| status                               | ACTIVE                                                   |
| updated                              | 2014-03-24T16:08:58Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | hostnamedbj                                              |
| key_name                             | None                                                     |
| image                                | cirros-3.2 (38de0608-74fd-47c3-8839-e0d839711171)        |
| hostId                               | 67a93b4953c7cf7ac992a4c27f8551f70aa7e113df364523a225460f |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000010                                        |
| OS-SRV-USG:launched_at               | 2014-03-24T16:08:58.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hostnamedbj.dsit.sncf.fr                                 |
| flavor                               | m1.small (2)                                             |
| id                                   | d49d6328-289d-4a17-a3d6-7847dfe3fdec                     |
| security_groups                      | [{u'name': u'allowall'}]                                 |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d                         |
| name                                 | instance2                                                |
| created                              | 2014-03-24T16:07:51Z                                     |
| mynettenant network                  | 192.168.165.2                                            |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'a3ac8866-2af5-4372-914e-e0546f8212d6'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+--------------------------------------+----------------------------------------------------------+

A quick check inside the instance.

# dmesg
...
[ 1582.110496] pci 0000:00:06.0: [1af4:1001] type 0 class 0x000100
[ 1582.110920] pci 0000:00:06.0: reg 10: [io  0x0000-0x003f]
[ 1582.110999] pci 0000:00:06.0: reg 14: [mem 0x00000000-0x00000fff]
[ 1582.121229] pci 0000:00:06.0: BAR 1: assigned [mem 0x80000000-0x80000fff]
[ 1582.121591] pci 0000:00:06.0: BAR 1: set to [mem 0x80000000-0x80000fff] (PCI address [0x80000000-0x80000fff])
[ 1582.121708] pci 0000:00:06.0: BAR 0: assigned [io  0x1000-0x103f]
[ 1582.121761] pci 0000:00:06.0: BAR 0: set to [io  0x1000-0x103f] (PCI address [0x1000-0x103f])
[ 1582.123504] pci 0000:00:00.0: no hotplug settings from platform
[ 1582.124831] pci 0000:00:00.0: using default PCI settings
[ 1582.125182] pci 0000:00:01.0: no hotplug settings from platform
[ 1582.125716] pci 0000:00:01.0: using default PCI settings
[ 1582.125798] ata_piix 0000:00:01.1: no hotplug settings from platform
[ 1582.126316] ata_piix 0000:00:01.1: using default PCI settings
[ 1582.126454] uhci_hcd 0000:00:01.2: no hotplug settings from platform
[ 1582.126941] uhci_hcd 0000:00:01.2: using default PCI settings
[ 1582.127020] pci 0000:00:01.3: no hotplug settings from platform
[ 1582.127587] pci 0000:00:01.3: using default PCI settings
[ 1582.127671] pci 0000:00:02.0: no hotplug settings from platform
[ 1582.128260] pci 0000:00:02.0: using default PCI settings
[ 1582.128697] virtio-pci 0000:00:03.0: no hotplug settings from platform
[ 1582.129373] virtio-pci 0000:00:03.0: using default PCI settings
[ 1582.129464] virtio-pci 0000:00:04.0: no hotplug settings from platform
[ 1582.129973] virtio-pci 0000:00:04.0: using default PCI settings
[ 1582.130110] virtio-pci 0000:00:05.0: no hotplug settings from platform
[ 1582.130658] virtio-pci 0000:00:05.0: using default PCI settings
[ 1582.130742] pci 0000:00:06.0: no hotplug settings from platform
[ 1582.131172] pci 0000:00:06.0: using default PCI settings
[ 1582.161881] virtio-pci 0000:00:06.0: enabling device (0000 -> 0003)
[ 1582.166882] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 11
[ 1582.167771] virtio-pci 0000:00:06.0: PCI INT A -> Link[LNKB] -> GSI 11 (level, high) -> IRQ 11
[ 1582.167945] virtio-pci 0000:00:06.0: setting latency timer to 64
[ 1582.171272] virtio-pci 0000:00:06.0: irq 45 for MSI/MSI-X
[ 1582.171440] virtio-pci 0000:00:06.0: irq 46 for MSI/MSI-X
[ 1582.209451]  vdb: unknown partition table

Example 2: Using a snapshot

On the disk attached above, we will write a marker before the snapshot and another one after it, then verify the rollback.

# echo "Avant snapshot" > /dev/vdb
# dd if=/dev/vdb bs=1 count=14
Avant snapshot14+0 records in
14+0 records out

Now we take the snapshot from cinder. The --force True option is there to force cinder to take the snapshot even though the volume is still attached.

[root@hostnamedab ~(keystone_admin)]# cinder snapshot-create --force True --display_name my_snap a3ac8866-2af5-4372-914e-e0546f8212d6
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|      created_at     |      2014-03-24T17:00:25.619336      |
| display_description |                 None                 |
|     display_name    |               my_snap                |
|          id         | 074dedff-4e7d-45d4-ad7e-ff529886dbf3 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|        status       |               creating               |
|      volume_id      | a3ac8866-2af5-4372-914e-e0546f8212d6 |
+---------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+--------------+------+
|                  ID                  |              Volume ID               |   Status  | Display Name | Size |
+--------------------------------------+--------------------------------------+-----------+--------------+------+
| 074dedff-4e7d-45d4-ad7e-ff529886dbf3 | a3ac8866-2af5-4372-914e-e0546f8212d6 | available |   my_snap    |  1   |
+--------------------------------------+--------------------------------------+-----------+--------------+------+

We perform a new write.

# echo "Apres snapshot" > /dev/vdb
# dd if=/dev/vdb bs=1 count=14
Apres snapshot14+0 records in
14+0 records out

To go back to the snapshot, we need to create a volume from the snapshot and replace the instance's old volume with the newly created one.

[root@hostnamedab ~(keystone_admin)]# nova volume-detach instance2 a3ac8866-2af5-4372-914e-e0546f8212d6
[root@hostnamedab ~(keystone_admin)]# cinder create --snapshot-id 074dedff-4e7d-45d4-ad7e-ff529886dbf3 --display-name my_disk_from_snap 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-03-25T09:51:29.181605      |
| display_description |                 None                 |
|     display_name    |          my_disk_from_snap           |
|          id         | 695e15df-90f5-46e1-b8c5-e4118d60acdb |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     | 074dedff-4e7d-45d4-ad7e-ff529886dbf3 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# cinder show my_disk_from_snap
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|           created_at           |      2014-03-25T09:51:29.000000      |
|      display_description       |                 None                 |
|          display_name          |          my_disk_from_snap           |
|               id               | 695e15df-90f5-46e1-b8c5-e4118d60acdb |
|            metadata            |                  {}                  |
|     os-vol-host-attr:host      |             hostnamedab              |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   5f8ffb039ce844bc94ba031be85e0936   |
|              size              |                  1                   |
|          snapshot_id           | 074dedff-4e7d-45d4-ad7e-ff529886dbf3 |
|          source_volid          |                 None                 |
|             status             |              available               |
|          volume_type           |                 None                 |
+--------------------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# nova volume-attach instance2 695e15df-90f5-46e1-b8c5-e4118d60acdb auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| serverId | d49d6328-289d-4a17-a3d6-7847dfe3fdec |
| id       | 695e15df-90f5-46e1-b8c5-e4118d60acdb |
| volumeId | 695e15df-90f5-46e1-b8c5-e4118d60acdb |
+----------+--------------------------------------+

We can now check inside the instance that the disk is present and that we find the “Avant snapshot” message again.

# dmesg 
...
[60197.575691] pci 0000:00:05.0: no hotplug settings from platform
[60197.575983] pci 0000:00:05.0: using default PCI settings
[60197.596853] virtio-pci 0000:00:05.0: enabling device (0000 -> 0003)
[60197.597769] virtio-pci 0000:00:05.0: PCI INT A -> Link[LNKA] -> GSI 10 (level, high) -> IRQ 10
[60197.598403] virtio-pci 0000:00:05.0: setting latency timer to 64
[60197.605214] virtio-pci 0000:00:05.0: irq 42 for MSI/MSI-X
[60197.605395] virtio-pci 0000:00:05.0: irq 43 for MSI/MSI-X
[60197.658638]  vdb: unknown partition table
# dd if=/dev/vdb bs=1 count=14
Avant snapshot14+0 records in
14+0 records out
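
To clean up after the test, the volumes and the snapshot can be removed once detached. A sketch (order matters, since a snapshot prevents the deletion of its source volume):

[root@hostnamedab ~(keystone_admin)]# nova volume-detach instance2 695e15df-90f5-46e1-b8c5-e4118d60acdb
[root@hostnamedab ~(keystone_admin)]# cinder delete my_disk_from_snap
[root@hostnamedab ~(keystone_admin)]# cinder snapshot-delete my_snap
[root@hostnamedab ~(keystone_admin)]# cinder delete my_disk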
No Comments on OpenStack: A usage example to introduce Cinder

OpenStack: A usage example to introduce Neutron

Written by admin on  Categories: OpenStack Tags: , , , , , ,

Neutron (formerly called Quantum) is the network management stack of OpenStack. It is the implementation of SDN (Software Defined Network) in the open source IaaS project.
Its role is thus to talk to various network components in order to assemble them and make them controllable through software. For more details, see the project itself on openstack.org.

Context of the example

For this POC, we will mostly use x86 virtual machines. We will not use the drivers for Cisco, Juniper or other hardware.

  +-------------------------+           +--------------------------+
  |                         |           |                          |
  |                         |           |                          |
  |     controller01        |           |       compute01          |
  |                         |           |                          |
  |                         |           |                          |
  |  eth0     eth2    eth3  |           |  eth3    eth2      eth0  |
  +-------------------------+           +--------------------------+
      |        |       |                    |       |         |
      |        |       |                    |       |         |
      |        |       |   Réseau "privé"   |       |         |
      |        |       +--------------------+       |         |
      |        |           Réseau "public"          |         |
      |        +------------------------------------+         |
      |                    Réseau "admin"                     |
      +-------------------------------------------------------+

The example we are going to walk through is very simple: it consists in spawning a VM on a private network, allowing it to reach the outside, and also allowing the outside to reach the VM.

       +-+                                +-+
       | |                                | |
       | |                                | |
       |V|                                |V|
       |L|                                |L|
       |A|                                |A|
       |N|                                |N|
       | |            +-------+           | |         +------------+
       | |            |       |           | |         |            |
       |P|------------|router1|-----------|P|---------| instance1  |
       |U|            |       |           |R|         |            |
       |B|            +-------+           |I|         +------------+
       |L|                                |V|
       |I|                                |E|
       |C|                                | |
       | |                                | |
       +-+                                +-+
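
This diagram maps to a handful of Neutron objects. A minimal sketch of the CLI calls that build such a topology (the names are illustrative and the external network is assumed to already exist under the name public):

[root@hostnamedab ~(keystone_admin)]# neutron net-create mynettenant
[root@hostnamedab ~(keystone_admin)]# neutron subnet-create --name mysubnet mynettenant 192.168.165.0/24
[root@hostnamedab ~(keystone_admin)]# neutron router-create router1
[root@hostnamedab ~(keystone_admin)]# neutron router-gateway-set router1 public
[root@hostnamedab ~(keystone_admin)]# neutron router-interface-add router1 mysubnet
[root@hostnamedab ~(keystone_admin)]# neutron floatingip-create public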

To keep things simple, all operations are performed with the admin account, even though some of them could be done as a plain user of the “admin” tenant.

Prerequisites

Since OpenStack deployment methods vary a lot, we will not detail here how to set up the service. We assume the service is up and running, with the following stacks in place: keystone, glance, nova, horizon and, of course, neutron.
We also have the admin credentials of the service loaded into the command-line environment.
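
With a packstack-style deployment (the “(keystone_admin)” suffix in the prompts below suggests one), loading these credentials usually just means sourcing the generated rc file; the exact file name here is an assumption:

source ~/keystonerc_admin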

[root@hostnamedab ~(keystone_admin)]# keystone service-list
+----------------------------------+------------+----------------+----------------------------+
|                id                |    name    |      type      |        description         |
+----------------------------------+------------+----------------+----------------------------+
| b0bee0b0e9f34f8bafd4ba7d54ba3d6e | ceilometer |    metering    | Openstack Metering Service |
| 2a06e498c2b84cb48ebd578f6fa48297 |   cinder   |     volume     |       Cinder Service       |
| 14fa9ec07e34443bba5daac33266671f | cinder_v2  |    volumev2    |     Cinder Service v2      |
| 1f4e441ee6d5489281d3aa8d64e2a746 |   glance   |     image      |  Openstack Image Service   |
| d189a66300e04e9b8ac8cacad3eca3a1 |    heat    | orchestration  |          Heat API          |
| f96774576d8846d7bdd04ec9ccefabb5 |  heat-cfn  | cloudformation |  Heat CloudFormation API   |
| 9365681a0e3945e2806e83d85b8319bf |  keystone  |    identity    | OpenStack Identity Service |
| f13396b4b11c45baa59f9de685f25020 |  neutron   |    network     | Neutron Networking Service |
| 6cf6626654b04b89a988483fb566508d |    nova    |    compute     | Openstack Compute Service  |
| f783eff435804e449d529ef6d03745bf |  nova_ec2  |      ec2       |        EC2 Service         |
+----------------------------------+------------+----------------+----------------------------+
[root@hostnamedab ~(keystone_admin)]# nova service-list
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host        | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-scheduler   | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-conductor   | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:24.000000 | None            |
| nova-cert        | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-compute     | hostnamedbj | nova     | enabled | up    | 2014-02-26T14:29:28.000000 | None            |
| nova-console     | hostnamedab | internal | enabled | down  | 2014-02-26T09:30:20.000000 | None            |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

Composition of the Neutron stack

Neutron has a modular architecture, based on agents to which parts of the service are delegated.

  • neutron-server: overall management of the service
  • neutron-metadata-agent: proxies the instances’ metadata requests to the other OpenStack components
  • neutron-rootwrap: handles privilege escalation
  • neutron-openvswitch-agent: takes care of all the Open vSwitch plumbing
  • neutron-lbaas-agent: manages the load balancers
  • neutron-dhcp-agent: manages the DHCP servers
  • neutron-l3-agent: drives everything at layer 3 (OSI), notably the implementation of routers through network namespaces

Depending on each node’s role, only some of these components need to be present.
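
To check which agents are actually running and on which node, the neutron CLI provides an agent listing; for example (replace <agent-id> with an id taken from the first command's output):

neutron agent-list
neutron agent-show <agent-id>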

Configuration

This is not the place to discuss the configuration in depth, but here is an extract of the most important settings anyway, to set the context.

[root@hostnamedab ~(keystone_admin)]# cat /etc/neutron/neutron.conf | grep -v "^#" |grep -v "^$"
[DEFAULT]
debug = False
verbose = True
use_syslog = False
log_dir =/var/log/neutron
bind_host = 0.0.0.0
bind_port = 9696
core_plugin =neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
service_plugins =neutron.services.loadbalancer.plugin.LoadBalancerPlugin
auth_strategy = keystone
base_mac = fa:16:3e:00:00:00
mac_generation_retries = 16
dhcp_lease_duration = 120
allow_bulk = True
allow_overlapping_ips = True
rpc_backend = neutron.openstack.common.rpc.impl_qpid
control_exchange = neutron
qpid_hostname = 192.168.41.129
qpid_port = 5672
qpid_username = guest
qpid_password = guest
qpid_heartbeat = 60
qpid_protocol = tcp
qpid_tcp_nodelay = True
dhcp_agents_per_network = 1
api_workers = 0
qpid_reconnect_limit=0
qpid_reconnect_interval_max=0
qpid_reconnect_timeout=0
qpid_reconnect=True
qpid_reconnect_interval_min=0
qpid_reconnect_interval=0
[quotas]
[agent]
[keystone_authtoken]
auth_host = 192.168.41.129
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = neutron
admin_password = patapouf
auth_uri=http://192.168.41.129:5000/
[database]
connection = mysql://neutron:patapouf@192.168.41.129/ovs_neutron
max_retries = 10
retry_interval = 10
idle_timeout = 3600
[service_providers]
[AGENT]
root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[root@hostnamedab neutron(keystone_admin)]# cat /etc/neutron/l3_agent.ini | grep -v "^#" |grep -v "^$"
[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
handle_internal_only_routers = True
external_network_bridge = br-ex
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
[root@hostnamedab neutron(keystone_admin)]# cat /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini | grep -v "^#" |grep -v "^$"
[ovs]
[agent]
[securitygroup]
[OVS]
tunnel_id_ranges=1:1000
tenant_network_type=gre
local_ip=192.168.44.129
enable_tunneling=True
integration_bridge=br-int
tunnel_bridge=br-tun
[AGENT]
polling_interval=2
[SECURITYGROUP]
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@hostnamedab neutron(keystone_admin)]# cat /etc/nova/nova.conf | grep -v "^#" |grep -v "^$" | grep neutron
service_neutron_metadata_proxy=True
neutron_metadata_proxy_shared_secret=patapouf
neutron_default_tenant_id=default
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://192.168.41.129:9696
neutron_url_timeout=30
neutron_admin_username=neutron
neutron_admin_password=patapouf
neutron_admin_tenant_name=services
neutron_region_name=RegionOne
neutron_admin_auth_url=http://192.168.41.129:35357/v2.0
neutron_auth_strategy=keystone
neutron_ovs_bridge=br-int
neutron_extension_sync_interval=600
security_group_api=neutron
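
As a quick sanity check of this configuration, the Open vSwitch side can be inspected directly on each node; assuming the OVS tools are installed, something like:

ovs-vsctl show                   # should list br-int and br-tun (plus br-ex on the network node)
ovs-vsctl list-ports br-int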

Creating a tenant network and attaching an instance

Retrieving the admin tenant id.

[root@hostnamedab ~(keystone_admin)]# keystone tenant-list | grep admin | awk '{print $2;}'
5f8ffb039ce844bc94ba031be85e0936
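
As a small convenience (not part of the original run), the id can also be captured into a shell variable and reused in the commands that follow:

TENANT_ID=$(keystone tenant-list | awk '/ admin / {print $2}')
neutron net-create --tenant-id "$TENANT_ID" mytenantnet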

Creating the mytenantnet network and its subnet.

[root@hostnamedab ~(keystone_admin)]# neutron net-create --tenant-id 5f8ffb039ce844bc94ba031be85e0936 mytenantnet
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 00bcfcc4-236e-40bd-ba54-74c85ae0d05e |
| name                      | mytenantnet                          |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1                                    |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 5f8ffb039ce844bc94ba031be85e0936     |
+---------------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# neutron subnet-create --tenant-id 5f8ffb039ce844bc94ba031be85e0936 mytenantnet 192.168.165.0/24 --gateway 192.168.165.1
Created a new subnet:
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| allocation_pools | {"start": "192.168.165.2", "end": "192.168.165.254"} |
| cidr             | 192.168.165.0/24                                     |
| dns_nameservers  |                                                      |
| enable_dhcp      | True                                                 |
| gateway_ip       | 192.168.165.1                                        |
| host_routes      |                                                      |
| id               | efab7729-96ca-4b04-9ab7-3fd6d7c1d22b                 |
| ip_version       | 4                                                    |
| name             |                                                      |
| network_id       | 00bcfcc4-236e-40bd-ba54-74c85ae0d05e                 |
| tenant_id        | 5f8ffb039ce844bc94ba031be85e0936                     |
+------------------+------------------------------------------------------+

Booting an instance with a network attachment on mytenantnet.

[root@hostnamedab ~(keystone_admin)]# nova boot --flavor m1.small --image cirros --security-groups allowall --nic net-id=00bcfcc4-236e-40bd-ba54-74c85ae0d05e instance1
+--------------------------------------+--------------------------------------+
| Property                             | Value                                |
+--------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | cirros                               |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000003                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.small                             |
| id                                   | efcb16e5-b815-4b6d-af4f-930e3830036e |
| security_groups                      | [{u'name': u'allowall'}]             |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-02-26T16:10:38Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | instance1                            |
| adminPass                            | rKkadB45VXRV                         |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936     |
| created                              | 2014-02-26T16:10:37Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+--------------------------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# nova show instance1
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| status                               | ACTIVE                                                   |
| updated                              | 2014-02-26T16:11:17Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | hostnamedbj                                              |
| key_name                             | None                                                     |
| image                                | cirros (3257ad97-ac1e-4059-afb1-ad0de2aa01b1)            |
| hostId                               | 67a93b4953c7cf7ac992a4c27f8551f70aa7e113df364523a225460f |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000003                                        |
| OS-SRV-USG:launched_at               | 2014-02-26T16:11:17.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hostnamedbj.dsit.sncf.fr                                 |
| flavor                               | m1.small (2)                                             |
| id                                   | efcb16e5-b815-4b6d-af4f-930e3830036e                     |
| security_groups                      | [{u'name': u'allowall'}]                                 |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d                         |
| name                                 | instance1                                                |
| created                              | 2014-02-26T16:10:37Z                                     |
| mytenantnet network                  | 192.168.165.2                                            |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | []                                                       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+--------------------------------------+----------------------------------------------------------+
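
At this stage the instance only has its private address. On the network node it should already be reachable through the DHCP namespace that Neutron created for mytenantnet (use_namespaces = True above); a quick check could look like:

ip netns | grep qdhcp            # expect qdhcp-00bcfcc4-236e-40bd-ba54-74c85ae0d05e
ip netns exec qdhcp-00bcfcc4-236e-40bd-ba54-74c85ae0d05e ping -c 2 192.168.165.2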

Creating the public network and the router, and wiring them up

Declaring the public network.

[root@hostnamedab ~(keystone_admin)]# neutron net-create public -- --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 8cce6638-d41f-4b58-8549-2a10f3bf7b06 |
| name                      | public                               |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 2                                    |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 5f8ffb039ce844bc94ba031be85e0936     |
+---------------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# neutron subnet-create public --allocation-pool start=10.6.27.150,end=10.6.27.249 --gateway 10.6.27.1 --enable_dhcp=False 10.6.27.0/24
Created a new subnet:
+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "10.6.27.150", "end": "10.6.27.249"} |
| cidr             | 10.6.27.0/24                                   |
| dns_nameservers  |                                                |
| enable_dhcp      | False                                          |
| gateway_ip       | 10.6.27.1                                      |
| host_routes      |                                                |
| id               | 67ddd6df-b592-4d9e-9906-34e93563eb2c           |
| ip_version       | 4                                              |
| name             |                                                |
| network_id       | 8cce6638-d41f-4b58-8549-2a10f3bf7b06           |
| tenant_id        | 5f8ffb039ce844bc94ba031be85e0936               |
+------------------+------------------------------------------------+

Creating the router and attaching it to both the public network and the tenant network.

[root@hostnamedab ~(keystone_admin)]# neutron router-create router1 --tenant-id 5f8ffb039ce844bc94ba031be85e0936
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 7c93ac79-fa36-490d-a57e-bd1768b1550f |
| name                  | router1                              |
| status                | ACTIVE                               |
| tenant_id             | 5f8ffb039ce844bc94ba031be85e0936     |
+-----------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# neutron router-gateway-set router1 public
Set gateway for router router1
[root@hostnamedab ~(keystone_admin)]# neutron router-interface-add router1 efab7729-96ca-4b04-9ab7-3fd6d7c1d22b
Added interface 7c158f46-923c-44cd-841b-06b9009c32e4 to router router1.
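
Under the hood, the l3-agent materializes router1 as a network namespace. Its interfaces and routes can be inspected with, for example:

ip netns | grep qrouter          # expect qrouter-7c93ac79-fa36-490d-a57e-bd1768b1550f
ip netns exec qrouter-7c93ac79-fa36-490d-a57e-bd1768b1550f ip a
ip netns exec qrouter-7c93ac79-fa36-490d-a57e-bd1768b1550f ip r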

Enabling the instance’s public IP

Creating a floating IP and associating it with the instance’s port.

[root@hostnamedab ~(keystone_admin)]# neutron floatingip-create public
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 10.6.27.151                          |
| floating_network_id | 8cce6638-d41f-4b58-8549-2a10f3bf7b06 |
| id                  | b649f35e-866f-4333-a048-b981be798c35 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 5f8ffb039ce844bc94ba031be85e0936     |
+---------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# neutron port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 0170ce1a-aa3a-4ef3-981d-e9fb8b3c4924 |      | fa:16:3e:7f:71:b5 | {"subnet_id": "efab7729-96ca-4b04-9ab7-3fd6d7c1d22b", "ip_address": "192.168.165.2"} |
| 2d96e1f5-e03c-4f96-98cb-79f386602859 |      | fa:16:3e:16:21:59 | {"subnet_id": "efab7729-96ca-4b04-9ab7-3fd6d7c1d22b", "ip_address": "192.168.165.3"} |
| 49c50a0f-f3c2-43ae-89a7-53b9acc82cf5 |      | fa:16:3e:6c:a7:22 | {"subnet_id": "67ddd6df-b592-4d9e-9906-34e93563eb2c", "ip_address": "10.6.27.150"}   |
| 5d4eb754-b3da-4df0-9c96-dd2e529ce839 |      | fa:16:3e:71:56:e1 | {"subnet_id": "67ddd6df-b592-4d9e-9906-34e93563eb2c", "ip_address": "10.6.27.151"}   |
| 7c158f46-923c-44cd-841b-06b9009c32e4 |      | fa:16:3e:51:d3:45 | {"subnet_id": "efab7729-96ca-4b04-9ab7-3fd6d7c1d22b", "ip_address": "192.168.165.1"} |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
[root@hostnamedab ~(keystone_admin)]# neutron floatingip-associate b649f35e-866f-4333-a048-b981be798c35 0170ce1a-aa3a-4ef3-981d-e9fb8b3c4924
Associated floatingip b649f35e-866f-4333-a048-b981be798c35
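
The association can be double-checked from the API, and the corresponding NAT rules should now be visible in the router namespace:

neutron floatingip-show b649f35e-866f-4333-a048-b981be798c35
ip netns exec qrouter-7c93ac79-fa36-490d-a57e-bd1768b1550f iptables -t nat -S | grep 10.6.27.151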

Test

We simply open an SSH connection to the instance to check that it is reachable.

pjbt05841@hostnamedug:~$ ssh cirros@10.6.27.151
Warning: Permanently added '10.6.27.151' (RSA) to the list of known hosts.
cirros@10.6.27.151's password:
$ ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:7f:71:b5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.165.2/24 brd 192.168.165.255 scope global eth0
    inet6 fe80::f816:3eff:fe7f:71b5/64 scope link
       valid_lft forever preferred_lft forever
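
From the same session, outbound connectivity can be checked as well, for instance by pinging the public gateway:

$ ping -c 3 10.6.27.1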

Reboot with kexec

Written by admin on April 2, 2014 Categories: Ligne de commande, Linux Tags: ,

kexec -l /boot/vmlinuz-2.6.18-194.11.4.el5 --initrd=/boot/initrd-2.6.18-194.11.4.el5.img --reuse-cmdline
kexec -e

Variant:

kexec -l /boot/vmlinuz-2.6.18-194.11.4.el5 --initrd=/boot/initrd-2.6.18-194.11.4.el5.img --command-line="$( cat /proc/cmdline )"
kexec -e

A gentler approach: load the kernel with kexec, then let a normal reboot handle the clean shutdown and the proper umounts:

kexec -l /boot/vmlinuz-2.6.18-194.11.4.el5 --initrd=/boot/initrd-2.6.18-194.11.4.el5.img --reuse-cmdline
reboot
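
A generic variant (a sketch, assuming the initrd follows the usual initrd-<version>.img naming of this distribution) that reuses the currently running kernel:

kexec -l /boot/vmlinuz-$(uname -r) --initrd=/boot/initrd-$(uname -r).img --reuse-cmdline
reboot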

Source:

https://wiki.archlinux.org/index.php/kexec
http://fedoraproject.org/wiki/Kernel/kexec
http://www.linux.com/community/blogs/129-servers/413862


[TMUX] Ctrl + Arrow to navigate between windows with PuTTY

Written by admin on March 11, 2014 Categories: screen/tmux Tags: , , , ,

I struggled with this for a while… yet all it took was reading the tmux FAQ…

PuTTY muddies the waters; adding this line to the tmux configuration rewrites the key sequences of the arrow keys when they are used with Ctrl.

set -g terminal-overrides "xterm*:kLFT5=\eOD:kRIT5=\eOC:kUP5=\eOA:kDN5=\eOB:smkx@:rmkx@"
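
These overrides only make tmux understand the sequences PuTTY sends; the window-switching bindings themselves are not part of the original note, but a plausible pairing would be:

bind-key -n C-Left previous-window
bind-key -n C-Right next-window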

Source:

http://sourceforge.net/p/tmux/tmux-code/ci/master/tree/FAQ


Run a command on every save

Written by admin on March 5, 2014 Categories: vim Tags: , ,

:autocmd BufWritePost * !cat <afile> | nc remotehost 4242

Many other possibilities exist.
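
For instance (a hypothetical variant, not taken from the sources below), re-running the current shell script on every save:

:autocmd BufWritePost *.sh !bash <afile>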

Sources:

http://stackoverflow.com/questions/4627701/vim-how-to-execute-automatically-execute-a-shell-command-after-saving-a-file

http://stackoverflow.com/questions/601039/vim-save-and-run-at-the-same-time


Configuring the dashboard and VNC over the public network

Written by admin on February 26, 2014 Categories: OpenStack Tags: , ,

In /etc/openstack-dashboard/local_settings, the ALLOWED_HOSTS line must contain the host name used in the URL.

ALLOWED_HOSTS = ['192.168.41.129', 'localhost', '10.6.27.129']

In /etc/nova/nova.conf, set the controller’s public IP:

novncproxy_base_url=http://10.6.27.129:6080/vnc_auto.html
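
For these changes to take effect, the affected services have to be restarted; on an RDO-style installation the service names would be something like this (an assumption, adjust to your distribution):

service httpd restart
service openstack-nova-novncproxy restart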

Source:

http://docs.openstack.org/developer/nova/runnova/vncconsole.html
