OpenStack: a worked example to introduce Cinder

Written by admin on April 23, 2014. Categories: OpenStack

Cinder is the block-storage management stack of OpenStack. It is the implementation of SDS (Software Defined Storage) in the open source IaaS project.
Its role is to talk to the various storage back ends, assemble them, and make them drivable in software. For more details, see the project itself on openstack.org.

Context of the example

For this POC we will mainly use x86 virtual machines. We will not use the drivers that pilot hardware arrays from EMC, NetApp or others.

  +-------------------------+           +--------------------------+
  |                         |           |                          |
  |                         |           |                          |
  |     controller01        |           |       compute01          |
  |                         |           |                          |
  |                         |           |                          |
  |  eth0     eth2    eth3  |           |  eth3    eth2      eth0  |
  +-------------------------+           +--------------------------+
      |        |       |                    |       |         |
      |        |       |                    |       |         |
      |        |       |  "private" network |       |         |
      |        |       +--------------------+       |         |
      |        |          "public" network          |         |
      |        +------------------------------------+         |
      |                    "admin" network                    |
      +-------------------------------------------------------+

The example itself will be very simple: attach a volume to an instance and take a snapshot.

To keep things simple, all operations will be performed with the admin account, even though some of them could be done as a plain user of the “admin” tenant.

Prerequisites

Since OpenStack deployment methods vary a lot, we will not detail here how to set up the service. We assume the service is up and running: a working deployment with the following stacks in place: keystone, glance, nova, horizon, cinder and neutron.
We also have the administration credentials for the service loaded into the command-line environment.
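As an illustration, those credentials are typically loaded by sourcing a keystonerc_admin file, such as the one generated by packstack. The values below are placeholders, not the ones from this deployment:

```shell
# Hypothetical keystonerc_admin; adapt every value to your own deployment.
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=secret
export OS_AUTH_URL=http://192.168.41.129:5000/v2.0/
# Prompt marker, so you can see which credentials a shell is carrying.
export PS1='[\u@\h \W(keystone_admin)]\$ '
```

Sourcing this file is what produces the `(keystone_admin)` suffix visible in the prompts below.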

[root@hostnamedab ~(keystone_admin)]# keystone service-list
+----------------------------------+------------+----------------+----------------------------+
|                id                |    name    |      type      |        description         |
+----------------------------------+------------+----------------+----------------------------+
| b0bee0b0e9f34f8bafd4ba7d54ba3d6e | ceilometer |    metering    | Openstack Metering Service |
| 2a06e498c2b84cb48ebd578f6fa48297 |   cinder   |     volume     |       Cinder Service       |
| 14fa9ec07e34443bba5daac33266671f | cinder_v2  |    volumev2    |     Cinder Service v2      |
| 1f4e441ee6d5489281d3aa8d64e2a746 |   glance   |     image      |  Openstack Image Service   |
| d189a66300e04e9b8ac8cacad3eca3a1 |    heat    | orchestration  |          Heat API          |
| f96774576d8846d7bdd04ec9ccefabb5 |  heat-cfn  | cloudformation |  Heat CloudFormation API   |
| 9365681a0e3945e2806e83d85b8319bf |  keystone  |    identity    | OpenStack Identity Service |
| f13396b4b11c45baa59f9de685f25020 |  neutron   |    network     | Neutron Networking Service |
| 6cf6626654b04b89a988483fb566508d |    nova    |    compute     | Openstack Compute Service  |
| f783eff435804e449d529ef6d03745bf |  nova_ec2  |      ec2       |        EC2 Service         |
+----------------------------------+------------+----------------+----------------------------+
[root@hostnamedab ~(keystone_admin)]# nova service-list
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host        | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-scheduler   | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-conductor   | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:24.000000 | None            |
| nova-cert        | hostnamedab | internal | enabled | up    | 2014-02-26T14:29:25.000000 | None            |
| nova-compute     | hostnamedbj | nova     | enabled | up    | 2014-02-26T14:29:28.000000 | None            |
| nova-console     | hostnamedab | internal | enabled | down  | 2014-02-26T09:30:20.000000 | None            |
+------------------+-------------+----------+---------+-------+----------------------------+-----------------+

Components of the Cinder stack

Cinder is made of several parts:

  •   cinder-api exposes a RESTful interface
  •   cinder-scheduler dispatches requests according to configurable rules
  •   cinder-volume is the service that drives the block devices; it is in charge of connecting to the various back ends
  •   cinder-backup provides a backup service that saves a volume to a suitable back end, often swift
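Each of these components runs as its own daemon on the controller, so a quick sanity check is to look at the process list with something like `ps -ef | grep cinder-`. As a self-contained stand-in, here is the same filter applied to a captured sample of that output (the PIDs and timings are made up):

```shell
# Extract the cinder daemon names from a (sample) process listing.
# In real life, pipe `ps -ef` instead of this captured heredoc sample.
cat <<'EOF' | grep -oE 'cinder-(api|scheduler|volume|backup)' | sort -u
cinder  1234     1  0 10:00 ?  00:00:05 /usr/bin/python /usr/bin/cinder-api
cinder  1240     1  0 10:00 ?  00:00:02 /usr/bin/python /usr/bin/cinder-scheduler
cinder  1251     1  0 10:00 ?  00:00:09 /usr/bin/python /usr/bin/cinder-volume
EOF
```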

Configuration

This is not the place to discuss configuration in depth, but here is an extract of the most important settings, to set the context.

[root@hostnamedab ~(keystone_admin)]# cat /etc/cinder/cinder.conf | grep -v "^#" |grep -v "^$"
[DEFAULT]
osapi_volume_listen=0.0.0.0
api_paste_config=/etc/cinder/api-paste.ini
glance_host=192.168.41.129
auth_strategy=keystone
debug=False
verbose=False
use_syslog=False
rpc_backend=cinder.openstack.common.rpc.impl_qpid
control_exchange=cinder
qpid_hostname=192.168.41.129
qpid_port=5672
qpid_username=guest
qpid_password=guest
qpid_heartbeat=60
qpid_protocol=tcp
qpid_tcp_nodelay=True
iscsi_ip_address=192.168.41.129
iscsi_helper=tgtadm
volume_group=cinder-volumes
sql_connection=mysql://cinder:patapouf@192.168.41.129/cinder
qpid_reconnect_timeout=0
qpid_reconnect_limit=0
qpid_reconnect=True
qpid_reconnect_interval_max=0
qpid_reconnect_interval_min=0
sql_idle_timeout=3600
qpid_reconnect_interval=0
notification_driver=cinder.openstack.common.notifier.rpc_notifier
[root@hostnamedab ~(keystone_admin)]# cat /etc/tgt/targets.conf | grep -v "^#" |grep -v "^$"
include /etc/cinder/volumes/*
default-driver iscsi
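With this configuration (no backend driver specified, `volume_group=cinder-volumes`, `iscsi_helper=tgtadm`), cinder-volume uses the default LVM/iSCSI driver: each Cinder volume is backed by a logical volume carved out of the `cinder-volumes` volume group and exported over iSCSI by tgtd. The logical volumes follow the `volume-<volume id>` naming convention, which you would check on the controller with `lvs cinder-volumes`. A small sketch of that convention (the id is the one of the volume created in Example 1 below):

```shell
# LVM driver naming convention: one LV per Cinder volume, named
# "volume-<volume id>" inside the configured volume_group.
VOL_ID=a3ac8866-2af5-4372-914e-e0546f8212d6
echo "/dev/cinder-volumes/volume-${VOL_ID}"
```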

Example 1: Attaching a volume to an instance

Create a 1 GB volume.

[root@hostnamedab ~(keystone_admin)]# cinder create --display_name my_disk 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-03-24T16:31:31.006458      |
| display_description |                 None                 |
|     display_name    |               my_disk                |
|          id         | a3ac8866-2af5-4372-914e-e0546f8212d6 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# cinder show my_disk
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|           created_at           |      2014-03-24T16:31:31.000000      |
|      display_description       |                 None                 |
|          display_name          |               my_disk                |
|               id               | a3ac8866-2af5-4372-914e-e0546f8212d6 |
|            metadata            |                  {}                  |
|     os-vol-host-attr:host      |             hostnamedab              |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   5f8ffb039ce844bc94ba031be85e0936   |
|              size              |                  1                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |              available               |
|          volume_type           |                 None                 |
+--------------------------------+--------------------------------------+

List the instances to find where to attach our volume.

[root@hostnamedab ~(keystone_admin)]# nova list
+--------------------------------------+-----------+--------+------------+-------------+---------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                  |
+--------------------------------------+-----------+--------+------------+-------------+---------------------------+
| d49d6328-289d-4a17-a3d6-7847dfe3fdec | instance2 | ACTIVE | None       | Running     | mynettenant=192.168.165.2 |
+--------------------------------------+-----------+--------+------------+-------------+---------------------------+

Attach the volume to the instance “instance2”.

[root@hostnamedab ~(keystone_admin)]# nova volume-attach instance2 a3ac8866-2af5-4372-914e-e0546f8212d6 auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| serverId | d49d6328-289d-4a17-a3d6-7847dfe3fdec |
| id       | a3ac8866-2af5-4372-914e-e0546f8212d6 |
| volumeId | a3ac8866-2af5-4372-914e-e0546f8212d6 |
+----------+--------------------------------------+

Check that the volume is indeed attached to the instance.

[root@hostnamedab ~(keystone_admin)]# nova show instance2
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| status                               | ACTIVE                                                   |
| updated                              | 2014-03-24T16:08:58Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | hostnamedbj                                              |
| key_name                             | None                                                     |
| image                                | cirros-3.2 (38de0608-74fd-47c3-8839-e0d839711171)        |
| hostId                               | 67a93b4953c7cf7ac992a4c27f8551f70aa7e113df364523a225460f |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000010                                        |
| OS-SRV-USG:launched_at               | 2014-03-24T16:08:58.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hostnamedbj.dsit.sncf.fr                                 |
| flavor                               | m1.small (2)                                             |
| id                                   | d49d6328-289d-4a17-a3d6-7847dfe3fdec                     |
| security_groups                      | [{u'name': u'allowall'}]                                 |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | ab1435cbeb5d46829299525fc4b37c7d                         |
| name                                 | instance2                                                |
| created                              | 2014-03-24T16:07:51Z                                     |
| mynettenant network                  | 192.168.165.2                                            |
| tenant_id                            | 5f8ffb039ce844bc94ba031be85e0936                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'a3ac8866-2af5-4372-914e-e0546f8212d6'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+--------------------------------------+----------------------------------------------------------+

A quick check from inside the instance.

# dmesg
...
[ 1582.110496] pci 0000:00:06.0: [1af4:1001] type 0 class 0x000100
[ 1582.110920] pci 0000:00:06.0: reg 10: [io  0x0000-0x003f]
[ 1582.110999] pci 0000:00:06.0: reg 14: [mem 0x00000000-0x00000fff]
[ 1582.121229] pci 0000:00:06.0: BAR 1: assigned [mem 0x80000000-0x80000fff]
[ 1582.121591] pci 0000:00:06.0: BAR 1: set to [mem 0x80000000-0x80000fff] (PCI address [0x80000000-0x80000fff])
[ 1582.121708] pci 0000:00:06.0: BAR 0: assigned [io  0x1000-0x103f]
[ 1582.121761] pci 0000:00:06.0: BAR 0: set to [io  0x1000-0x103f] (PCI address [0x1000-0x103f])
[ 1582.123504] pci 0000:00:00.0: no hotplug settings from platform
[ 1582.124831] pci 0000:00:00.0: using default PCI settings
[ 1582.125182] pci 0000:00:01.0: no hotplug settings from platform
[ 1582.125716] pci 0000:00:01.0: using default PCI settings
[ 1582.125798] ata_piix 0000:00:01.1: no hotplug settings from platform
[ 1582.126316] ata_piix 0000:00:01.1: using default PCI settings
[ 1582.126454] uhci_hcd 0000:00:01.2: no hotplug settings from platform
[ 1582.126941] uhci_hcd 0000:00:01.2: using default PCI settings
[ 1582.127020] pci 0000:00:01.3: no hotplug settings from platform
[ 1582.127587] pci 0000:00:01.3: using default PCI settings
[ 1582.127671] pci 0000:00:02.0: no hotplug settings from platform
[ 1582.128260] pci 0000:00:02.0: using default PCI settings
[ 1582.128697] virtio-pci 0000:00:03.0: no hotplug settings from platform
[ 1582.129373] virtio-pci 0000:00:03.0: using default PCI settings
[ 1582.129464] virtio-pci 0000:00:04.0: no hotplug settings from platform
[ 1582.129973] virtio-pci 0000:00:04.0: using default PCI settings
[ 1582.130110] virtio-pci 0000:00:05.0: no hotplug settings from platform
[ 1582.130658] virtio-pci 0000:00:05.0: using default PCI settings
[ 1582.130742] pci 0000:00:06.0: no hotplug settings from platform
[ 1582.131172] pci 0000:00:06.0: using default PCI settings
[ 1582.161881] virtio-pci 0000:00:06.0: enabling device (0000 -> 0003)
[ 1582.166882] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 11
[ 1582.167771] virtio-pci 0000:00:06.0: PCI INT A -> Link[LNKB] -> GSI 11 (level, high) -> IRQ 11
[ 1582.167945] virtio-pci 0000:00:06.0: setting latency timer to 64
[ 1582.171272] virtio-pci 0000:00:06.0: irq 45 for MSI/MSI-X
[ 1582.171440] virtio-pci 0000:00:06.0: irq 46 for MSI/MSI-X
[ 1582.209451]  vdb: unknown partition table

Example 2: Using a snapshot

On the disk attached above, we will write one marker before the snapshot and one after, then verify the rollback.

# echo "Avant snapshot" > /dev/vdb
# dd if=/dev/vdb bs=1 count=14
Avant snapshot14+0 records in
14+0 records out

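The same marker trick can be tried anywhere; here is a self-contained version using a plain file standing in for /dev/vdb (note that redirecting into a regular file truncates it, while redirecting into a block device does not, which makes no difference for this sketch):

```shell
# Write a 14-byte marker at the start of a fake device, then read it back.
IMG=/tmp/fake_vdb.img
echo "Avant snapshot" > "$IMG"
dd if="$IMG" bs=1 count=14 2>/dev/null   # exactly 14 bytes, no trailing newline
```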
Now take the snapshot from cinder. The --force True option is there to force cinder to take the snapshot even though the volume is still attached.

[root@hostnamedab ~(keystone_admin)]# cinder snapshot-create --force True --display_name my_snap a3ac8866-2af5-4372-914e-e0546f8212d6
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|      created_at     |      2014-03-24T17:00:25.619336      |
| display_description |                 None                 |
|     display_name    |               my_snap                |
|          id         | 074dedff-4e7d-45d4-ad7e-ff529886dbf3 |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|        status       |               creating               |
|      volume_id      | a3ac8866-2af5-4372-914e-e0546f8212d6 |
+---------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+--------------+------+
|                  ID                  |              Volume ID               |   Status  | Display Name | Size |
+--------------------------------------+--------------------------------------+-----------+--------------+------+
| 074dedff-4e7d-45d4-ad7e-ff529886dbf3 | a3ac8866-2af5-4372-914e-e0546f8212d6 | available |   my_snap    |  1   |
+--------------------------------------+--------------------------------------+-----------+--------------+------+

Now write something new.

# echo "Apres snapshot" > /dev/vdb
# dd if=/dev/vdb bs=1 count=14
Apres snapshot14+0 records in
14+0 records out

To roll back to the snapshot, we create a volume from the snapshot and replace the instance’s old volume with the newly created one.

[root@hostnamedab ~(keystone_admin)]# nova volume-detach instance2 a3ac8866-2af5-4372-914e-e0546f8212d6
[root@hostnamedab ~(keystone_admin)]# cinder create --snapshot-id 074dedff-4e7d-45d4-ad7e-ff529886dbf3 --display-name my_disk_from_snap 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-03-25T09:51:29.181605      |
| display_description |                 None                 |
|     display_name    |          my_disk_from_snap           |
|          id         | 695e15df-90f5-46e1-b8c5-e4118d60acdb |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     | 074dedff-4e7d-45d4-ad7e-ff529886dbf3 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# cinder show my_disk_from_snap
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|           created_at           |      2014-03-25T09:51:29.000000      |
|      display_description       |                 None                 |
|          display_name          |          my_disk_from_snap           |
|               id               | 695e15df-90f5-46e1-b8c5-e4118d60acdb |
|            metadata            |                  {}                  |
|     os-vol-host-attr:host      |             hostnamedab              |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   5f8ffb039ce844bc94ba031be85e0936   |
|              size              |                  1                   |
|          snapshot_id           | 074dedff-4e7d-45d4-ad7e-ff529886dbf3 |
|          source_volid          |                 None                 |
|             status             |              available               |
|          volume_type           |                 None                 |
+--------------------------------+--------------------------------------+
[root@hostnamedab ~(keystone_admin)]# nova volume-attach instance2 695e15df-90f5-46e1-b8c5-e4118d60acdb auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| serverId | d49d6328-289d-4a17-a3d6-7847dfe3fdec |
| id       | 695e15df-90f5-46e1-b8c5-e4118d60acdb |
| volumeId | 695e15df-90f5-46e1-b8c5-e4118d60acdb |
+----------+--------------------------------------+
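The whole rollback sequence above can be condensed into a small script. In this sketch, cinder and nova are stubbed with shell functions so it runs without an OpenStack cloud (delete the two stubs to execute it for real); the ids are the ones from this walkthrough:

```shell
# Dry-run sketch of the rollback: detach old volume, create from snapshot, attach new.
cinder() { echo "cinder $*"; }   # stub: just echo the command
nova()   { echo "nova $*"; }     # stub: just echo the command

INSTANCE=instance2
OLD_VOL=a3ac8866-2af5-4372-914e-e0546f8212d6
SNAP=074dedff-4e7d-45d4-ad7e-ff529886dbf3

nova volume-detach "$INSTANCE" "$OLD_VOL"
cinder create --snapshot-id "$SNAP" --display-name my_disk_from_snap 1
# In a real run, parse the new volume id out of the create output;
# here we reuse the id obtained above.
NEW_VOL=695e15df-90f5-46e1-b8c5-e4118d60acdb
nova volume-attach "$INSTANCE" "$NEW_VOL" auto
```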

We can now check inside the instance that the disk is present and that we find the “Avant snapshot” marker again.

# dmesg 
...
[60197.575691] pci 0000:00:05.0: no hotplug settings from platform
[60197.575983] pci 0000:00:05.0: using default PCI settings
[60197.596853] virtio-pci 0000:00:05.0: enabling device (0000 -> 0003)
[60197.597769] virtio-pci 0000:00:05.0: PCI INT A -> Link[LNKA] -> GSI 10 (level, high) -> IRQ 10
[60197.598403] virtio-pci 0000:00:05.0: setting latency timer to 64
[60197.605214] virtio-pci 0000:00:05.0: irq 42 for MSI/MSI-X
[60197.605395] virtio-pci 0000:00:05.0: irq 43 for MSI/MSI-X
[60197.658638]  vdb: unknown partition table
# dd if=/dev/vdb bs=1 count=14
Avant snapshot14+0 records in
14+0 records out