Discussion:
[openstack-dev] [cinder] Can I use lvm thin provisioning in mitaka?
Marco Marino
2017-01-20 10:07:46 UTC
Hi, I'm trying to use cinder with LVM thin provisioning. It works well and
I'd like to know if there is some reason LVM thin should be avoided in the
mitaka release. I'm using it with
max_over_subscription_ratio = 1.0
so I don't have problems with over-subscription.
I'm using thin provisioning because it is fast (I think). More precisely, my
use case is:

- Create one bootable volume. This is a long operation because cinder
downloads the image from glance, qemu-img converts it to raw format and then
"dd" copies the image into the volume.
- Create a snapshot of the bootable volume. Really fast and reliable
because the original volume is not used by any VM.
- Create a new volume from the snapshot. This is faster than creating a new
bootable volume. (A rough CLI sketch follows below.)
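In CLI terms the flow is roughly this (just a sketch with placeholder names and
UUIDs; exact flags may differ depending on the cinderclient version):

  # 1. bootable volume from a glance image (slow: download + convert + dd)
  cinder create --image-id <glance-image-uuid> --name base-bootable 20

  # 2. snapshot of the bootable volume while it is not attached to any VM (fast)
  cinder snapshot-create --name base-snap <base-volume-uuid>

  # 3. new volume from the snapshot, one per new VM (fast)
  cinder create --snapshot-id <base-snapshot-uuid> --name vm-volume-01 20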

Is this usage correct? Can I deploy it in a production environment (mitaka on
CentOS 7)?
Thank you
Chris Friesen
2017-01-20 16:54:26 UTC
For what it's worth, we're using cinder with LVM thin provisioning in production
with no overprovisioning.

What you're proposing should work; you're basically caching the vanilla image as
a cinder snapshot.

If you wish to speed up volume deletion, you can set "volume_clear=none" in the
cinder.conf file.
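For example, in the LVM backend section of cinder.conf, something like this
(section name and volume group are just placeholders, adjust for your
deployment):

  [lvm-thin]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_group = cinder-volumes
  lvm_type = thin
  volume_clear = none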

Be aware that LVM thin provisioning will see a performance penalty the first
time you write to a given disk block in a volume, because it needs to allocate a
new block, zero it out, then write the new data to it.

Chris
Duncan Thomas
2017-01-20 17:24:50 UTC
There's also cinder functionality called the 'generic image cache' that
does this for you; see the (per-backend) config options:
image_volume_cache_enabled, image_volume_cache_max_size_gb and
image_volume_cache_max_count
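Roughly, in cinder.conf (values are just examples; if I remember right the
cache also needs the cinder internal tenant options set so it has somewhere to
own the cached volumes):

  [DEFAULT]
  cinder_internal_tenant_project_id = <project-uuid>
  cinder_internal_tenant_user_id = <user-uuid>

  [lvm-backend]
  image_volume_cache_enabled = True
  image_volume_cache_max_size_gb = 200
  image_volume_cache_max_count = 50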
Marco Marino
2017-01-21 09:00:42 UTC
Thank you very much!! It's difficult for me to find help on cinder and I think
this is the right place!
@Duncan, if my goal is speeding up bootable volume creation, I can avoid
using thin provisioning. I can use the image cache, and in this way the
"retrieve from glance" and "qemu-img convert to RAW" parts will be
skipped. Is this correct? And with this method I don't have the performance
penalty mentioned by Chris.
@Chris: Yes, I'm using the volume_clear option and volume deletion is very fast

Marco
Chris Friesen
2017-01-23 16:21:17 UTC
Just to be clear, you should not use "volume_clear=none" unless you are using
thin provisioning or you do not care about security.

If you have "volume_clear=none" with thick LVM, then newly created cinder
volumes may contain data written to the disk via other cinder volumes that were
later deleted.

Chris
Marco Marino
2017-01-23 17:29:01 UTC
At the moment I have:
volume_clear=zero
volume_clear_size=30 <-- only the first 30 MiB are zeroed, so the MBR is wiped!
with thick provisioning.
I think this can be a good solution in my case. Let me know what you
think about this.
Thank you
Marco
Chris Friesen
2017-01-23 17:36:51 UTC
If security is not a concern, then that's fine.

Chris
