Discussion:
[openstack-dev] [nova][ironic] ironic_host_manager and baremetal scheduler options removal
Matt Riedemann
2018-05-02 16:25:25 UTC
The baremetal scheduling options were deprecated in Pike [1] and the
ironic_host_manager was deprecated in Queens [2] and is now being
removed [3]. Deployments must use resource classes now for baremetal
scheduling. [4]
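For anyone who hasn't made the switch yet, the flow from [4] boils down
to tagging each node with a resource class and pointing the flavor at
the matching custom resource class. Roughly (the "baremetal.gold" name
is just an example):

    # Tag the ironic node with a resource class.
    openstack baremetal node set --resource-class baremetal.gold <node-uuid>

    # Have the flavor request one unit of the matching custom resource
    # class (ironic's "baremetal.gold" shows up in Placement as
    # "CUSTOM_BAREMETAL_GOLD") and zero out the standard resources.
    openstack flavor set my-baremetal-flavor \
        --property resources:CUSTOM_BAREMETAL_GOLD=1 \
        --property resources:VCPU=0 \
        --property resources:MEMORY_MB=0 \
        --property resources:DISK_GB=0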

A large host_subset_size value is also no longer needed. [5]
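(That's the [filter_scheduler]/host_subset_size option, which ironic
deployments used to crank up so that concurrent scheduling requests
wouldn't keep picking the same node, e.g.:

    [filter_scheduler]
    # historically sized to roughly the number of ironic nodes
    host_subset_size = 500

With resource class based scheduling the default of 1 is fine.)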

I've gone through all of the references to "ironic_host_manager" that I
could find in codesearch.o.o and updated projects accordingly [6].

Please reply ASAP to this thread and/or [3] if you have issues with this.

[1] https://review.openstack.org/#/c/493052/
[2] https://review.openstack.org/#/c/521648/
[3] https://review.openstack.org/#/c/565805/
[4]
https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html#scheduling-based-on-resource-classes
[5] https://review.openstack.org/565736/
[6]
https://review.openstack.org/#/q/topic:exact-filters+(status:open+OR+status:merged)
--
Thanks,

Matt

Mathieu Gagné
2018-05-02 16:40:56 UTC
What's the state of caching_scheduler, which could still be using those configs?
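By that I mean deployments that still have something along these lines
in nova.conf (modern option names; older releases spell them
differently):

    [scheduler]
    driver = caching_scheduler
    host_manager = ironic_host_manager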

Mathieu
Post by Matt Riedemann
The baremetal scheduling options were deprecated in Pike [1] and the
ironic_host_manager was deprecated in Queens [2] and is now being removed
[3]. Deployments must use resource classes now for baremetal scheduling. [4]
A large host_subset_size value is also no longer needed. [5]
I've gone through all of the references to "ironic_host_manager" that I
could find in codesearch.o.o and updated projects accordingly [6].
Please reply ASAP to this thread and/or [3] if you have issues with this.
[1] https://review.openstack.org/#/c/493052/
[2] https://review.openstack.org/#/c/521648/
[3] https://review.openstack.org/#/c/565805/
[4]
https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html#scheduling-based-on-resource-classes
[5] https://review.openstack.org/565736/
[6]
https://review.openstack.org/#/q/topic:exact-filters+(status:open+OR+status:merged)
Matt Riedemann
2018-05-02 16:49:46 UTC
Post by Mathieu Gagné
What's the state of caching_scheduler, which could still be using those configs?
The CachingScheduler has been deprecated since Pike [1]. We discussed
the CachingScheduler at the Rocky PTG in Dublin [2] and have a TODO to
write a nova-manage data migration tool that creates allocations in
Placement for instances which were scheduled using the CachingScheduler
(since Pike) and therefore don't have their own resource allocations in
Placement. (Remember that starting in Pike, the FilterScheduler began
creating allocations in Placement itself rather than leaving that to
the ResourceTracker in nova-compute.)
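
Roughly speaking, for each such instance the tool has to write an
allocation against the compute node's resource provider. In placement
REST terms it boils down to something like this (microversion 1.12
payload; the UUIDs and resource class are placeholders, and the real
tool will of course do this internally rather than via curl):

    TOKEN=$(openstack token issue -f value -c id)
    curl -X PUT "$PLACEMENT_ENDPOINT/allocations/$INSTANCE_UUID" \
        -H "X-Auth-Token: $TOKEN" \
        -H "Content-Type: application/json" \
        -H "OpenStack-API-Version: placement 1.12" \
        -d '{"allocations":
                 {"'$RP_UUID'": {"resources": {"CUSTOM_BAREMETAL_GOLD": 1}}},
             "project_id": "'$PROJECT_ID'",
             "user_id": "'$USER_ID'"}'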

If you're running computes that are Ocata or Newton, then the
ResourceTracker in the nova-compute service should be creating the
allocations in Placement for you, assuming you have the compute service
configured to talk to Placement (optional in Newton, required in Ocata).
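
"Configured to talk to Placement" here means a [placement] section in
nova.conf on the computes, along these lines (values are obviously
site-specific):

    [placement]
    auth_type = password
    auth_url = http://controller/identity/v3
    project_name = service
    project_domain_name = Default
    username = placement
    user_domain_name = Default
    password = secret
    region_name = RegionOne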

[1] https://review.openstack.org/#/c/492210/
[2] https://etherpad.openstack.org/p/nova-ptg-rocky-placement
--
Thanks,

Matt
Mathieu Gagné
2018-05-02 17:00:46 UTC
Post by Mathieu Gagné
What's the state of caching_scheduler, which could still be using those configs?
The CachingScheduler has been deprecated since Pike [1]. We discussed
the CachingScheduler at the Rocky PTG in Dublin [2] and have a TODO to
write a nova-manage data migration tool that creates allocations in
Placement for instances which were scheduled using the CachingScheduler
(since Pike) and therefore don't have their own resource allocations in
Placement. (Remember that starting in Pike, the FilterScheduler began
creating allocations in Placement itself rather than leaving that to
the ResourceTracker in nova-compute.)
If you're running computes that are Ocata or Newton, then the
ResourceTracker in the nova-compute service should be creating the
allocations in Placement for you, assuming you have the compute service
configured to talk to Placement (optional in Newton, required in Ocata).
[1] https://review.openstack.org/#/c/492210/
[2] https://etherpad.openstack.org/p/nova-ptg-rocky-placement
If one can still run the CachingScheduler (even if it's deprecated), I
think we shouldn't remove the above options. Otherwise you can end up
with a broken setup and, IIUC, no way to migrate to Placement, since
the migration script has yet to be written.

--
Mathieu

Matt Riedemann
2018-05-02 17:39:03 UTC
Post by Mathieu Gagné
If one can still run the CachingScheduler (even if it's deprecated), I
think we shouldn't remove the above options. Otherwise you can end up
with a broken setup and, IIUC, no way to migrate to Placement, since
the migration script has yet to be written.
You're currently on cells v1 on Mitaka, right? So you have some time
to get this sorted out before getting to Rocky, where the
IronicHostManager is dropped.

I know you're just one case, but I don't know how many people are
really running the CachingScheduler with ironic either, so it might be
rare. It would be nice to get other operator input here; for example,
I'm guessing CERN has their cells carved up so that certain cells serve
only baremetal requests while other cells serve only VMs?

FWIW, I think we can also backport the data migration CLI to stable
branches once it's available, so you can do your migration in, say,
Queens before getting to Rocky.
--
Thanks,

Matt
Mathieu Gagné
2018-05-02 17:48:06 UTC
Post by Matt Riedemann
I know you're just one case, but I don't know how many people are
really running the CachingScheduler with ironic either, so it might be
rare. It would be nice to get other operator input here; for example,
I'm guessing CERN has their cells carved up so that certain cells serve
only baremetal requests while other cells serve only VMs?
I found the FilterScheduler to be nearly impossible to use with ironic
due to the huge number of hypervisors it had to handle. Using the
CachingScheduler made a huge difference, like night and day.
Post by Matt Riedemann
FWIW, I think we can also backport the data migration CLI to stable
branches once it's available, so you can do your migration in, say,
Queens before getting to Rocky.
--
Mathieu
Matt Riedemann
2018-05-03 00:47:01 UTC
Post by Matt Riedemann
FWIW, I think we can also backport the data migration CLI to stable
branches once it's available, so you can do your migration in, say,
Queens before getting to Rocky.
FYI, here is the start on the data migration CLI:

https://review.openstack.org/#/c/565886/
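
Once that merges, the hope is that healing is a one-liner per
deployment, something like (hypothetical until the review settles on a
name):

    nova-manage placement heal_allocations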
--
Thanks,

Matt
