Team Meeting Agenda for July 9, 2024
We will meet with this agenda on July 9, 2024 at 19:00 UTC in #opendev-meeting:

== Agenda for next meeting ==

* Announcements
** Clarkb out next week through Wednesday.
* Actions from last meeting
* Specs Review
* Topics
** Upgrading Old Servers (clarkb 20230627)
*** https://etherpad.opendev.org/p/opendev-bionic-server-upgrades
**** wiki.openstack.org: https://etherpad.opendev.org/p/opendev-mediawiki-upgrade
**** tonyb looking at cacti after wiki
*** https://etherpad.opendev.org/p/opendev-focal-server-upgrades
**** tonyb expects to try a simple focal -> noble upgrade this week
** AFS Mirror cleanups (clarkb 20240220)
*** Ubuntu Xenial cleanups are starting to show up under topic:drop-ubuntu-xenial
*** CentOS 8 Stream EOLd and jobs can no longer successfully run there. Cleanup is happening under topic:drop-centos-8-stream
**** What do we think about a forceful removal of centos 8 stream fips jobs at this point?
*** Can follow up with webserver log processing to determine which other mirrors may be dead.
** Gitea 1.22 Upgrade Planning (clarkb 20240528)
*** https://review.opendev.org/c/opendev/system-config/+/920580
*** https://104.130.219.4:3081/opendev/system-config 1.22.1 Held node
*** Clarkb is thinking we can upgrade to 1.22.1 and then figure out the db correction after the upgrade.
** Etherpad 2.1.1 Upgrade (clarkb 20240709)
*** https://review.opendev.org/c/opendev/system-config/+/923661
** Testing Rackspace's New Cloud Offering (clarkb 20240604)
*** Clarkb is still waiting to hear back from rax on whether or not a meeting works to discuss this further and, if so, when.
** Drop x/* projects with config errors from zuul (frickler 20240706)
*** Patch proposed at https://review.opendev.org/c/openstack/project-config/+/923509
*** Send a final warning to the ML first? Or is there a volunteer to chase down individual projects?
** zuul db performance issues (frickler 20240705)
*** https://zuul.opendev.org/t/openstack/buildsets times out without showing results
*** https://zuul.opendev.org/t/openstack/buildsets?pipeline=gate takes 20-30s
*** Makes investigating gate status more complicated
*** Not sure if our deployment is reaching its limits or Zuul needs some more query optimization?
** Reconsider queue depth for integrated gate (frickler 20240705)
*** During the rush to get fixes for the recent CVE merged, a lot of gate failures happened
*** The chance that patch #20 in the pipeline gets merged without any issue in the 19 patches ahead of it seems negligible (see the sketch after this agenda)
*** Suggestion: Reduce depth to 10 for now, which still seems pretty optimistic
* Open discussion
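The queue depth item rests on a simple independence argument: a change sitting 20 deep in the gate only merges cleanly if all 19 changes ahead of it pass, so even a modest per-change failure rate compounds quickly. A minimal back-of-the-envelope sketch of that arithmetic, assuming independent failures and purely illustrative failure probabilities (the 5/10/20% figures are assumptions, not measurements from the OpenDev gate):

# Rough arithmetic behind the "reduce queue depth" suggestion.
# Assumption: each change ahead in the pipeline fails independently
# with probability p_fail. The rates below are illustrative only.

def clean_merge_probability(p_fail: float, changes_ahead: int) -> float:
    """Probability that every one of `changes_ahead` changes succeeds,
    so the change behind them merges without a gate reset."""
    return (1.0 - p_fail) ** changes_ahead

for p_fail in (0.05, 0.10, 0.20):
    for depth in (10, 20):
        ahead = depth - 1  # changes queued in front of the last slot
        print(f"p_fail={p_fail:.2f} depth={depth}: "
              f"P(no reset ahead) = {clean_merge_probability(p_fail, ahead):.3f}")

Even at a 10% per-change failure rate, the last slot of a 20-deep queue sees all 19 changes ahead of it succeed only about 13% of the time, which is why a depth of 10 still looks optimistic.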