Team Meeting Agenda for December 19, 2023
We will meet with this agenda on December 19, 2023 at 19:00 UTC in #opendev-meeting:

== Agenda for next meeting ==

* Announcements
** No meetings December 12, 26, and January 2.
* Actions from last meeting
* Specs Review
* Topics
** Upgrading Bionic servers to Focal/Jammy (clarkb 20230627)
*** https://etherpad.opendev.org/p/opendev-bionic-server-upgrades
*** Tonyb is adding new mirror servers.
*** New mirror servers should be ready. Is it time to update CNAMEs and plan for old server cleanup?
** DIB bionic support (ianw 20231206)
*** py36 tox unit testing is broken -- proposal to drop it: https://review.opendev.org/c/openstack/diskimage-builder/+/901093
*** Unit testing has been kept this long to ensure in-chroot tools are python3.6 clean -- bionic is the only such platform still supported.
*** One suggestion is to pause bionic builds - see comment in https://review.opendev.org/c/openstack/project-config/+/901692 and comments in PS1
**** clarkb points out this is painful if cloud providers change, and dib _probably_ won't break the build anyway.
*** A second option is to drop the tox py36 testing but leave the bionic test in dib-functests. That is probably enough coverage for basic support. We should probably do a release first.
** Python container updates (tonyb 20230718)
*** https://review.opendev.org/q/(topic:bookworm-python3.11+OR+hashtag:bookworm)... Next round of image rebuilds onto bookworm.
*** zuul-operator is the last holdout on python3.10. Working through failures in CI there.
**** https://review.opendev.org/c/zuul/zuul-operator/+/881245 is the change we need to get landed.
** Gitea 1.21.1 Upgrade (clarkb 20230926)
*** https://review.opendev.org/c/opendev/system-config/+/902490 Configure Gerrit to use the new SSH key
*** After Gerrit is restarted and using the new key, we can remove the old key from Gitea. At that point we should be ready to plan the Gitea upgrade.
*** https://review.opendev.org/c/opendev/system-config/+/897679 Upgrade to Gitea 1.21.0
** Updating Zuul's database server (clarkb 20231121)
*** Currently this is an older MySQL 5.7 Trove instance.
*** We can move it to a self-hosted instance (maybe on a dedicated host?) running out of Docker like many of our other services and get it more up to date.
*** Are there other services we should consider this for as well?
*** Research/planning questions: https://etherpad.opendev.org/p/opendev-zuul-mysql-upgrade
** Annual Report Season (clarkb 20231128)
*** OpenDev's 2023 Annual Report draft will live here: https://etherpad.opendev.org/p/2023-opendev-annual-report
** EMS discontinuing legacy/consumer hosting plans (fungi 20231219)
*** We have until 2024-02-07 to upgrade to a business hosting plan (prepaying a year at 10x the current price) or move elsewhere.
** Followup on 20231216 incident (frickler 20231217)
*** Do we want to pin external images like haproxy and only bump them after testing? (Not sure that would've helped for the current issue though.)
*** Use docker prune less aggressively for easier rollback?
**** We do so for some services, like https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/..., might want to duplicate that for all containers? Bump the hold time to 7d? (A rough command sketch follows the agenda below.)
*** Add timestamps to zuul_reboot.log?
**** https://opendev.org/opendev/system-config/src/branch/master/playbooks/servic...
**** Also, this is running on Saturdays (weekday: 6); do we want to fix the comment or the day of week (dow)?
*** Do we want to document or implement a procedure for rolling back Zuul upgrades? Or do we assume that issues can always be fixed in a forward-going way?
** AFS quota issues (frickler 20231217)
*** mirror.openeuler has reached its quota limit and the mirror job seems to have been failing for two weeks. I'm also a bit worried that they seem to have doubled their volume over the last 12 months.
*** ubuntu mirrors are also getting close, but we might have another couple of months there.
*** mirror.centos-stream shows a steep increase over the last two months and might also run into quota limits soon.
*** project.zuul with the latest releases is getting close to its tight limit of 1GB (sic); I suggest simply doubling that. (A rough quota sketch follows the agenda below.)
** Broken wheel build issues (frickler 20231217)
*** Wheel builds for CentOS >=8 seem broken; with nobody maintaining these, it might be better to drop them?
* Open discussion
** (tonyb 20231128) [If time permits] Could we enable, via roles/jobs or with an additional nodepool driver, the ability for [OpenStack] project teams to run unit tests with the python images we already build? E.g. https://review.opendev.org/c/opendev/system-config/+/898756
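A rough sketch for the project.zuul quota item: AFS quotas are set in kilobytes, so doubling a 1 GiB quota means going from 1048576 KB to 2097152 KB. The mount path below is my assumption of where the volume lives, so please double check it (and the numbers) before running anything:

    # Show current usage and quota for the volume (values are in KB).
    # NOTE: the path is an assumed mount point, not confirmed.
    fs listquota /afs/openstack.org/project/zuul
    # Double the quota from 1 GiB (1048576 KB) to 2 GiB (2097152 KB).
    fs setquota -path /afs/openstack.org/project/zuul -max 2097152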
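And a sketch for the docker prune hold-time question: I have not checked how the existing cleanup role invokes this, but if it boils down to a plain docker CLI call, a 7 day hold could look like:

    # Only remove unused images older than 7 days (168h) instead of
    # pruning everything immediately, to leave room for rollbacks.
    docker image prune --all --force --filter "until=168h"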
--
Jeremy Stanley