Team Meeting Agenda for January 7, 2025
== Agenda for next meeting ==
* Announcements
* Actions from last meeting
* Specs Review
* Topics
** Zuul-launcher image builds (corvus 20240910)
*** Configs are going in opendev/zuul-jobs
*** Raw image handling is being sped up through the use of zlib
*** Can still use additional image build jobs for the other images we care about.
** Deploying new Noble Servers (clarkb 20250107)
*** Changes to manage containers with podman and docker compose on Noble have landed.
*** Next step is deploying a Noble server for a straightforward service (paste most likely) and observing how that goes.
*** Ultimately would like to deploy a new review server on Noble using this system.
** Upgrading Old Servers (clarkb 20230627)
*** https://etherpad.opendev.org/p/opendev-bionic-server-upgrades
**** wiki.openstack.org: https://etherpad.opendev.org/p/opendev-mediawiki-upgrade
**** tonyb looking at cacti after wiki
*** https://etherpad.opendev.org/p/opendev-focal-server-upgrades
** Mirroring Useful Container Images (clarkb 20250107)
*** One approach to mitigating Docker Hub rate limits is to mirror useful images to another registry like quay.io.
*** https://review.opendev.org/c/opendev/system-config/+/938508 Initial change to mirror images that are generically useful
** Gerrit H2 Cache File Growth (clarkb 20250107)
*** Gerrit's git_file_diff and gerrit_file_diff caches are implemented as H2 databases on disk.
*** These H2 databases are backed by files on disk that only compact when Gerrit is shut down.
*** This leads to files growing quite large, which impacts Gerrit startup time.
*** https://review.opendev.org/c/opendev/system-config/+/938000 Suggested workaround from Hashar improves compaction when we do shut down.
*** Should we also revert our change to increase the logical size of the cache? This may slow growth on disk, as daily pruning would trim things back regularly (but not compact).
** Rax-ord Noble Nodes With 1 VCPU (clarkb 20241210)
*** Occasionally Ubuntu Noble nodes booted in rax-ord have a single vCPU.
*** Best I can tell, nodepool didn't use the wrong flavor. Instead it seems possibly related to an older Xen release being used to boot Noble.
*** Not sure where the bug is (Xen or Linux), but maybe we can get Rax to do a hypervisor audit and upgrade the hypervisors that are behind, or remove them from being able to schedule things?
*** On our side of things we could have an early base job check for more than one vCPU when Xen is in use.
** Service Coordinator Election (clarkb 20250107)
*** Proposing this schedule:
**** Nominations open from February 4, 2025 to February 18, 2025
**** Voting February 19, 2025 to February 26, 2025
**** All times and dates will be UTC based.
** Beginning of the Year (Virtual) Meetup (clarkb 20250107)
*** Consider this a spiritual successor to last year's Pre PTG.
*** Idea is to take some time, sync up on needs and priorities, and maybe even do some hackathon-type work to flush out the backlog.
*** January 21-23? Open to ideas on timing.
* Open discussion
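The early base-job vCPU check mentioned under the rax-ord topic could be sketched as below. This is only an illustration, not an existing zuul-jobs role: the helper names are hypothetical, and the paths assume Linux's standard /sys/hypervisor/type interface for detecting a Xen guest.

```python
# Hypothetical sketch of an early base-job sanity check: fail fast when a
# Xen-hosted node boots with only one vCPU (the rax-ord Noble symptom).
# Helper names and thresholds are assumptions, not an existing role.
import os


def hypervisor_type(path="/sys/hypervisor/type"):
    """Return the hypervisor type string (e.g. "xen"), or None if unknown."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        # File absent on bare metal and on most non-Xen guests.
        return None


def node_is_acceptable(min_vcpus=2, hv=None, cpu_count=None):
    """True unless this is a Xen guest with fewer than min_vcpus CPUs.

    hv and cpu_count may be injected for testing; by default they are
    read from the running system.
    """
    hv = hv if hv is not None else hypervisor_type()
    cpu_count = cpu_count if cpu_count is not None else os.cpu_count()
    if hv != "xen":
        return True
    return cpu_count >= min_vcpus
```

A job would call `node_is_acceptable()` early and fail (triggering a retry on a fresh node) when it returns False, so the misbooted single-vCPU nodes never run real workloads.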
Participants (1): Clark Boylan