From ildiko at openstack.org Mon Sep 3 18:28:44 2018 From: ildiko at openstack.org (Ildiko Vancsa) Date: Mon, 3 Sep 2018 12:28:44 -0600 Subject: [Edge-computing] PTG schedule plan Message-ID: Hi All, The PTG is approaching quickly, it is less than a week away now! :) I cleaned up our PTG schedule for Tuesday next week and created separate etherpads for each topic so we can keep the notes under control: https://etherpad.openstack.org/p/EdgeComputingGroupPTG4 Please take a look at the schedule and the topics on the etherpads and let me know if there’s anything to add/correct on it by tomorrow (September 4) 11:59pm PDT. I will send out e-mails Wednesday morning after fixing any issues you might identify to the relevant mailing lists to remind people to our agenda so they can join the topics they are interested in. Please let me know if you have any questions. Thanks and Best Regards, Ildikó From ildiko at openstack.org Tue Sep 4 13:57:32 2018 From: ildiko at openstack.org (Ildiko Vancsa) Date: Tue, 4 Sep 2018 07:57:32 -0600 Subject: [Edge-computing] APAC meeting slot this week Message-ID: <3FAE152F-EC6A-45E2-B099-EEDCFBFDA598@openstack.org> Hi, It is a friendly reminder that we are having the APAC meeting slot this week, which is Thursday 0700 UTC. For information about the Edge Group and the sub-group activities please visit the wiki: https://wiki.openstack.org/wiki/Edge_Computing_Group We are planning for the PTG (https://www.openstack.org/ptg) that is happening next week. Please find the planned agenda on this etherpad: https://etherpad.openstack.org/p/EdgeComputingGroupPTG4 Please let me know if you have any questions. Thanks and Best Regards, Ildikó From ildiko at openstack.org Tue Sep 4 16:24:23 2018 From: ildiko at openstack.org (Ildiko Vancsa) Date: Tue, 4 Sep 2018 10:24:23 -0600 Subject: [Edge-computing] ONS Europe Edge meetup - ACTION NEEDED Message-ID: <302A5FDD-A616-4AAF-B00C-A8E6D63E72AB@openstack.org> Hi All, There are a few of us who’s planning to attend ONS Europe at the end of the month: https://events.linuxfoundation.org/events/open-networking-summit-europe-2018/ We were planning to organize an ad-hoc session to talk about edge like how we did at ONS NA in March this year. In order to find the best option as time and place for this meetup I created an etherpad to coordinate: https://etherpad.openstack.org/p/EdgeMeetup-ONS-EU-2018 __If you are attending the event and interested in participating in this gathering please add your name to the etherpad above as soon as possible.__ Please let me know if you have any questions or comments. Thanks and Best Regards, Ildikó From ildiko at openstack.org Wed Sep 5 20:51:30 2018 From: ildiko at openstack.org (Ildiko Vancsa) Date: Wed, 5 Sep 2018 15:51:30 -0500 Subject: [Edge-computing] PTG information and agenda Message-ID: <6A093108-8A0E-4AFE-98E8-F97EA53F69C7@openstack.org> Hi, The next PTG (https://www.openstack.org/ptg) is just around the corner staring next Monday (Sept 10) and ends on Friday (Sept 14). As most of you know this is a developer focused event to provide space and time for face to face technical discussions relevant to the work the project teams are doing and for cross-project collaboration. To emphasize on collaboration you will find representation from all the projects under the OpenStack Foundation umbrella and the event is also co-located with the OpenStack Ops Meetup. The Edge Computing Group is meeting on Tuesday in Ballroom A (https://web14.openstack.org/assets/ptg/Denver-map.pdf) from 9am MDT. 
You can find our agenda on this etherpad: https://etherpad.openstack.org/p/EdgeComputingGroupPTG4 As mentioned earlier we will do our best to provide an option to participate remotely. I will post a Zoom link and dial-in info on the etherpad above before the sessions start on Tuesday. I will also distribute the information here on the mailing list. As the time slots on the agenda are estimates we are using a dedicated web page (http://ptg.openstack.org/ptg.html) to track the current and next topics and also a dedicated IRC channel (#openstack-ptg) for communication and news sharing. You can find the links to all PTG related resources here: http://ptg.openstack.org Also please note that the weekly call this week is in the APAC time slot, Thursday 0700 UTC and we are cancelling next week’s meeting due to the discussions taking place at the PTG. Please let me know if you have any questions about the event or the Edge related sessions. Thanks and Best Regards, Ildikó From fdelicato at gmail.com Wed Sep 5 23:07:13 2018 From: fdelicato at gmail.com (Flavia Delicato) Date: Wed, 5 Sep 2018 20:07:13 -0300 Subject: [Edge-computing] IEEE Fog Computing: Call for Contributions Message-ID: ================================================================================= IEEE International Conference on Fog Computing (ICFC 2019) June 24-26, 2019 Prague, Czech Republic http://conferences.computer.org/ICFC/2019/ Colocated with the IEEE International Conference on Cloud Engineering (IC2E 2019) ================================================================================== Important Dates --------------- Paper registration and abstract: Nov 1st, 2018 Full paper submission due: Nov 8th, 2018 Notification of paper acceptance: Jan. 20th, 2019 Workshop and tutorial proposals due: Nov 11, 2018 Notification of proposal acceptance: Nov 18, 2018 Call for Contributions ---------------------- Fog computing is the extension of cloud computing into its edge and the physical world to meet the data volume and decision velocity requirements in many emerging applications, such as augmented and virtual realities (AR/VR), cyber-physical systems (CPS), intelligent and autonomous systems, and mission-critical systems. The boundary between centralized, powerful computing cloud and massively distributed, Internet connected sensors, actuators, and things is blurred in this new computing paradigm. The ICFC 2019 technical program will feature tutorials, workshops, and research paper sessions. We solicit high-quality contributions in the above categories. Details of submission is available on the conference Web site. 
Topics of interest include but are not limited to: * System architecture for fog computing * Coordination between cloud, fog, and sensing/actuation endpoints * Connectivity, storage, and computation in the edge * Data processing and management for fog computing * Efficient and embedded AI in the fog * System and network manageability * Middleware and coordination platforms * Power, energy, and resource management * Device and hardware support for fog computing * Programming models, abstractions, and software engineering for fog computing * Security, privacy, and ethics issues related to fog computing * Theoretical foundations and formal methods for fog computing systems * Applications and experiences Organizing Committee -------------------- General Chairs: Hui Lei, IBM Albert Zomaya, The University of Sydney PC Co-chairs: Erol Gelenbe, Imperial College London Jie Liu, Microsoft Research Tutorials and Workshops Chair: David Bermbach, TU Berlin Publicity Co-chairs: Flavia Delicato,Federal University of Rio de Janeiro Mathias Fischer, University Hamburg Publication Chair Javid Taheri, Karlstad University Webmaster Wei Li, The University of Sydney Steering Committee ------------------ Mung Chiang, Purdue University Erol Gelenbe, Imperial College London Christos Kozarakis, Stanford University Hui Lei, IBM Chenyang Lu, Washington University in St Louis Beng Chin Ooi, National University of Singapore Neeraj Suri, TU Darmstadt Albert Zomaya, The University of Sydney -- Flávia Delicato Associate Professor Federal University of Rio de Janeiro --- From ildiko at openstack.org Tue Sep 11 13:32:18 2018 From: ildiko at openstack.org (Ildiko Vancsa) Date: Tue, 11 Sep 2018 07:32:18 -0600 Subject: [Edge-computing] PTG information and agenda In-Reply-To: <6A093108-8A0E-4AFE-98E8-F97EA53F69C7@openstack.org> References: <6A093108-8A0E-4AFE-98E8-F97EA53F69C7@openstack.org> Message-ID: <7A7DDD17-48C9-4899-9080-E59A7DB0FEB2@openstack.org> Hi, We are having our whole day sessions at the PTG around edge related topics. You can dial in remotely using the following call details: • Zoom link: https://zoom.us/j/736245798 • Dialing in from phone: • Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 • Meeting ID: 736 245 798 • International numbers available: https://zoom.us/u/ed95sU7aQ For the agenda please see the following etherpad: https://etherpad.openstack.org/p/EdgeComputingGroupPTG4 For the daily progress please monitor #openstack-ptg on IRC and/or the following web page: http://ptg.openstack.org/ptg.html Please let me know if you have any questions. Thanks and Best Regards, Ildikó (IRC: ildikov) > On 2018. Sep 5., at 14:51, Ildiko Vancsa wrote: > > Hi, > > The next PTG (https://www.openstack.org/ptg) is just around the corner staring next Monday (Sept 10) and ends on Friday (Sept 14). > > As most of you know this is a developer focused event to provide space and time for face to face technical discussions relevant to the work the project teams are doing and for cross-project collaboration. > > To emphasize on collaboration you will find representation from all the projects under the OpenStack Foundation umbrella and the event is also co-located with the OpenStack Ops Meetup. > > The Edge Computing Group is meeting on Tuesday in Ballroom A (https://web14.openstack.org/assets/ptg/Denver-map.pdf) from 9am MDT. 
You can find our agenda on this etherpad: https://etherpad.openstack.org/p/EdgeComputingGroupPTG4 > > As mentioned earlier we will do our best to provide an option to participate remotely. I will post a Zoom link and dial-in info on the etherpad above before the sessions start on Tuesday. I will also distribute the information here on the mailing list. > > As the time slots on the agenda are estimates we are using a dedicated web page (http://ptg.openstack.org/ptg.html) to track the current and next topics and also a dedicated IRC channel (#openstack-ptg) for communication and news sharing. > > You can find the links to all PTG related resources here: http://ptg.openstack.org > > Also please note that the weekly call this week is in the APAC time slot, Thursday 0700 UTC and we are cancelling next week’s meeting due to the discussions taking place at the PTG. > > Please let me know if you have any questions about the event or the Edge related sessions. > > Thanks and Best Regards, > Ildikó > > > > _______________________________________________ > Edge-computing mailing list > Edge-computing at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing From ildiko at openstack.org Tue Sep 11 19:44:59 2018 From: ildiko at openstack.org (Ildiko Vancsa) Date: Tue, 11 Sep 2018 13:44:59 -0600 Subject: [Edge-computing] PTG information and agenda In-Reply-To: <7A7DDD17-48C9-4899-9080-E59A7DB0FEB2@openstack.org> References: <6A093108-8A0E-4AFE-98E8-F97EA53F69C7@openstack.org> <7A7DDD17-48C9-4899-9080-E59A7DB0FEB2@openstack.org> Message-ID: <7051AC44-170C-4631-8E48-B7B6067AA71F@openstack.org> The afternoon sessions have just started, the dial-in information is the same. Thanks, Ildikó > On 2018. Sep 11., at 7:32, Ildiko Vancsa wrote: > > Hi, > > We are having our whole day sessions at the PTG around edge related topics. > > You can dial in remotely using the following call details: > > • Zoom link: https://zoom.us/j/736245798 > • Dialing in from phone: > • Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 > • Meeting ID: 736 245 798 > • International numbers available: https://zoom.us/u/ed95sU7aQ > > For the agenda please see the following etherpad: https://etherpad.openstack.org/p/EdgeComputingGroupPTG4 > > For the daily progress please monitor #openstack-ptg on IRC and/or the following web page: http://ptg.openstack.org/ptg.html > > Please let me know if you have any questions. > > Thanks and Best Regards, > Ildikó > (IRC: ildikov) > > >> On 2018. Sep 5., at 14:51, Ildiko Vancsa wrote: >> >> Hi, >> >> The next PTG (https://www.openstack.org/ptg) is just around the corner staring next Monday (Sept 10) and ends on Friday (Sept 14). >> >> As most of you know this is a developer focused event to provide space and time for face to face technical discussions relevant to the work the project teams are doing and for cross-project collaboration. >> >> To emphasize on collaboration you will find representation from all the projects under the OpenStack Foundation umbrella and the event is also co-located with the OpenStack Ops Meetup. >> >> The Edge Computing Group is meeting on Tuesday in Ballroom A (https://web14.openstack.org/assets/ptg/Denver-map.pdf) from 9am MDT. You can find our agenda on this etherpad: https://etherpad.openstack.org/p/EdgeComputingGroupPTG4 >> >> As mentioned earlier we will do our best to provide an option to participate remotely. 
I will post a Zoom link and dial-in info on the etherpad above before the sessions start on Tuesday. I will also distribute the information here on the mailing list. >> >> As the time slots on the agenda are estimates we are using a dedicated web page (http://ptg.openstack.org/ptg.html) to track the current and next topics and also a dedicated IRC channel (#openstack-ptg) for communication and news sharing. >> >> You can find the links to all PTG related resources here: http://ptg.openstack.org >> >> Also please note that the weekly call this week is in the APAC time slot, Thursday 0700 UTC and we are cancelling next week’s meeting due to the discussions taking place at the PTG. >> >> Please let me know if you have any questions about the event or the Edge related sessions. >> >> Thanks and Best Regards, >> Ildikó >> >> >> >> _______________________________________________ >> Edge-computing mailing list >> Edge-computing at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing > > > _______________________________________________ > Edge-computing mailing list > Edge-computing at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing From Greg.Waines at windriver.com Wed Sep 12 16:08:42 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 12 Sep 2018 16:08:42 +0000 Subject: [Edge-computing] MVP Edge Architecture Message-ID: <4F5F93AF-BBD4-40A2-A1F5-EDDD833C46E2@windriver.com> Hey James, I dumped my understanding of the MVP Edge Architecture from yesterday into some slides ... before I forget it all :) . Let me know if you think this accurately reflects yesterday afternoon’s architecture discussion in the edge-computing group. https://www.dropbox.com/s/255x1cao14taer3/MVP-Architecture_edge-computing_PTG.pptx?dl=0 NOTE: I did have one question in red on Slide 5 Greg. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gergely.csatari at nokia.com Wed Sep 12 21:25:32 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Wed, 12 Sep 2018 21:25:32 +0000 Subject: [Edge-computing] Status of Keystone federation testing with tempest In-Reply-To: References: Message-ID: Hi, Some update on the open issues. - Make this work 😉 - here I have some progress, however I can not explain why. Now keystone is able to reach Shibboleth and Shibboleth answers with FatalProfileException "A valid authentication statement was not found in the incoming message.". I continue to figure out what is the problem. - Set the idp address in the correct place - This is done thanks to gmann. - Figure out how to start a Container in a Keystone plugin or a tempest plugin - Here I try to use https://github.com/openstack/devstack-plugin-container however I'm not sure if this is the right tool to start containers in DevStack environment. - Figure out ow to integrate with CI - no progress on this I'm still happy get any help either in mail, IRC or in person on the PTG. Thanks, Gerg0 ________________________________ From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Friday, August 31, 2018 1:03:43 PM To: nick at stackhpc.com; knikolla at bu.edu; colleen at gazlene.net; mbuil at suse.com; edge-computing at lists.openstack.org Subject: Status of Keystone federation testing with tempest Hi, I'm working on this for a while, but as I am not a big expert of IdP, Keysone or Tempest I have a bit slow progress. 
I decided to share what I did and what are my current probelms to 1) inform the team about the progress 2) keep a record for myself 3) hoping for help and/or hints. So I did this: 1) Get an Ubuntu 2) Install devstack with enable_plugin keystone git://git.openstack.org/openstack/keystone enable_service keystone-saml2-federation Here I already ran into some package maangement issues due to some libcurl3 and libcurl4 incompatibility issue what I solved using https://launchpad.net/~xapienz/+archive/ubuntu/curl34 3) Install the Keystone tempest plugin 4) Build a Shibboleth IdP container based on https://github.com/Unicon/shibboleth-idp-dockerized with the configuration I believe is correct. I have a feeling that we will need to set a proper organisation for this if we want to publish this to Docker Hub. By the way is there a container registry maintained in the OpenStack development infra? 5) Run the container and expose 8080, 4443 and 8443 ports This is a half success. Shibboleth contacts Keystone (or actually the Shibboleth apache module) for metadata update, but it works only on the first attempt. The regular updates are not working for some reason. Also I was not able to get a positive answer from the status script of Shibboleth itself, so i just decided to move a bit forward. 6) Set idp_url to https://localhost:8080/idp/profile/SAML2/SOAP/ECP in _request_unscoped_token inside the Keystone tempest plugin. Here I have no idea where the configuration is actually stores and where should I set this in a nice way. 7) Run the tempest tests. Now here I get an error message which tells me about SSL version numbers (hands.hake: Error([('SSL routines', 'ssl3_get_record', 'wrong version number')],)",),))). I tried to use different ssl versions with Curl, but it complains about the lack of support in libsso. So here I am now. Things what I deffinetly should figure out: - Make this work 😉 - Set the idp address in the correct place - Figure out how to start a Container in a Keystone plugin or a tempest plugin - Figure out ow to integrate with CI Any comments are welcome. Br, Gerg0 Curl 3 and 4 : Evgeny Brazgin - launchpad.net launchpad.net PPA contains libcurl4 package, which supports both libcurl3 and libcurl4 API. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Fri Sep 14 17:20:31 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Fri, 14 Sep 2018 17:20:31 +0000 Subject: [Edge-computing] MVP Edge Architecture In-Reply-To: References: <4F5F93AF-BBD4-40A2-A1F5-EDDD833C46E2@windriver.com> Message-ID: greg.waines at gmail.com From: James Penick Date: Friday, September 14, 2018 at 1:19 PM To: Greg Waines Cc: "gergely.csatari at nokia.com" , "ildiko at openstack.org" , "gfidente at redhat.com" , "pramchan at yahoo.com" , "jslagle at redhat.com" , "shardy at redhat.com" , Kent S Gordon , "edge-computing at lists.openstack.org" Subject: Re: MVP Edge Architecture Hey Greg, Do you have a gmail account? I've created a google doc for us all to collaborate on On Wed, Sep 12, 2018 at 10:10 AM Waines, Greg > wrote: Hey James, I dumped my understanding of the MVP Edge Architecture from yesterday into some slides ... before I forget it all :) . Let me know if you think this accurately reflects yesterday afternoon’s architecture discussion in the edge-computing group. https://www.dropbox.com/s/255x1cao14taer3/MVP-Architecture_edge-computing_PTG.pptx?dl=0 NOTE: I did have one question in red on Slide 5 Greg. 
-- :)= -------------- next part -------------- An HTML attachment was scrubbed... URL: From penick at oath.com Fri Sep 14 17:19:09 2018 From: penick at oath.com (James Penick) Date: Fri, 14 Sep 2018 11:19:09 -0600 Subject: [Edge-computing] MVP Edge Architecture In-Reply-To: <4F5F93AF-BBD4-40A2-A1F5-EDDD833C46E2@windriver.com> References: <4F5F93AF-BBD4-40A2-A1F5-EDDD833C46E2@windriver.com> Message-ID: Hey Greg, Do you have a gmail account? I've created a google doc for us all to collaborate on On Wed, Sep 12, 2018 at 10:10 AM Waines, Greg wrote: > Hey James, > > > > I dumped my understanding of the MVP Edge Architecture from yesterday into > some slides ... before I forget it all :) . > > Let me know if you think this accurately reflects yesterday afternoon’s > architecture discussion in the edge-computing group. > > > > > https://www.dropbox.com/s/255x1cao14taer3/MVP-Architecture_edge-computing_PTG.pptx?dl=0 > > > > NOTE: I did have one question in red on Slide 5 > > > > Greg. > -- :)= -------------- next part -------------- An HTML attachment was scrubbed... URL: From gergely.csatari at nokia.com Fri Sep 14 22:36:32 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Fri, 14 Sep 2018 22:36:32 +0000 Subject: [Edge-computing] Status of Keystone federation testing with tempest In-Reply-To: References: , Message-ID: Hi, Some update on the open issues: - Make this work 😉: Okay, I realized, that I use the wrong certificate in the Idp. Based on the IdP-s description I should generate a p12 certificate using the certificate and the key used by the Shibboleth Sp. When I try to generate the certificate I get a strange error: openssl pkcs12 -inkey /etc/shibboleth/sp-key.pem -in /etc/shibboleth/sp-cert.pem -out ../keystone-shibboleth-idp-dockerized/shibboleth-idp/credentials/idp-browser.p12 139822780584384:error:0D0680A8:asn1 encoding routines:asn1_check_tlen:wrong tag:../crypto/asn1/tasn_dec.c:1129: 139822780584384:error:0D07803A:asn1 encoding routines:asn1_item_embed_d2i:nested asn1 error:../crypto/asn1/tasn_dec.c:289:Type=PKCS12 Google tells me that this is becouse one of my pem files are in the wrong format. This is really strange as the error persist even after I regenerated these files with shib-keygen -f -y 1 - Figure out how to start a Container in a Keystone plugin or a tempest plugin - no progress on this - Figure out ow to integrate with CI - no progress on this - Figure out how to use static certificates and keys, so the same IdP container image can be used. If you are bigger fans of IRC than email I can start sending these updates to the keystone channel. Br, Gerg0 ________________________________ From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Wednesday, September 12, 2018 11:25:32 PM To: nick at stackhpc.com; knikolla at bu.edu; colleen at gazlene.net; mbuil at suse.com; edge-computing at lists.openstack.org Subject: Re: Status of Keystone federation testing with tempest Hi, Some update on the open issues. - Make this work 😉 - here I have some progress, however I can not explain why. Now keystone is able to reach Shibboleth and Shibboleth answers with FatalProfileException "A valid authentication statement was not found in the incoming message.". I continue to figure out what is the problem. - Set the idp address in the correct place - This is done thanks to gmann. 
- Figure out how to start a Container in a Keystone plugin or a tempest plugin - Here I try to use https://github.com/openstack/devstack-plugin-container however I'm not sure if this is the right tool to start containers in DevStack environment. - Figure out ow to integrate with CI - no progress on this I'm still happy get any help either in mail, IRC or in person on the PTG. Thanks, Gerg0 ________________________________ From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Friday, August 31, 2018 1:03:43 PM To: nick at stackhpc.com; knikolla at bu.edu; colleen at gazlene.net; mbuil at suse.com; edge-computing at lists.openstack.org Subject: Status of Keystone federation testing with tempest Hi, I'm working on this for a while, but as I am not a big expert of IdP, Keysone or Tempest I have a bit slow progress. I decided to share what I did and what are my current probelms to 1) inform the team about the progress 2) keep a record for myself 3) hoping for help and/or hints. So I did this: 1) Get an Ubuntu 2) Install devstack with enable_plugin keystone git://git.openstack.org/openstack/keystone enable_service keystone-saml2-federation Here I already ran into some package maangement issues due to some libcurl3 and libcurl4 incompatibility issue what I solved using https://launchpad.net/~xapienz/+archive/ubuntu/curl34 3) Install the Keystone tempest plugin 4) Build a Shibboleth IdP container based on https://github.com/Unicon/shibboleth-idp-dockerized with the configuration I believe is correct. I have a feeling that we will need to set a proper organisation for this if we want to publish this to Docker Hub. By the way is there a container registry maintained in the OpenStack development infra? 5) Run the container and expose 8080, 4443 and 8443 ports This is a half success. Shibboleth contacts Keystone (or actually the Shibboleth apache module) for metadata update, but it works only on the first attempt. The regular updates are not working for some reason. Also I was not able to get a positive answer from the status script of Shibboleth itself, so i just decided to move a bit forward. 6) Set idp_url to https://localhost:8080/idp/profile/SAML2/SOAP/ECP in _request_unscoped_token inside the Keystone tempest plugin. Here I have no idea where the configuration is actually stores and where should I set this in a nice way. 7) Run the tempest tests. Now here I get an error message which tells me about SSL version numbers (hands.hake: Error([('SSL routines', 'ssl3_get_record', 'wrong version number')],)",),))). I tried to use different ssl versions with Curl, but it complains about the lack of support in libsso. So here I am now. Things what I deffinetly should figure out: - Make this work 😉 - Set the idp address in the correct place - Figure out how to start a Container in a Keystone plugin or a tempest plugin - Figure out ow to integrate with CI Any comments are welcome. Br, Gerg0 Curl 3 and 4 : Evgeny Brazgin - launchpad.net launchpad.net PPA contains libcurl4 package, which supports both libcurl3 and libcurl4 API. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko at openstack.org Tue Sep 18 14:04:52 2018 From: ildiko at openstack.org (Ildiko Vancsa) Date: Tue, 18 Sep 2018 16:04:52 +0200 Subject: [Edge-computing] Weekly call is on now! Message-ID: Hi, We have the weekly call running now if anyone is around! 
:) https://wiki.openstack.org/wiki/Edge_Computing_Group#Agenda Thanks, Ildikó From ildiko at openstack.org Fri Sep 21 11:38:31 2018 From: ildiko at openstack.org (Ildiko Vancsa) Date: Fri, 21 Sep 2018 13:38:31 +0200 Subject: [Edge-computing] OSF Edge Computing Group BoF at ONS EU at the time of the Tuesday call Message-ID: <77396A11-D5B7-4832-9DD5-D4CD661C4D2E@openstack.org> Hi, As a few of us will be at ONS EU next week the weekly working group call on Tuesday (September 25) will be “co-located” with a BoF session around Edge at ONS EU in Amsterdam. The session’s purpose is to explore use cases and current ongoing design, development and testing activities under the OpenStack Foundation umbrella, OPNFV, ONAP, Akraino and so forth and to identify next steps. You can find more details about the session here: https://onseu18.sched.com/event/GZqk/edge-computing-group-at-ons-europe I will setup Zoom just like for the weekly call with the usual link (https://zoom.us/j/879678938) we use every week. Please let me know if you have any questions. See you next week in Amsterdam or on the Zoom call! :) Thanks and Best Regards, Ildikó From ildiko at openstack.org Mon Sep 24 20:03:46 2018 From: ildiko at openstack.org (Ildiko Vancsa) Date: Mon, 24 Sep 2018 22:03:46 +0200 Subject: [Edge-computing] Use cases call now Message-ID: <6E93E8DC-0629-4755-84B5-660EEBC6E083@openstack.org> Hi, Zoom acts a little weird today and I may have ended and then restarted the Use Cases call a few minutes ago. Anyone around to join? Thanks, Ildikó From praymond at ieee.org Mon Sep 24 20:08:13 2018 From: praymond at ieee.org (=?utf-8?Q?Paul-Andr=C3=A9_Raymond?=) Date: Mon, 24 Sep 2018 16:08:13 -0400 Subject: [Edge-computing] Use cases call now In-Reply-To: <6E93E8DC-0629-4755-84B5-660EEBC6E083@openstack.org> References: <6E93E8DC-0629-4755-84B5-660EEBC6E083@openstack.org> Message-ID: <7621A499-2459-4388-8311-4BFB320260D9@ieee.org> I am joining now. Paul-Andre > On Sep 24, 2018, at 4:03 PM, Ildiko Vancsa wrote: > > Hi, > > Zoom acts a little weird today and I may have ended and then restarted the Use Cases call a few minutes ago. Anyone around to join? > > Thanks, > Ildikó > > > > _______________________________________________ > Edge-computing mailing list > Edge-computing at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing From ildiko at openstack.org Tue Sep 25 09:21:39 2018 From: ildiko at openstack.org (Ildiko Vancsa) Date: Tue, 25 Sep 2018 11:21:39 +0200 Subject: [Edge-computing] PTG summary on edge discussions Message-ID: <387330DB-98C0-4BA8-9428-E5A5959A9E33@openstack.org> Hi, Hereby I would like to give you a short summary on the discussions that happened at the PTG in the area of edge. The Edge Computing Group sessions took place on Tuesday where our main activity was to draw an overall architecture diagram to capture the basic setup and requirements of edge towards a set of OpenStack services. Our main and initial focus was around Keystone and Glance, but discussion with other project teams such as Nova, Ironic and Cinder also happened later during the week. The edge architecture diagrams we drew are part of a so called Minimum Viable Product (MVP) which refers to the minimalist nature of the setup where we didn’t try to cover all aspects but rather define a minimum set of services and requirements to get to a functional system. This architecture will evolve further as we collect more use cases and requirements. 
To describe edge use cases on a higher level with Mobile Edge as a use case in the background we identified three main building blocks: * Main or Regional Datacenter (DC) * Edge Sites * Far Edge Sites or Cloudlets We examined the architecture diagram with the following user stories in mind: * As a deployer of OpenStack I want to minimize the number of control planes I need to manage across a large geographical region. * As a user of OpenStack I expect instance autoscale continues to function in an edge site if connectivity is lost to the main datacenter. * As a deployer of OpenStack I want disk images to be pulled to a cluster on demand, without needing to sync every disk image everywhere. * As a user of OpenStack I want to manage all of my instances in a region (from regional DC to far edge cloudlets) via a single API endpoint. We concluded to talk about service requirements in two major categories: 1. The Edge sites are fully operational in case of a connection loss between the Regional DC and the Edge site which requires control plane services running on the Edge site 2. Having full control on the Edge site is not critical in case a connection loss between the Regional DC and an Edge site which can be satisfied by having the control plane services running only in the Regional DC In the first case the orchestration of the services becomes harder and is not necessarily solved yet, while in the second case you have centralized control but losing functionality on the Edge sites in the event of a connection loss. We did not discuss things such as HA at the PTG and we did not go into details on networking during the architectural discussion either. We agreed to prefer federation for Keystone and came up with two work items to cover missing functionality: * Keystone to trust a token from an ID Provider master and when the auth method is called, perform an idempotent creation of the user, project and role assignments according to the assertions made in the token * Keystone should support the creation of users and projects with predictable UUIDs (eg.: hash of the name of the users and projects). This greatly simplifies Image federation and telemetry gathering For Glance we explored image caching and spent some time discussing the option to also cache metadata so a user can boot new instances at the edge in case of a network connection loss which would result in being disconnected from the registry: * I as a user of Glance, want to upload an image in the main datacenter and boot that image in an edge datacenter. Fetch the image to the edge datacenter with its metadata We are still in the progress of documenting the discussions and draw the architecture diagrams and flows for Keystone and Glance. 
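As a rough illustration of the "predictable UUIDs" item above (a sketch only, not something designed at the PTG): a name-derived uuid5 scheme would let independent Keystone instances arrive at the same user and project IDs without synchronizing, which is what makes image federation and telemetry correlation simpler. The namespace constant below is a placeholder; a real scheme would have to fix one deployment-wide value.

    import uuid

    # Placeholder namespace -- a real scheme would agree on one constant
    # across the whole deployment so every site derives identical IDs.
    EDGE_NAMESPACE = uuid.NAMESPACE_DNS

    def predictable_id(domain_name, resource_name):
        # uuid5 is a SHA-1 based, name-derived UUID: two Keystone instances
        # that agree on the namespace and the names compute the same
        # 32-character ID without talking to each other.
        return uuid.uuid5(EDGE_NAMESPACE, domain_name + '/' + resource_name).hex

    # The same project name yields the same ID at every edge site:
    print(predictable_id('default', 'edge-project-1'))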
In addition to the above we went through Dublin PTG wiki (https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG) capturing requirements: * we agreed to consider the list of requirements on the wiki finalized for now * agreed to move there the additional requirements listed on the Use Cases (https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases) wiki page For the details on the discussions with related OpenStack projects you can check the following etherpads for notes: * Cinder: https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018 * Glance: https://etherpad.openstack.org/p/glance-stein-edge-architecture * Ironic: https://etherpad.openstack.org/p/ironic-stein-ptg-edge * Keystone: https://etherpad.openstack.org/p/keystone-stein-edge-architecture * Neutron: https://etherpad.openstack.org/p/neutron-stein-ptg * Nova: https://etherpad.openstack.org/p/nova-ptg-stein Notes from the StarlingX sessions: https://etherpad.openstack.org/p/stx-PTG-agenda We are still working on the MVP architecture to clean it up and discuss comments and questions before moving it to a wiki page. Please let me know if you would like to get access to the document and I will share it with you. Please let me know if you have any questions or comments to the above captured items. Thanks and Best Regards, Ildikó (IRC: ildikov) From zhaoqihui at chinamobile.com Tue Sep 25 10:11:04 2018 From: zhaoqihui at chinamobile.com (zhaoqihui at chinamobile.com) Date: Tue, 25 Sep 2018 18:11:04 +0800 Subject: [Edge-computing] [opnfv-tech-discussion][Edge Cloud] 2018 ONS Summit Keystone demo -- Look forward to seeing you Message-ID: <041d01d454b8$167bba30$43732e90$@chinamobile.com> Hi Edge Cloud team and Edge Computing team, We are going to have our keystone federation demo in the demo booth. Booth title: “OPNFV Testing for Open Infrastructure Federation” Location: the fourth one on the right side of the booth entrance You are welcome to have a small talk if you pass by. Look forward to say hi to you guys~~ Best, Qihui -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Tue Sep 25 12:41:05 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Tue, 25 Sep 2018 12:41:05 +0000 Subject: [Edge-computing] PTG summary on edge discussions In-Reply-To: <387330DB-98C0-4BA8-9428-E5A5959A9E33@openstack.org> References: <387330DB-98C0-4BA8-9428-E5A5959A9E33@openstack.org> Message-ID: <4c02ae05b4bf41e993f6bf5b44b35883@AUSX13MPS308.AMER.DELL.COM> Great notes. Thanks Ildiko. -----Original Message----- From: Ildiko Vancsa [mailto:ildiko at openstack.org] Sent: Tuesday, September 25, 2018 4:22 AM To: edge-computing Subject: [Edge-computing] PTG summary on edge discussions [EXTERNAL EMAIL] Please report any suspicious attachments, links, or requests for sensitive information. Hi, Hereby I would like to give you a short summary on the discussions that happened at the PTG in the area of edge. The Edge Computing Group sessions took place on Tuesday where our main activity was to draw an overall architecture diagram to capture the basic setup and requirements of edge towards a set of OpenStack services. Our main and initial focus was around Keystone and Glance, but discussion with other project teams such as Nova, Ironic and Cinder also happened later during the week. 
The edge architecture diagrams we drew are part of a so called Minimum Viable Product (MVP) which refers to the minimalist nature of the setup where we didn’t try to cover all aspects but rather define a minimum set of services and requirements to get to a functional system. This architecture will evolve further as we collect more use cases and requirements. To describe edge use cases on a higher level with Mobile Edge as a use case in the background we identified three main building blocks: * Main or Regional Datacenter (DC) * Edge Sites * Far Edge Sites or Cloudlets We examined the architecture diagram with the following user stories in mind: * As a deployer of OpenStack I want to minimize the number of control planes I need to manage across a large geographical region. * As a user of OpenStack I expect instance autoscale continues to function in an edge site if connectivity is lost to the main datacenter. * As a deployer of OpenStack I want disk images to be pulled to a cluster on demand, without needing to sync every disk image everywhere. * As a user of OpenStack I want to manage all of my instances in a region (from regional DC to far edge cloudlets) via a single API endpoint. We concluded to talk about service requirements in two major categories: 1. The Edge sites are fully operational in case of a connection loss between the Regional DC and the Edge site which requires control plane services running on the Edge site 2. Having full control on the Edge site is not critical in case a connection loss between the Regional DC and an Edge site which can be satisfied by having the control plane services running only in the Regional DC In the first case the orchestration of the services becomes harder and is not necessarily solved yet, while in the second case you have centralized control but losing functionality on the Edge sites in the event of a connection loss. We did not discuss things such as HA at the PTG and we did not go into details on networking during the architectural discussion either. We agreed to prefer federation for Keystone and came up with two work items to cover missing functionality: * Keystone to trust a token from an ID Provider master and when the auth method is called, perform an idempotent creation of the user, project and role assignments according to the assertions made in the token * Keystone should support the creation of users and projects with predictable UUIDs (eg.: hash of the name of the users and projects). This greatly simplifies Image federation and telemetry gathering For Glance we explored image caching and spent some time discussing the option to also cache metadata so a user can boot new instances at the edge in case of a network connection loss which would result in being disconnected from the registry: * I as a user of Glance, want to upload an image in the main datacenter and boot that image in an edge datacenter. Fetch the image to the edge datacenter with its metadata We are still in the progress of documenting the discussions and draw the architecture diagrams and flows for Keystone and Glance. 
In addition to the above we went through Dublin PTG wiki (https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG) capturing requirements: * we agreed to consider the list of requirements on the wiki finalized for now * agreed to move there the additional requirements listed on the Use Cases (https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases) wiki page For the details on the discussions with related OpenStack projects you can check the following etherpads for notes: * Cinder: https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018 * Glance: https://etherpad.openstack.org/p/glance-stein-edge-architecture * Ironic: https://etherpad.openstack.org/p/ironic-stein-ptg-edge * Keystone: https://etherpad.openstack.org/p/keystone-stein-edge-architecture * Neutron: https://etherpad.openstack.org/p/neutron-stein-ptg * Nova: https://etherpad.openstack.org/p/nova-ptg-stein Notes from the StarlingX sessions: https://etherpad.openstack.org/p/stx-PTG-agenda We are still working on the MVP architecture to clean it up and discuss comments and questions before moving it to a wiki page. Please let me know if you would like to get access to the document and I will share it with you. Please let me know if you have any questions or comments to the above captured items. Thanks and Best Regards, Ildikó (IRC: ildikov) _______________________________________________ Edge-computing mailing list Edge-computing at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing From gfidente at redhat.com Wed Sep 26 11:35:47 2018 From: gfidente at redhat.com (Giulio Fidente) Date: Wed, 26 Sep 2018 13:35:47 +0200 Subject: [Edge-computing] PTG summary on edge discussions In-Reply-To: <387330DB-98C0-4BA8-9428-E5A5959A9E33@openstack.org> References: <387330DB-98C0-4BA8-9428-E5A5959A9E33@openstack.org> Message-ID: <9bf01dbc-2385-d860-5b9e-54b2d3ae3fd3@redhat.com> hi, thanks for sharing this! At TripleO we're looking at implementing in Stein deployment of at least 1 regional DC and N edge zones. More comments below. On 9/25/18 11:21 AM, Ildiko Vancsa wrote: > Hi, > > Hereby I would like to give you a short summary on the discussions that happened at the PTG in the area of edge. > > The Edge Computing Group sessions took place on Tuesday where our main activity was to draw an overall architecture diagram to capture the basic setup and requirements of edge towards a set of OpenStack services. Our main and initial focus was around Keystone and Glance, but discussion with other project teams such as Nova, Ironic and Cinder also happened later during the week. > > The edge architecture diagrams we drew are part of a so called Minimum Viable Product (MVP) which refers to the minimalist nature of the setup where we didn’t try to cover all aspects but rather define a minimum set of services and requirements to get to a functional system. This architecture will evolve further as we collect more use cases and requirements. > > To describe edge use cases on a higher level with Mobile Edge as a use case in the background we identified three main building blocks: > > * Main or Regional Datacenter (DC) > * Edge Sites > * Far Edge Sites or Cloudlets > > We examined the architecture diagram with the following user stories in mind: > > * As a deployer of OpenStack I want to minimize the number of control planes I need to manage across a large geographical region. 
> * As a user of OpenStack I expect instance autoscale continues to function in an edge site if connectivity is lost to the main datacenter. > * As a deployer of OpenStack I want disk images to be pulled to a cluster on demand, without needing to sync every disk image everywhere. > * As a user of OpenStack I want to manage all of my instances in a region (from regional DC to far edge cloudlets) via a single API endpoint. > > We concluded to talk about service requirements in two major categories: > > 1. The Edge sites are fully operational in case of a connection loss between the Regional DC and the Edge site which requires control plane services running on the Edge site > 2. Having full control on the Edge site is not critical in case a connection loss between the Regional DC and an Edge site which can be satisfied by having the control plane services running only in the Regional DC > > In the first case the orchestration of the services becomes harder and is not necessarily solved yet, while in the second case you have centralized control but losing functionality on the Edge sites in the event of a connection loss. > > We did not discuss things such as HA at the PTG and we did not go into details on networking during the architectural discussion either. while TripleO used to rely on pacemaker to manage cinder-volume A/P in the controlplane, we'd like to push for cinder-volume A/A in the edge zone and avoid the deployment of pacemaker in the edge zones the safety of cinder-volume A/A seems to depend mostly on the backend driver and for RBD we should be good > We agreed to prefer federation for Keystone and came up with two work items to cover missing functionality: > > * Keystone to trust a token from an ID Provider master and when the auth method is called, perform an idempotent creation of the user, project and role assignments according to the assertions made in the token > * Keystone should support the creation of users and projects with predictable UUIDs (eg.: hash of the name of the users and projects). This greatly simplifies Image federation and telemetry gathering > > For Glance we explored image caching and spent some time discussing the option to also cache metadata so a user can boot new instances at the edge in case of a network connection loss which would result in being disconnected from the registry: > > * I as a user of Glance, want to upload an image in the main datacenter and boot that image in an edge datacenter. Fetch the image to the edge datacenter with its metadata > > We are still in the progress of documenting the discussions and draw the architecture diagrams and flows for Keystone and Glance. for glance we'd like to deploy only one glance-api in the regional dc and configure glance/cache in each edge zone ... 
pointing all instances to a shared database this should solve the metadata problem and also provide for storage "locality" into every edge zone > In addition to the above we went through Dublin PTG wiki (https://wiki.openstack.org/wiki/OpenStack_Edge_Discussions_Dublin_PTG) capturing requirements: > > * we agreed to consider the list of requirements on the wiki finalized for now > * agreed to move there the additional requirements listed on the Use Cases (https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases) wiki page > > For the details on the discussions with related OpenStack projects you can check the following etherpads for notes: > > * Cinder: https://etherpad.openstack.org/p/cinder-ptg-planning-denver-9-2018 > * Glance: https://etherpad.openstack.org/p/glance-stein-edge-architecture > * Ironic: https://etherpad.openstack.org/p/ironic-stein-ptg-edge > * Keystone: https://etherpad.openstack.org/p/keystone-stein-edge-architecture > * Neutron: https://etherpad.openstack.org/p/neutron-stein-ptg > * Nova: https://etherpad.openstack.org/p/nova-ptg-stein > > Notes from the StarlingX sessions: https://etherpad.openstack.org/p/stx-PTG-agenda here is a link to the TripleO edge squad etherpad as well: https://etherpad.openstack.org/p/tripleo-edge-squad-status the edge squad is meeting weekly. > We are still working on the MVP architecture to clean it up and discuss comments and questions before moving it to a wiki page. Please let me know if you would like to get access to the document and I will share it with you. > > Please let me know if you have any questions or comments to the above captured items. thanks again! -- Giulio Fidente GPG KEY: 08D733BA From gergely.csatari at nokia.com Fri Sep 28 09:54:11 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Fri, 28 Sep 2018 09:54:11 +0000 Subject: [Edge-computing] [openstack-dev] [ironic][edge] Notes from the PTG In-Reply-To: <567890A7-3685-4C95-839C-C947AFDC07FB@windriver.com> References: <3A5527BB-7E4E-48DF-9AD1-0D42C64B6106@windriver.com> <567890A7-3685-4C95-839C-C947AFDC07FB@windriver.com> Message-ID: Hi Jim, Thanks for sharing your notes. One note about the jumping automomus control plane requirement. This requirement was already identified during the Dublin PTG workshop [1]. This is needed for two reasons the edge cloud instance should stay operational even if there is a network break towards other edge cloud instances and the edge cloud instance should work together with other edge cloud instances running other version of the control plane. In Denver we deided to leave out these requirements form the MVP architecture discussions. Br, Gerg0 [1]: https://wiki.openstack.org/w/index.php?title=OpenStack_Edge_Discussions_Dublin_PTG From: Jim Rollenhagen > Reply-To: "openstack-dev at lists.openstack.org" > Date: Wednesday, September 19, 2018 at 10:49 AM To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [ironic][edge] Notes from the PTG I wrote up some notes from my perspective at the PTG for some internal teams and figured I may as well share them here. They're primarily from the ironic and edge WG rooms. Fairly raw, very long, but hopefully useful to someone. Enjoy. Tuesday: edge Edge WG (IMHO) has historically just talked about use cases, hand-waved a bit, and jumped to requiring an autonomous control plane per edge site - thus spending all of their time talking about how they will make glance and keystone sync data between control planes. 
penick described roughly what we do with keystone/athenz and how that can be used in a federated keystone deployment to provide autonomy for any control plane, but also a single view via a global keystone. penick and I both kept pushing for people to define a real architecture, and we ended up with 10-15 people huddled around an easel for most of the afternoon. Of note: - Windriver (and others?) refuse to budge on the many control plane thing - This means that they will need some orchestration tooling up top in the main DC / client machines to even come close to reasonably managing all of these sites - They will probably need some syncing tooling - glance->glance isn’t a thing, no matter how many people say it is. - Glance PTL recommends syncing metadata outside of glance process, and a global(ly distributed?) glance backend. - We also defined the single pane of glass architecture that Oath plans to deploy - Okay with losing connectivity from central control plane to single edge site - Each edge site is a cell - Each far edge site is just compute nodes - Still may want to consider image distribution to edge sites so we don’t have to go back to main DC? - Keystone can be distributed the same as first architecture - Nova folks may start investigating putting API hosts at the cell level to get the best of both worlds - if there’s a network partition, can still talk to cell API to manage things - Need to think about removing the need for rabbitmq between edge and far edge - Kafka was suggested in the edge room for oslo.messaging in general - Etcd watchers may be another option for an o.msg driver - Other other options are more invasive into nova - involve changing how nova-compute talks to conductor (etcd, etc) or even putting REST APIs in nova-compute (and nova-conductor?) - Neutron is going to work on an OVS “superagent” - superagent does the RPC handling, talks some other way to child agents. Intended to scale to thousands of children. Primary use case is smart nics but seems like a win for the edge case as well. penick took an action item to draw up the architecture diagrams in a digestable format. Wednesday: ironic things Started with a retrospective. See https://etherpad.openstack.org/p/ironic-stein-ptg-retrospective for the notes - there wasn’t many surprising things here. We did discuss trying to target some quick wins for the beginning of the cycle, so that we didn’t have all of our features trying to land at the end. Using wsgi with the ironic-api was mentioned as a potential regression, but we agreed it’s a config/documentation issue. I took an action to make a task to document this better. Next we quickly reviewed our vision doc, and people didn’t have much to say about it. Metalsmith: it’s a thing, it’s being included into the ironic project. Dmitry is open to optionally supporting placement. Multiple instances will be a feature in the future. Otherwise mostly feature complete, goal is to keep it simple. Networking-ansible: redhat building tooling that integrates with upstream ansible modules for networking gear. Kind of an alternative to n-g-s. Not really much on plans here, RH just wanted to introduce it to the community. Some discussion about it possibly replacing n-g-s later, but no hard plans. Deploy steps/templates: we talked about what the next steps are, and what an MVP looks like. Deploy templates are triggered by the traits that nodes are scheduled against, and can add steps before or after (or in between?) the default deploy steps. 
We agreed that we should add a RAID deploy step, with standing questions for how arguments are passed to that deploy step, and what the defaults look like. Myself and mgoddard took an action item to open an RFE for this. We also agreed that we should start thinking about how the current (only) deploy step should be split into multiple steps. Graphical console: we discussed what the next steps are for this work. We agreed that we should document the interface and what is returned (a URL), and also start working on a redfish driver for graphical consoles. We also noted that we can test in the gate with qemu, but we only need to test that a correct URL is returned, not that the console actually works (because we don’t really care that qemu’s console works). Python 3: we talked about the changes to our jobs that are needed. We agreed to use the base name of the jobs for Python 3 (as those will be used for a long time), and add a “python2” prefix for the Python 2 jobs. We also discussed dropping certain coverage for Python 2, as our CI jobs tend to mostly test the same codepaths with some config differences. Last, we talked about mixed environment Python 2 and 3 testing, as this will be a thing people doing rolling upgrades of Python versions will hit. I sent an email to the ML asking if others had done or thought about this, and it sounds like we can limit that testing to oslo.messaging, and a task was reported there. Pre-upgrade checks: Not much was discussed here; TheJulia is going to look into it. One item of note is that there is an oslo project being proposed that can carry some of the common code for this. Performance improvements: We first discussed our virt driver’s performance. It was found that Nova’s power sync loop makes a call to Ironic for each instance that the compute service is managing. We do some node caching in our driver that would be useful for this. I took an action item to look into it, and have a WIP patch: https://review.openstack.org/#/c/602127/ . That patch just needs a bug filed and unit tests written. On Thursday, we talked with Nova about other performance things, and agreed we should implement a hook in Nova that Ironic can do to say “power changed” and “deploy done” and other things like this. This will help reduce or eliminate polling from our virt driver to Ironic, and also allow Nova to notice these changes faster. More on that later? Splitting the conductor: we discussed the many tasks the conductor is responsible for, and pondered if we could or should split things up. This has implications (good and bad) for operability, scalability, and security. Splitting the conductor to multiple workers would allow operators to use different security models for different tasks (e.g. only allowing an “OOB worker” access to the OOB network). It would also allow folks to scale out workers that do lots of work (like the power status loop) separately from those that do minimal work (writing PXE configs). I intend to investigate this more during this cycle and lay out a plan for doing the work. This also may require better distributed locking, which TheJulia has started investigating. Changing boot mode defaults: Apparently Intel is going to stop shipping hardware that is capable of legacy BIOS booting in 2020. We agreed that we should work toward changing the default boot mode to UEFI to better prepare our users, but we can’t drop legacy BIOS mode until all of the old hardware in the world is gone. TheJulia is going to dig through the code and make a task list. 
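To illustrate the node-caching idea from the performance notes a few paragraphs above (a sketch only, not the actual Nova patch): refresh the Ironic node list once per interval and answer the power sync loop's per-instance lookups from memory, so a single list call replaces one API call per instance. The list_nodes callable is a stand-in for whatever fetches the node list from Ironic.

    import time

    class NodeCache(object):
        """Answer per-node lookups from a periodically refreshed node list."""

        def __init__(self, list_nodes, ttl=60):
            self._list_nodes = list_nodes   # any callable returning all nodes
            self._ttl = ttl
            self._nodes = {}
            self._refreshed = 0.0

        def get(self, node_uuid):
            if time.time() - self._refreshed > self._ttl:
                # One list call per TTL window instead of one call per instance.
                self._nodes = {n['uuid']: n for n in self._list_nodes()}
                self._refreshed = time.time()
            return self._nodes.get(node_uuid)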
UEFI HTTPClient booting: This is a DHCP class that allows the DHCP server to return a URL instead of a “next-server” (TFTP location) response. This is a clear value add, and TheJulia is going to work on it as she is already neck deep in that area of code. We also need to ensure that Neutron supports this. It should, as it’s just more DHCP options, but we need to verify. SecureBoot: I presented Oath’s secureboot model, which doesn’t depend on a centralized attestation server. It made sense to people, and we discussed putting the driver in tree. The process does rely on some enhancements to iPXE, so Oath is going to investigate upstreaming those changes and publishing more documentation, and then an in-tree driver should be no problem. We also discussed Ironic’s current SecureBoot (TrustedBoot?) implementations. Currently it only works with PXE, not iPXE or Grub2. TheJulia is going to look into adding this support. We should be able to do CI jobs for it, as TPM 1.2 and 2.0 emulation both seem to be supported in QEMU as of 2.11. NIC PXE configuration as a clean step: the DRAC driver team has a desire to configure NICs for PXE or not, and sync with the ironic database’s pxe_enabled field. This has gone back and forth in IRC. We were able to resolve some of the issues with it, and rpioso is going to write a small spec to make sure we get the details right. Thursday: more ironic things Neutron cross-project discussion: we discussed SmartNICs, which the Neutron team had also discussed the previous day. In short, SmartNICs are NICs that run OVS. The Neutron team discussed the scalability of their OVS agent running across thousands of machines, and are planning to make some sort of “superagent”. This superagent essentially owns a group of OVS agents. It will talk to Neutron over rabbit as usual, but then use some other protocol to talk to the OVS agents it is managing. This should help with rabbit load even in “standard” Openstack environments, and is especially useful (to me) for minimizing rabbitmq connections from far edge sites. The catch with SmartNICs and Ironic is that the NICs must have power to be configured (and thus the machine must be on). This breaks our general model of “only configure networking with the machine off, to make sure we don’t cross streams between tenants and control plane”. We came to a decent compromise (I think), and agreed to continue in the ironic spec, and revisit the topic in Berlin. Federation: we discussed federation and people seemed interested, however I don’t believe we made any real progress toward getting it done. There’s still a debate whether this should be something in Ironic itself, or if there should just be some sort of proxy layer in front of multiple Ironic environments. To be continued in the spec. Agent polling: we discussed the spec to drop communication from IPA to the conductor. It seems like nobody has major issues with it, and the spec just needs some polishing before landing. L3 deployments: We brought this up, and again there seems to be little contention. I ended up approving the spec shortly after. Neutron event processing: This work has been hanging for years and not getting done. Some folks wondered if we should just poll Neutron, if that gets the work done more quickly. Others wondered if we should even care about it at all (we should). TheJulia is going to follow up with dtantsur and vdrok to see if we can get someone to mainline some caffeine and just get it done. 
CMDB: Oath and CERN presented their work toward speccing out a CMDB application that can integrate with Ironic. We discussed the problems that they are trying to solve and agreed they need solving. We also agreed that strict schema is better than blobjects (© jaypipes). We agreed it probably doesn’t need to be in Ironic governance, but could be one day. The next steps are to start hacking in a new repo in the OpenStack infrastructure, and propose specs for any Ironic integration that is needed. Red Hat and Dell contributors also showed interest in the project and volunteered to help. Some folks are going to try and talk to the wider OpenStack community to find out if there’s interest or needs from projects like Nova/Neutron/Cinder, etc. Stein goals: We put together a list of goals and voted on them. Julia has since proposed the patch to document them: https://review.openstack.org/#/c/603161/ Last thing Thursday: Cross-project discussions with Nova. Summarized here, but lots of detail in the etherpad under the Ironic section: https://etherpad.openstack.org/p/nova-ptg-stein Power sync: We discussed some problems CERN has with the instance power sync (Rackspace also saw these problems). In short, nova asserts power state if the instance “should” be off but the power is turned on out-of-band. Operators definitely need to be aware of this when doing maintenance on active machines, but we also discussed Ironic calling back to Nova when Ironic knows that the power state has been updated (via Ironic API, etc). I volunteered to look at this, and dansmith volunteered to help out. API heaviness: We discussed how many API calls our virt driver does. As mentioned earlier, I proposed a patch to make the power sync loop more lightweight. There’s also lots of polling for tasks like deploy and rescue, which we can dramatically reduce with a callback from Ironic to Nova. I also volunteered to investigate this, and dansmith again agreed to help. Compute host grouping: Ironic now has a mechanism for grouping conductors to nodes, and we want to mirror that in Nova. We discussed how to take the group as a config option and be able to find the other compute services managing that group, so we can build the hash ring correctly. We concluded that it’s a really hard problem (TM), and agreed to also add a config option like “peer_list” that can be used to list other compute services in the same group. This can be read dynamically each time we build the hash ring, or can be a mutable config with updates triggered by a SIGHUP. We’ll hash out the details in a blueprint or spec. Again, I agreed to begin the work, and dansmith agreed to help. Capabilities filter: This was the last topic. It’s been on the chopping block for ages, but we are just now reaching the point where it can be properly deprecated. We discussed the plan, and mostly agreed it was good enough. johnthetubaguy is going to send the plan wider and make sure it will work for folks. We also discussed modeling countable resources on Ironic resource providers, which will work as long as there is still some resource class with an inventory of one, like we have today. Some folks may investigate doing this, but it’s fuzzy how much people care or if we really need/want to do it. Friday: kind of bummed around the Ironic and TC rooms. Lots of interesting discussions, but nothing I feel like writing about here (as Ironic conversations were things like code deep-dives not worth communicating widely, and the TC topics have been written about to death). 
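To make the "compute host grouping" notes above concrete (a toy sketch with hypothetical host names, not the hash ring code Nova and Ironic actually use): every compute service that reads the same peer_list builds the same ring, so each Ironic node maps to exactly one service with no extra coordination, and re-reading the option (for example on SIGHUP) simply rebuilds the ring.

    import bisect
    import hashlib

    class HashRing(object):
        def __init__(self, peers, replicas=16):
            # Place several replicas of each peer on the ring for a smoother
            # distribution when peers are added or removed.
            self._ring = []
            for peer in peers:
                for i in range(replicas):
                    key = hashlib.sha256(('%s-%d' % (peer, i)).encode()).hexdigest()
                    self._ring.append((key, peer))
            self._ring.sort()
            self._keys = [k for k, _ in self._ring]

        def owner(self, node_uuid):
            # A node is owned by the first peer clockwise from its hash.
            key = hashlib.sha256(node_uuid.encode()).hexdigest()
            idx = bisect.bisect(self._keys, key) % len(self._ring)
            return self._ring[idx][1]

    # Hypothetical peer_list contents:
    ring = HashRing(['compute-1', 'compute-2', 'compute-3'])
    print(ring.owner('5a3c2f62-9f1e-4c1d-8a7b-1234567890ab'))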
// jim