From ildiko at openstack.org Mon Jun 3 13:12:54 2019
From: ildiko at openstack.org (Ildiko Vancsa)
Date: Mon, 3 Jun 2019 15:12:54 +0200
Subject: [Edge-computing] Use-cases bi-weekly calls - next one is today
Message-ID: <72680968-2C5E-4AB8-B09B-B7B449F2A1C9@openstack.org>

Hi,

This is a friendly reminder that we are having the next use cases call today.

You can find the meeting details here: https://wiki.openstack.org/wiki/Edge_Computing_Group#Use_cases

The calendar file is available here: https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/edge/OSF-Edge-WG-Use-Cases-Weekly-Calls.ics

Thanks,
Ildikó

From Greg.Waines at windriver.com Tue Jun 4 10:43:51 2019
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Tue, 4 Jun 2019 10:43:51 +0000
Subject: [Edge-computing] Lab requirements collection
In-Reply-To: <9F42D42F-8BCF-4437-B026-CD102212AB33@windriver.com>
References: <9F42D42F-8BCF-4437-B026-CD102212AB33@windriver.com>
Message-ID:

DOH ... something came up and I cannot make the meeting today. I will send out an email today on the status of this work.

Greg.

From: Greg Waines
Date: Friday, May 31, 2019 at 6:47 AM
To: "ANDREAS.FLORATH at TELEKOM.DE" , "gergely.csatari at nokia.com" , "edge-computing at lists.openstack.org"
Cc: "matthias.britsch at telekom.de"
Subject: Re: [Edge-computing] Lab requirements collection

Agreed, I did volunteer. I can put something together for next week’s meeting.

Greg.

From: "ANDREAS.FLORATH at TELEKOM.DE"
Date: Friday, May 31, 2019 at 5:58 AM
To: "gergely.csatari at nokia.com" , "edge-computing at lists.openstack.org"
Cc: "matthias.britsch at telekom.de"
Subject: Re: [Edge-computing] Lab requirements collection

Hello!
We are also waiting for somebody to ask for hardware ;-) IMHO Greg volunteered to collect requirements: https://etherpad.openstack.org/p/edge-wg-ptg-preparation-denver-2019

> ACTION(gwaines): Put together requirements and start collecting an inventory of hardware that can be used for a testing lab
> Requirements for both distributed and centralized MVPs
> Greg Waines, greg.waines at windriver.com, GregWaines, in person

Kind regards

Andre

________________________________
From: Csatari, Gergely (Nokia - HU/Budapest)
Sent: Tuesday, May 28, 2019 17:05
To: edge-computing at lists.openstack.org
Subject: [Edge-computing] Lab requirements collection

Hi,

During the PTG sessions we agreed that we will try to build and verify the minimal reference architectures (formerly known as MVP architectures). We also discovered that we might need some hardware for this. Some companies were kind enough to promise hardware resources for us if we can define the “lab requirements” for these. There was someone in the room who volunteered for this task, but unfortunately I forgot the name. Can someone please remind me who was the kind person to volunteer for this task?

Thanks,
Gerg0
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Greg.Waines at windriver.com Tue Jun 4 12:47:56 2019
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Tue, 4 Jun 2019 12:47:56 +0000
Subject: [Edge-computing] Lab requirements collection
In-Reply-To:
References: <9F42D42F-8BCF-4437-B026-CD102212AB33@windriver.com>
Message-ID:

ACTION(gwaines): Put together requirements and start collecting an inventory of hardware that can be used for a testing lab
Requirements for both distributed and centralized MVPs

Here’s an update on where I am on this:

For Testing DISTRIBUTED MVP:

OpenStack Deployment: StarlingX Distributed Cloud

Hardware Node Types & Numbers:
· Central Cloud
  o 2x Controllers
· Subcloud #1
  o 1x All-In-One Simplex Deployment
· Subcloud #2
  o 2x Controllers
  o 2x Computes

Hardware Details:
· Controller
  o Minimum Processor Class: Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8 cores/socket
  o Minimum Memory: 64G
  o Minimum Disks:
    § Primary Disk: 500GB SSD
    § Secondary Disk: 500GB HD (min. 10K RPM)
  o 2x Physical or Logical (VLAN) Ports
    § OAM/API interface – 1GE or 10GE
    § MGMT interface – 10GE
· Compute
  o Minimum Processor Class: Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8 cores/socket
  o Minimum Memory: 32G
  o Minimum Disks:
    § Primary Disk: 500GB SSD
    § Secondary Disk: 500GB HD (min. 10K RPM)
  o 2x Physical Ports
    § MGMT interface – 1GE or 10GE
    § DATA interface – 10GE
· All-in-One Simplex
  o Minimum Processor Class: Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8 cores/socket
  o Minimum Memory: 64G
  o Minimum Disks:
    § Primary Disk: 500GB SSD
  o 2x Physical Ports
    § OAM/API interface – 1GE
    § DATA interface – 10GE

Hardware Deployment OPTIONS:
· https://www.packet.com
  o Packet has a bare-metal cloud solution
  o Packet and the StarlingX Project are working together on a joint activity
  o Packet has donated a number of $$/month of resources for the StarlingX Project to use
  o Curtis Collicut (interdynamix.com), David Paterson (Dell) and I are currently setting this up now
    § Working through some basic details on running StarlingX on Packet, e.g.
      · Setting up an IP NAT/Router on StarlingX’s L2 OAM Network,
      · Enabling Packet servers to PXE boot from another Packet server.
· https://www.telekom.com/en
  o At the Edge Computing Group PTG meeting in Denver, Deutsche Telekom offered hardware for this activity.

For Testing CENTRALIZED MVP:

OpenStack Deployment: Nova Cell-based Deployment

Hardware Node Types & Numbers:
· Central Cloud
  o 1x Controller
· Subcloud #1 <-- Configured as NOVA CELL (Details TBD)
  o 1x Controller
  o 1x Compute
· Subcloud #2 <-- Configured as NOVA CELL (Details TBD)
  o 1x Controller
  o 2x Computes

Hardware Details:
· Controller
  o TBD
· Compute
  o TBD

Hardware Deployment OPTIONS:
· https://www.packet.com
  o ... suspect we don’t have enough $$/month to do both deployments at the same time on Packet
· https://www.telekom.com/en
  o Possibly look at setting this MVP Architecture option up on Deutsche Telekom hardware

Greg.

From: Greg Waines
Date: Tuesday, June 4, 2019 at 6:43 AM
To: "ANDREAS.FLORATH at TELEKOM.DE" , "gergely.csatari at nokia.com" , "edge-computing at lists.openstack.org"
Cc: "matthias.britsch at telekom.de"
Subject: Re: [Edge-computing] Lab requirements collection

DOH ... something came up and I cannot make the meeting today.
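As a cross-check on the node counts and controller minimums in Greg's distributed-MVP requirements above, they can be captured as structured data. This is an illustrative sketch only; the field names and the validation helper are mine, not part of any StarlingX tooling:

```python
# Hypothetical encoding of the distributed-MVP lab requirements above.
# Field names and the check below are illustrative, not StarlingX tooling.

CONTROLLER_MIN = {
    "cpus": 2,               # dual-socket Xeon E5 26xx class
    "cores_per_socket": 8,
    "memory_gb": 64,
    "primary_disk_gb": 500,  # SSD
    "nic_ports": 2,          # OAM/API + MGMT
}

DISTRIBUTED_MVP = {
    "central_cloud": {"controller": 2},
    "subcloud_1": {"aio_simplex": 1},
    "subcloud_2": {"controller": 2, "compute": 2},
}

def total_nodes(topology):
    """Total number of servers needed for a topology."""
    return sum(n for roles in topology.values() for n in roles.values())

def meets_controller_minimums(server):
    """True if a candidate server satisfies every controller minimum."""
    return all(server.get(key, 0) >= value for key, value in CONTROLLER_MIN.items())

print(total_nodes(DISTRIBUTED_MVP))  # 7 servers for the distributed MVP
```

A donated server's specs can then be checked with `meets_controller_minimums({"cpus": 2, "cores_per_socket": 8, "memory_gb": 64, "primary_disk_gb": 500, "nic_ports": 2})` before it is added to the lab inventory.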
From ildiko at openstack.org Tue Jun 11 18:42:33 2019
From: ildiko at openstack.org (Ildiko Vancsa)
Date: Tue, 11 Jun 2019 20:42:33 +0200
Subject: [Edge-computing] Open Infrastructure Summit and PTG Edge overview and next steps
Message-ID: <50F07C7D-3C56-465A-8088-BE00AA2F14A8@openstack.org>

Hi,

There were a lot of interesting discussions about edge computing at the Open Infrastructure Summit[1] and PTG in Denver. I would like to use the opportunity to share overviews of these, and some of the progress and next steps the community has taken since.

You can find a summary of the Forum discussions here: https://superuser.openstack.org/articles/edge-and-5g-not-just-the-future-but-the-present/

Check the following blog post for a recap of the PTG sessions: https://superuser.openstack.org/articles/edge-computing-takeaways-from-the-project-teams-gathering/

The Edge Computing Group is working towards testing the minimal reference architectures, for which we are putting together hardware requirements. You can catch up and chime in on the discussion on this mail thread: http://lists.openstack.org/pipermail/edge-computing/2019-June/000597.html

For Ironic related conversations since the event check these threads:
* http://lists.openstack.org/pipermail/edge-computing/2019-May/000582.html
* http://lists.openstack.org/pipermail/edge-computing/2019-May/000588.html

We are also in the process of writing up an RFE for Neutron to improve segment range management for edge use cases: http://lists.openstack.org/pipermail/edge-computing/2019-May/000589.html

If you have any questions or comments on any of the above topics, you can respond to this thread, chime in on the above mail threads, reach out on the edge-computing mailing list[2] or join the weekly edge group calls[3]. If you would like to get involved with StarlingX you can find pointers on the website[4].
Thanks,
Ildikó
(IRC: ildikov on Freenode)

[1] https://www.openstack.org/videos/summits/denver-2019
[2] http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing
[3] https://wiki.openstack.org/wiki/Edge_Computing_Group#Meetings
[4] https://www.starlingx.io/community/

From zhaoqihui at chinamobile.com Tue Jun 11 19:56:50 2019
From: zhaoqihui at chinamobile.com (zhaoqihui at chinamobile.com)
Date: Wed, 12 Jun 2019 03:56:50 +0800
Subject: [Edge-computing] [opnfv-tsc] [opnfv-tech-discuss] [edge cloud] Cancelled Bi-weekly meeting
References: <1591D3E0B3F1B31A.20654@lists.opnfv.org>, <201905011058483512433@chinamobile.com>
Message-ID: <2019061203564698295813@chinamobile.com>

Hello Edge Cloud Team,

Today's meeting is cancelled due to the plugfest. There will be an edge session on Thursday, June 13th, at 2:15 PM (GMT+2), if you'd like to join.

Topic: https://wiki.lfnetworking.org/display/LN/2019+June+Event+Topic+Proposals#id-2019JuneEventTopicProposals-Aproposalaboutsinglemanagementplatformforedgecloud
Zoom link: https://zoom.us/j/367716013

Best,
Qihui
China Mobile Research Institute
(+86) 13810659120
-------------- next part --------------
An HTML attachment was scrubbed...

From Greg.Waines at windriver.com Wed Jun 12 11:55:26 2019
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Wed, 12 Jun 2019 11:55:26 +0000
Subject: [Edge-computing] Lab requirements collection
In-Reply-To:
References: <9F42D42F-8BCF-4437-B026-CD102212AB33@windriver.com>
Message-ID:

DOH ... I was double booked again for the meeting this week, apologies.

An update on this work:
* RE: ‘Distributed MVP’ Edge-Computing Group Architecture TEST ENVIRONMENT SETUP
  * As mentioned previously, will use OpenStack StarlingX as the solution for the ‘Distributed MVP’ version of the Edge-Computing Group’s MVP Architecture
  * I have begun to deploy this on https://www.packet.com
    * i.e. as mentioned, Packet has donated resources from its bare metal cloud deployment to the OpenStack StarlingX project
    * Curtis C. has allowed me to use these resources for this activity
  * I have stood up the Central Cloud ... in their New Jersey location
  * Some minor challenges/issues in doing this on packet.com
    * Interworking with Packet’s iPXE solution for PXE booting the initial StarlingX Controller took a bit of time to get fully correct.
    * Packet’s L2 solution seems to be blocking multicast packets
      * I’ve raised a ticket
      * I have a workaround, as the L2 multicast packets are for local connectivity testing in StarlingX ... so disabling that for now.
    * Packet’s L2 solution does not have a clean mechanism for interworking with their L3 solution ... or I haven’t figured it out yet
      * E.g. they conceptually need a gateway router with SNAT ... like in OpenStack Neutron ...
      * Basically have to do that myself ... and really just did a simple NAT ... but burns a server ($) to do that
  * Starting to set up 2x small AIO subclouds next
    * One in their Amsterdam location and one in their California location
  * After I get the control plane set up ... I imagine getting the Data Networks underlying the Neutron Tenant Networks working will be a challenge in the Packet L2/L3 solution ... TBD

Greg.
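The multicast-blocking behaviour Greg describes can be spotted with a small send/receive probe. The sketch below is a generic diagnostic, not StarlingX code; run the sender and receiver on two different hosts to exercise the actual L2 fabric, since a single-host run only tests local multicast loopback:

```python
import socket
import struct

def multicast_probe(group="239.1.1.1", port=50007, timeout=2.0):
    """Send a UDP datagram to a multicast group and report whether it
    comes back.  The group and port here are arbitrary test values."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    try:
        rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        rx.bind(("", port))
        # Join the multicast group on all interfaces.
        mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
        rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        rx.settimeout(timeout)

        tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        tx.sendto(b"probe", (group, port))
        data, _ = rx.recvfrom(1024)
        return data == b"probe"
    except OSError:  # covers socket.timeout and environments without multicast
        return False
    finally:
        rx.close()
        tx.close()

print(multicast_probe())
```

On a fabric that silently drops multicast, the receiver on the second host times out and the probe reports False, which matches the symptom Greg hit on Packet's L2 network.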
From ildiko at openstack.org Wed Jun 12 12:28:59 2019
From: ildiko at openstack.org (Ildiko Vancsa)
Date: Wed, 12 Jun 2019 14:28:59 +0200
Subject: [Edge-computing] CFP reminders
Message-ID: <0949C6B4-F6AB-4823-A41F-BC38DC9972BA@openstack.org>

Hi,

I wanted to draw your attention to a few CFP deadlines that are approaching quickly:

* June 16 - ONS Europe - https://events.linuxfoundation.org/events/open-networking-summit-europe-2019/program/cfp/
* June 22 - OpenInfra Days Nordic - https://www.papercall.io/oidn-stockholm-2019
* July 2 - Open Infrastructure Summit Shanghai - https://www.openstack.org/summit/shanghai-2019

Beth has already submitted a working group update panel for ONS. To coordinate for the Open Infrastructure Summit Shanghai, David created an etherpad: https://etherpad.openstack.org/p/2019-shanghai-summit-talk-proposals

Please let me know if you need help with your session proposals for any of these conferences.

Thanks,
Ildikó

From ildiko.vancsa at gmail.com Wed Jun 12 16:34:39 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Wed, 12 Jun 2019 18:34:39 +0200
Subject: [Edge-computing] [edge][neutron]: PTG conclusions
In-Reply-To:
References:
Message-ID: <709678B5-6E47-468F-96D5-EAF34FAF30B4@gmail.com>

Hi,

I got an RFE submitted; it only captures a brief problem statement at this point: https://bugs.launchpad.net/neutron/+bug/1832526

We can have further discussions on the Launchpad page with more eyes on it from the Neutron team.

Thanks,
Ildikó

> On 2019. May 28., at 17:36, Csatari, Gergely (Nokia - HU/Budapest) wrote:
>
> Hi,
>
> According to my best memories, we agreed at the PTG that Ian will propose a Neutron specification for “Segment ranges in tenant networks configurable by a tenant using an API extension” [1].
>
> Do I remember correctly?
>
> [1]: https://photos.app.goo.gl/hGzBA2Nzu2dfG3if8
>
> Thanks,
> Gerg0
> _______________________________________________
> Edge-computing mailing list
> Edge-computing at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing

From ildiko at openstack.org Thu Jun 13 14:12:16 2019
From: ildiko at openstack.org (Ildiko Vancsa)
Date: Thu, 13 Jun 2019 16:12:16 +0200
Subject: [Edge-computing] China Mobile Edge platform evaluation presentation next Tuesday
Message-ID:

Hi,

I attended a presentation today from Qihui Zhao about China Mobile’s experience evaluating different edge deployment models with various software components. As many of the evaluated components are part of OpenStack and/or StarlingX, I invited her to next week’s Edge Computing Group call (Tuesday, June 18) to share their findings with the working group and everyone who is interested.

For agenda and call details please visit this wiki: https://wiki.openstack.org/wiki/Edge_Computing_Group#Meetings

Please let me know if you have any questions.

Thanks and Best Regards,
Ildikó

From zhaoqihui at chinamobile.com Wed Jun 26 07:34:50 2019
From: zhaoqihui at chinamobile.com (zhaoqihui at chinamobile.com)
Date: Wed, 26 Jun 2019 15:34:50 +0800
Subject: [Edge-computing] [opnfv-tsc] [opnfv-tech-discuss] [edge cloud] Cancelled Bi-weekly meeting
References: <1591D3E0B3F1B31A.20654@lists.opnfv.org>, <201905011058483512433@chinamobile.com>, <15A73D2649D7705B.3974@lists.opnfv.org>
Message-ID: <201906261534489875423@chinamobile.com>

Hello Edge Cloud Team,

As there is no specific topic for today, we'd like to cancel today's meeting.

Best,
Qihui
China Mobile Research Institute
(+86) 13810659120