[Edge-computing] Lab requirements collection

Javier Rojas Balderrama javier.rojas-balderrama at inria.fr
Tue Sep 17 14:41:00 UTC 2019


Hello all,

Just a reminder that there are some resources already listed in this 
document (section 4):

https://docs.openstack.org/developer/performance-docs/

It looks like the list has not been updated recently. We might also 
include the specs from Packet.
Concerning subsection 4.3, the available resources are provided under an 
open access program that allows external users to get an account on that 
platform. You can see the details at the following link and check how it 
fits the expected test environment setup.

https://docs.openstack.org/developer/performance-docs/labs/grid5000.html

Best,
--
JRB


On 12/06/2019 13:55, Waines, Greg wrote:
> DOH ... I was double booked again for meeting this week, apologies.
> 
> An update on this work:
> 
>   * RE: ‘Distributed MVP’ Edge-Computing Group Architecture TEST
>     ENVIRONMENT SETUP
>       o As mentioned previously, will use OpenStack StarlingX as the
>         solution for the ‘Distributed MVP’-version of the Edge-Computing
>         Group’s MVP Architecture
>       o I have begun to deploy this on https://www.packet.com
>           + i.e. as mentioned Packet has donated resources from its bare
>             metal cloud deployment to the OpenStack Starling project
>           + Curtis C. has allowed me to use these resources for this
>             activity
>       o I have stood up the Central Cloud ... in their New Jersey location
>           + Some minor challenges/issues in doing this on packet.com
>               # Interworking with Packet’s iPXE solution for PXE booting
>                 the initial StarlingX Controller took a bit of time to
>                 get fully correct.
>               # Packet’s L2 solution seems to be blocking multicast packets
>                   * I’ve raised a ticket
>                   * I have a workaround as the L2 multicast packets are
>                     for local connectivity testing in StarlingX ... so
>                     disabling that for now.
>               # Packet’s L2 solution does not have a clean mechanism for
>                 interworking with their L3 solution ... or I haven’t
>                 figured it out yet
>                   * E.g. they conceptually need a gateway router with
>                     SNAT ... like in openstack neutron ...
>                   * Basically have to do that myself ... and really just
>                     did a simple NAT ... but burns a server ($) to do that
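>                   * A rough sketch of that simple NAT gateway (the
>                     interface names and the 10.0.0.0/24 subnet below are
>                     my assumptions for illustration, not Packet
>                     specifics):
> 
> ```
> # Enable IPv4 forwarding on the server acting as the gateway
> sysctl -w net.ipv4.ip_forward=1
> 
> # Masquerade (SNAT) traffic from the private L2 subnet (assumed
> # 10.0.0.0/24 behind eth1) out the public interface (assumed eth0)
> iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
> iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
> iptables -A FORWARD -i eth0 -o eth1 -m state \
>   --state ESTABLISHED,RELATED -j ACCEPT
> ```
> 
>                     The other hosts on the L2 network then point their
>                     default route at this server’s private address.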
>       o Starting to set up 2x small AIO Subclouds next
>           + One in their Amsterdam location and one in their California
>             location
> 
>       o After I get the control plane set up ... I imagine getting the
>         Data Networks underlying the Neutron Tenant Networks will be a
>         challenge in the Packet L2/L3 solution ... TBD
> 
> Greg.
> 
> *From: *Greg Waines <Greg.Waines at windriver.com>
> *Date: *Tuesday, June 4, 2019 at 8:47 AM
> *To: *"ANDREAS.FLORATH at TELEKOM.DE" <ANDREAS.FLORATH at TELEKOM.DE>, 
> "gergely.csatari at nokia.com" <gergely.csatari at nokia.com>, 
> "edge-computing at lists.openstack.org" <edge-computing at lists.openstack.org>
> *Cc: *"matthias.britsch at telekom.de" <matthias.britsch at telekom.de>
> *Subject: *Re: [Edge-computing] Lab requirements collection
> 
> /ACTION(gwaines): Put together requirements and start collecting an 
> inventory of hardware that can be used for a testing lab
> Requirements for both distributed and centralized MVPs/
> 
> Here’s an update on where I am on this:
> 
> *_For Testing DISTRIBUTED MVP:_*
> 
> OpenStack Deployment:   StarlingX Distributed Cloud
> 
> Hardware Node Types & Numbers:
> 
>   * Central Cloud
>       o 2x Controllers
>   * Subcloud #1
>       o 1x All-In-One Simplex Deployment
>   * Subcloud #2
>       o 2x Controllers
>       o 2x Computes
> 
> Hardware Details:
> 
>   * Controller
>       o Minimum Processor Class: Dual-CPU Intel® Xeon® E5 26xx Family
>         (SandyBridge), 8 cores/socket
>       o Minimum Memory: 64 GB
>       o Minimum Disks:
>           + Primary Disk: 500 GB SSD
>           + Secondary Disk: 500 GB HDD (min. 10K RPM)
>       o 2x Physical or Logical (VLAN) Ports
>           + OAM/API interface – 1GE or 10GE
>           + MGMT interface – 10GE
>   * Compute
>       o Minimum Processor Class: Dual-CPU Intel® Xeon® E5 26xx Family
>         (SandyBridge), 8 cores/socket
>       o Minimum Memory: 32 GB
>       o Minimum Disks:
>           + Primary Disk: 500 GB SSD
>           + Secondary Disk: 500 GB HDD (min. 10K RPM)
>       o 2x Physical Ports
>           + MGMT interface – 1GE or 10GE
>           + DATA interface – 10GE
>   * All-in-One Simplex
>       o Minimum Processor Class: Dual-CPU Intel® Xeon® E5 26xx Family
>         (SandyBridge), 8 cores/socket
>       o Minimum Memory: 64 GB
>       o Minimum Disks:
>           + Primary Disk: 500 GB SSD
>       o 2x Physical Ports
>           + OAM/API interface – 1GE
>           + DATA interface – 10GE
> 
> Hardware Deployment OPTIONS:
> 
>   * https://www.packet.com
>       o Packet has a bare-metal cloud solution.
>       o Packet and the StarlingX Project are working together on a joint
>         activity.
>       o Packet has donated a number of $$/month of resources for the
>         StarlingX Project to use.
>       o Curtis Collicut (interdynamix.com), David Paterson (Dell) and I
>         are currently setting this up.
>           + Working through some basic details on running StarlingX on
>             Packet, e.g.:
>               # Setting up an IP NAT/Router on StarlingX’s L2 OAM Network,
>               # Enabling Packet servers to PXE boot from another Packet
>                 server.
> 
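> For the PXE-boot item above, chain-loading the installer via an iPXE 
> script can be sketched roughly as follows (the boot-server placeholder 
> and file names are my assumptions for illustration, not the actual 
> setup):
> 
> ```
> #!ipxe
> dhcp
> kernel http://<boot-server>/starlingx/vmlinuz console=ttyS1,115200
> initrd http://<boot-server>/starlingx/initrd.img
> boot
> ```
> 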
>   * https://www.telekom.com/en
>       o At the Edge Computing Group PTG meeting in Denver, Deutsche
>         Telekom offered hardware for this activity.
> 
> *_For Testing CENTRALIZED MVP:_*
> 
> OpenStack Deployment:   Nova Cell-based Deployment
> 
> Hardware Node Types & Numbers:
> 
>   * Central Cloud
>       o 1x Controller
>   * Subcloud #1 ← Configured as NOVA CELL (Details TBD)
>       o 1x Controller
>       o 1x Compute
>   * Subcloud #2 ← Configured as NOVA CELL (Details TBD)
>       o 1x Controller
>       o 2x Computes
> 
> Hardware Details:
> 
>   * Controller
>       o TBD
>   * Compute
>       o TBD
> 
> Hardware Deployment OPTIONS:
> 
>   * https://www.packet.com
>       o ... I suspect we don’t have enough $$/month to do both
>         deployments at the same time on Packet.
>   * https://www.telekom.com/en
>       o Possibly look at setting this MVP Architecture option up on
>         Deutsche Telekom hardware.
> 
> Greg.
> 
> *From: *Greg Waines <Greg.Waines at windriver.com>
> *Date: *Tuesday, June 4, 2019 at 6:43 AM
> *To: *"ANDREAS.FLORATH at TELEKOM.DE" <ANDREAS.FLORATH at TELEKOM.DE>, 
> "gergely.csatari at nokia.com" <gergely.csatari at nokia.com>, 
> "edge-computing at lists.openstack.org" <edge-computing at lists.openstack.org>
> *Cc: *"matthias.britsch at telekom.de" <matthias.britsch at telekom.de>
> *Subject: *Re: [Edge-computing] Lab requirements collection
> 
> DOH ... something came up and I cannot make the meeting today.
> 
> I will send out an email today on status of this work.
> 
> Greg.
> 
> *From: *Greg Waines <Greg.Waines at windriver.com>
> *Date: *Friday, May 31, 2019 at 6:47 AM
> *To: *"ANDREAS.FLORATH at TELEKOM.DE" <ANDREAS.FLORATH at TELEKOM.DE>, 
> "gergely.csatari at nokia.com" <gergely.csatari at nokia.com>, 
> "edge-computing at lists.openstack.org" <edge-computing at lists.openstack.org>
> *Cc: *"matthias.britsch at telekom.de" <matthias.britsch at telekom.de>
> *Subject: *Re: [Edge-computing] Lab requirements collection
> 
> Agreed I did volunteer.
> 
> I can put something together for next week’s meeting.
> 
> Greg.
> 
> *From: *"ANDREAS.FLORATH at TELEKOM.DE" <ANDREAS.FLORATH at TELEKOM.DE>
> *Date: *Friday, May 31, 2019 at 5:58 AM
> *To: *"gergely.csatari at nokia.com" <gergely.csatari at nokia.com>, 
> "edge-computing at lists.openstack.org" <edge-computing at lists.openstack.org>
> *Cc: *"matthias.britsch at telekom.de" <matthias.britsch at telekom.de>
> *Subject: *Re: [Edge-computing] Lab requirements collection
> 
> Hello!
> 
> We are also waiting for somebody to ask for hardware ;-)
> 
> IMHO Greg volunteered to collect requirements:
> 
> https://etherpad.openstack.org/p/edge-wg-ptg-preparation-denver-2019
> 
>> ACTION(gwaines): Put together requirements and start collecting an inventory of hardware that can be used for a testing lab
>> Requirements for both distributed and centralized MVPs
> 
>> Greg Waines, greg.waines at windriver.com, GregWaines, in person
> 
> Kind regards
> 
> Andre
> 
> ------------------------------------------------------------------------
> 
> *From:*Csatari, Gergely (Nokia - HU/Budapest) <gergely.csatari at nokia.com>
> *Sent:* Tuesday, May 28, 2019 17:05
> *To:* edge-computing at lists.openstack.org
> *Subject:* [Edge-computing] Lab requirements collection
> 
> Hi,
> 
> During the PTG sessions we agreed that we will try to build and verify 
> the minimal reference architectures (formerly known as MVP 
> architectures). We also discovered that we might need some 
> hardware for this. Some companies were kind enough to promise some 
> hardware resources for us if we can define the “lab requirements” for 
> these. There was someone in the room who volunteered for this task, but 
> unfortunately I forgot the name.
> 
> Can someone please remind me who was the kind person to volunteer for 
> this task?
> 
> Thanks,
> 
> Gerg0
> 
> 
> _______________________________________________
> Edge-computing mailing list
> Edge-computing at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing
> 


