[Edge-computing] Afterlife of the PTG edge discussions

Jonathan Bryce jonathan at openstack.org
Mon Mar 19 18:20:19 UTC 2018


Hi everyone,

As a reminder, we have a call scheduled for tomorrow at 0700 PDT / 0900 CDT / 1400 UTC. I would like to get into a discussion on how to begin implementing a POC for the new sync service project we started discussing in Dublin.

I have tried to consolidate the notes that were taken in various places, starting on line 91 of https://etherpad.openstack.org/p/edge-alans-problems
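To help frame the POC discussion, here is a minimal sketch of what the central sync loop from the Dublin notes might look like. Every name in it (SyncItem, sync_once, the state strings) is a placeholder I made up for illustration, not an agreed design:

```python
# Hypothetical sketch of the central sync loop discussed in Dublin.
# All names here are placeholders for the POC discussion, not an agreed design.
from dataclasses import dataclass, field


@dataclass
class SyncItem:
    """One item of system-wide data plus its sync metadata."""
    resource: str                  # e.g. "glance/image/cirros" (made-up identifier scheme)
    target_regions: list           # explicit subset of regions, or ["*"] for all
    state: dict = field(default_factory=dict)  # region name -> "in-sync" | "out-of-sync"


def sync_once(item: SyncItem, regions: list, push) -> None:
    """Try to push one item to every target region, recording per-region state.

    `push(resource, region)` stands in for whatever transport the POC picks;
    a failed region is marked out-of-sync and retried on the next pass rather
    than aborting the whole sync, matching the retry-continuously requirement.
    """
    targets = regions if item.target_regions == ["*"] else item.target_regions
    for region in targets:
        try:
            push(item.resource, region)
            item.state[region] = "in-sync"
        except Exception:
            item.state[region] = "out-of-sync"
```

The per-region state dict is also what a sync-status query would read from, so the same record serves both the push loop and the REST API.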

If you are interested in helping to kick off development of this project, please join us tomorrow morning. Meeting link: https://zoom.us/j/777719876

Thanks,

Jonathan



> On Mar 2, 2018, at 6:59 AM, Christopher Price <christopher.price at est.tech> wrote:
> 
> lol, I like it.  If we go with that name I'm using "rex" as an abbreviation.  ;)
> From: Waines, Greg <Greg.Waines at windriver.com>
> Sent: Friday, March 2, 2018 11:50:06 AM
> To: Csatari, Gergely (Nokia - HU/Budapest); lebre.adrien at free.fr; edge-computing at lists.openstack.org
> Subject: Re: [Edge-computing] Afterlife of the PTG edge discussions
>  
> Yeah, there were some interesting discussions.
>  
> I thought we made the most progress on Tuesday morning, where we worked through “Alan’s Problems” and
> were able to put down some clear requirements and even some stakes in the ground wrt the beginnings of a strawman architecture.
>  
> My notes and view of those initial requirements and initial strawman architecture that we discussed are below:
> ( ... feel free to throw harpoons at my interpretation ...)
>  
> First, I’d like to suggest that we do the initial work on this New King Bird with the “goal” of NOT changing ANY existing service (like keystone, or glance, or ...).  I think that would allow us to move forward more quickly ... and it is not that restrictive wrt being able to meet the requirements below.
>  
> • multi-region solution
>     • i.e. central region and many edge regions
>     • where region == a geographic edge site
>     • for larger solutions, this would extend to multiple levels,
> e.g. central region -> edge region -> further-out edge regions (sites) ...
> • scaling up to 1,000s of regions/sites eventually
> • edge region/site can be any size, from a small 1-node system to a 100s-of-nodes system
> • a new service runs on the central region
> for configuring/synchronizing/querying-sync-status-of system-wide data across all edge regions
>     • referred to in the meeting as the "New King Bird" (NKB)
>     • supports a REST API for configuring/querying-sync-status-of system-wide data across all regions
>     • where system-wide data includes:
>         • users, tenants, VM images (glance) ... as discussed in the meeting
>         • ... and, though we never discussed it, I would also throw in:
>             • nova flavors (& extra specs), nova key pairs, neutron security groups, and
>             • nova and neutron quotas (although that is more complex than simple
> synchronization if you want quota management across the system/multiple edge regions)
>     • will be able to specify a sync-policy of 'to ALL edge regions' or 'to a specific subset of edge regions'
> for each item of system-wide data
>     • the synchronization process will retry continuously on failure
>     • the synchronization process will automatically sync data to any newly created/joined edge region
>     • users will be able to query the sync state of system-wide data on all/individual edge regions
> • the ABC service in the central region will hold the system-wide ABC data to be synchronized
>     • e.g. keystone, glance, nova, neutron, ...
> • the New King Bird service will hold the metadata wrt sync-policy and sync-state for system-wide data
> • for a large multi-level region hierarchy,
> the New King Bird service would also run on selected edge clouds, which would then sync data to further-out edge regions/sites.
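> The REST API surface above might translate into payloads along these lines. This is purely a strawman to throw harpoons at; the endpoint paths, field names, and state values are all made up, not an agreed interface:

```python
# Strawman NKB payloads; every path and field name below is hypothetical.

# PUT /v1/sync-policies/<resource-id>
# Sync-policy is either 'to ALL edge regions' ...
policy_all = {"resource": "keystone/user/alice", "regions": "all"}

# ... or 'to a specific subset of edge regions'.
policy_subset = {
    "resource": "glance/image/cirros",
    "regions": ["edge-region-1", "edge-region-7"],
}

# GET /v1/sync-status/<resource-id>
# Per-region sync state, so a user can query all or individual edge regions;
# "retrying" reflects the retry-continuously-on-failure requirement.
status_example = {
    "resource": "glance/image/cirros",
    "regions": {
        "edge-region-1": "in-sync",
        "edge-region-7": "retrying",
    },
}
```

> Whether the policy lives per resource or per resource type is exactly the kind of question a POC should settle.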
>  
> And finally, I have to throw out the first new-name suggestion for the ‘new king bird’ project:
> how about Tyrannus (i.e. a genus of large insect-eating birds, commonly known as kingbirds)?
>  
> let me know what you think,
> Greg.
>  
>  
>  
> From: "Csatari, Gergely (Nokia - HU/Budapest)" <gergely.csatari at nokia.com>
> Date: Wednesday, February 28, 2018 at 1:58 PM
> To: "lebre.adrien at free.fr" <lebre.adrien at free.fr>, "edge-computing at lists.openstack.org" <edge-computing at lists.openstack.org>
> Subject: [Edge-computing] Afterlife of the PTG edge discussions
>  
> Hi,
>  
> Thanks, Adrien, for facilitating the workshop. It was a very interesting and diverse discussion 😉
>  
> I can help in organizing the notes after the PTG workshop.
>  
> We used 3 etherpads:
> - PTG schedule: https://etherpad.openstack.org/p/edge-ptg-dublin
> - Gap analysis: https://etherpad.openstack.org/p/edge-gap-analysis
> - Alan's problems: https://etherpad.openstack.org/p/edge-alans-problems
>  
> In the PTG schedule and the Gap analysis etherpads we have high-level discussions, mostly about what we would like to do, while in Alan's problems we have detailed notes about how we would like to solve some of the problems.
>  
> I think both of these are valuable, and we should somehow produce the following:
> - a description of what we would like to achieve, maybe in some format other than a list of etherpads
> - a list of concrete requirements for specific projects (whether existing or new)
> - maybe some prototypes based on the Tuesday afternoon discussions for the keystone data and image distribution (by the way, can someone post the picture of THE PLAN?)
>  
> Any opinions?
>  
> Br,
> Gerg0
> _______________________________________________
> Edge-computing mailing list
> Edge-computing at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing



