[Edge-computing] Image handling in edge environment

Bogdan Dobrelya bdobreli at redhat.com
Wed Aug 8 16:09:45 UTC 2018


> -----Original Appointment-----
> From: Csatari, Gergely (Nokia - HU/Budapest)
> Sent: Friday, July 27, 2018 8:58 AM
> To: Csatari, Gergely (Nokia - HU/Budapest); 'edge-computing'
> Cc: Kristi Nikolla; Paul Bankert; David.Paterson at dell.com; Silverman, Ben; Arkady.Kanevsky at dell.com; Srikumar Venugopal; D'ANDREA, JOE (JOE); Giulio Fidente; Shuquan Huang; Jan Walzer; Martin Bäckström; saiyagar at redhat.com; Christopher Price; Beierl, Mark
> Subject: Image handling in edge environment
> When: szerda 2018. augusztus 1 18:00-19:00 (UTC+01:00) Belgrade, Bratislava, Budapest, Ljubljana, Prague.
> Where: webex / #edge-computing-group
> 
> Hi,
> 
> Let's spend this time to discuss the alternatives for Image handling in edge environment listed in here: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment
> 
> 
>   *   Check if the alternatives were captured correctly
>   *   Check the pros and cons
>   *   Check the concerns and questions

I have a few questions (or rather just thoughts) about the "new 
synchronization service" discussed earlier in [0]:

> *   Several Glances with an independent syncronisation service, sych via Glance API [2<https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_sych_via_Glance_API>]
>      *   Pros:
>         *   Every edge cloud instance can have a different Glance backend
>         *   Can support multiple OpenStack versions in the different edge cloud instances
>         *   Can be extended to support multiple VIM types
>      *   Cons:
>         *   Needs a new synchronisation service
> [Greg] Don’t believe this is a big con ... suspect we are going to need this new synchronization service for synchronizing resources of a number of other openstack services ... not just glance.
> [G0]: I agree, it is not a big con, but it is a con 😊 Should I add some note saying, that a synch service is most probably needed anyway?

... and some concerns about the database replication capabilities needed 
to support always-available autonomy (the AAA requirement, from here on) 
of cloud instances, which is (IIUC) the ability to make progress (i.e. 
create DB records in the local cloud DB) while the instance is managed 
locally and disconnected from the central cloud/control plane. Here they are.

We *will* need that new sync service for any service that is expected 
to sync its state from edge cloud instances to the central cloud of 
clouds while meeting the AAA requirement. And highly likely a new or 
additional data storage solution for that purpose as well, at the very 
least to replicate state changes correctly and to resolve the conflicts 
that are inevitable in the described AAA case.
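As a toy illustration of the conflict-resolution part, here is a minimal 
sketch of merging image-metadata records synced from two sites that 
diverged while partitioned, resolved with last-writer-wins on a logical 
timestamp. This is one of the simplest strategies such a sync service 
could start with; all names and record shapes are illustrative, not any 
proposed API:

```python
# Hypothetical sketch: merging per-site {image_id: (logical_ts, metadata)}
# maps after a partition heals. Conflicts (the same image updated on two
# disconnected sites) are resolved with last-writer-wins on the logical
# timestamp. Everything here is illustrative only.

def merge_records(local, remote):
    """Merge two record maps, keeping the entry with the higher
    logical timestamp for each image_id."""
    merged = dict(local)
    for image_id, (ts, meta) in remote.items():
        if image_id not in merged or ts > merged[image_id][0]:
            merged[image_id] = (ts, meta)
    return merged

# The central cloud and an edge site diverged while partitioned:
central = {"img-1": (3, {"visibility": "public"}),
           "img-2": (1, {"visibility": "private"})}
edge    = {"img-2": (2, {"visibility": "public"}),   # updated locally on the edge
           "img-3": (1, {"visibility": "private"})}  # created locally on the edge

merged = merge_records(central, edge)
```

Last-writer-wins silently drops one side of a concurrent update, which is 
exactly why the choice of consistency model (below) matters.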

The issue is that the mysql/mariadb/galera database transactional 
consistency models [1], like read committed or repeatable read, do not 
support that AAA scenario well. The repeatable read transaction isolation 
(TI) level, and stronger consistency models, are "Unavailable" by design 
(see [1] for the terminology I'm using here). That means progress may 
have to be stopped (no writes accepted and, sometimes, no reads) to 
maintain DB consistency in corner cases, like being partitioned/disconnected 
from the central cloud. But that is a totally expected situation for AAA, 
which requires writes to keep proceeding on edge cloud instances!
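A toy model of why this happens: a quorum-based cluster (Galera-style) 
blocks writes on the minority side of a partition, while AAA expects the 
disconnected edge site to keep writing. The class below is a sketch of 
the quorum rule only, not of any real Galera internals:

```python
# Toy illustration: a quorum-based replication cluster refuses writes on
# any node that cannot see a majority of the cluster -- the opposite of
# what the AAA requirement expects from a partitioned edge site.
# Illustrative only; no real cluster behaves exactly this simply.

class QuorumNode:
    def __init__(self, cluster_size, reachable):
        self.cluster_size = cluster_size
        self.reachable = reachable  # nodes this one currently sees, incl. itself

    def can_write(self):
        # Quorum: strictly more than half of the cluster must be reachable.
        return self.reachable > self.cluster_size // 2

# 3-node cluster; the edge node is partitioned off from the other two:
edge_node = QuorumNode(cluster_size=3, reachable=1)
edge_node.can_write()  # False -> no local progress, violating AAA
```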

On the other hand, the read committed transaction isolation level, while 
"Totally available", provides really poor consistency guarantees IMO, 
compared to the better options (see "Sticky Available").

So I believe that new sync service, backed by another data/KV storage 
solution, must support the best of the existing always-available 
consistency models, like the "Sticky Available" causal/real-time causal 
ones [2],[3]. Galera-style inter-site DB replication, like the one 
proposed [4] for Keystone federation, might not fit the aforementioned 
consistency requirements/constraints.
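For intuition on what a causally consistent store tracks, here is a 
hedged sketch of causal ordering via vector clocks, the kind of mechanism 
such stores rely on. Only updates that are *concurrent* (neither causally 
precedes the other) are real conflicts the sync service must resolve; the 
site names in the example are made up:

```python
# Hedged sketch of causal ordering with vector clocks. A vector clock is a
# {site: counter} map; an update a "happens before" b iff a's counters are
# all <= b's and the clocks differ. Illustrative only.

def happens_before(a, b):
    """True if vector clock a causally precedes vector clock b."""
    keys = set(a) | set(b)
    return all(a.get(k, 0) <= b.get(k, 0) for k in keys) and a != b

def concurrent(a, b):
    """Concurrent updates are the only true conflicts; causally
    ordered ones can simply be applied in order."""
    return not happens_before(a, b) and not happens_before(b, a)

# "central" published an image; "edge1" then updated it (causally ordered),
# while "edge2" independently updated the same image (concurrent):
v_central = {"central": 1}
v_edge1   = {"central": 1, "edge1": 2}
v_edge2   = {"central": 1, "edge2": 1}
```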

It's time to consider tools and options for the AAA requirement. WDYT?

[0] 
http://lists.openstack.org/pipermail/edge-computing/2018-July/000322.html
[1] http://jepsen.io/consistency
[2] http://jepsen.io/consistency/models/causal
[3] http://www.cs.cornell.edu/lorenzo/papers/cac-tr.pdf
[4] 
https://wiki.openstack.org/wiki/Keystone_edge_architectures#Keystone_database_replication



>   *   Decide if an alternative is a dead end
> 
> For some strange reasons I do not receive mails from the OpenStack mailing list servers anymore, so if you have anything to discuss about this please use #edge-computing-group or add me directly to the mails.
> 
> Br,
> Gerg0



-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando


