[Edge-computing] [tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching

Bogdan Dobrelya bdobreli at redhat.com
Fri Nov 2 16:32:43 UTC 2018

Hello folks.
Here is an update for today. I created a draft [0] and spent some time 
setting up a LaTeX build with live updating of the compiled PDF... The 
latter is only informational; if someone wants to contribute, please 
follow the instructions listed at the link (hint: you don't need any 
LaTeX experience, basic Markdown knowledge should be enough!)


On 10/31/18 6:54 PM, Ildiko Vancsa wrote:
> Hi,
> Thank you for sharing your proposal.
> I think this is a very interesting topic with a list of possible solutions, some of which this group is also discussing. It would also be great to learn more about the IEEE activities and to gain experience with the process within this group going forward.
> I personally do not have experience with IEEE conferences, but I’m happy to help with the paper if I can.
> Thanks,
> Ildikó

(added from the parallel thread)
>> On 2018. Oct 31., at 19:11, Mike Bayer <mike_mp at zzzcomputing.com> wrote:
>> On Wed, Oct 31, 2018 at 10:57 AM Bogdan Dobrelya <bdobreli at redhat.com> wrote:
>>> (cross-posting openstack-dev)
>>> Hello.
>>> [tl;dr] I'm looking for co-author(s) for a position paper [0], "Edge
>>> clouds data consistency requirements and challenges" (the paper
>>> submission deadline is Nov 8).
>>> The problem scope is synchronizing control-plane and/or
>>> deployment-specific data (not necessarily limited to OpenStack) across
>>> remote Edge sites and the central Edge and management site(s),
>>> including the same aspects for overclouds and undercloud(s), in TripleO
>>> terms, and for other deployment tools of your choice.
>>> Another goal is to avoid diverging solutions for managing Edge
>>> deployments and the control planes of the edges. The same applies to
>>> tenants, if we think of tenants also doing Edge deployments based on
>>> Edge Data Replication as a Service, say for Kubernetes/OpenShift on
>>> top of OpenStack.
>>> So the paper should name the outstanding problems, define data
>>> consistency requirements, and pose possible solutions for
>>> synchronization and conflict resolution, supporting maximum-autonomy
>>> cases for isolated sites with the capability to eventually catch up
>>> with the distributed state. That could be a global database [1],
>>> something different perhaps (see the causal-real-time consistency
>>> model [2],[3]), or even git. And probably more than that?
>>> (looking for ideas)
>> I can offer detail on whatever aspects of the "shared / global
>> database" idea are of interest.  The way we're doing it with Galera
>> for now is simple and modestly effective for the moment, but it
>> doesn't have any of the hallmarks of a long-term, canonical solution,
>> because Galera is not well suited to being present on many (dozens of)
>> endpoints.  The concept the StarlingX folks were talking about, that
>> of independent databases synchronized by some kind of middleware, is
>> potentially more scalable; however, I think the best approach would be
>> API-level replication, that is, you have a bunch of Keystone services,
>> and a process regularly accesses the APIs of these Keystone services
>> and cross-publishes state among all of them.  Clearly the big
>> challenge with that is how to resolve conflicts; I think the answer
>> lies in the fact that the data being replicated would be of limited
>> scope and would consist of mostly or fully non-overlapping records.
>> That is, I think "global database" is a cheap way to get what would be
>> more effective as asynchronous state synchronization between identity
>> services.
> Recently we've also been exploring federation with an IdP (Identity Provider) master: https://wiki.openstack.org/wiki/Keystone_edge_architectures#Identity_Provider_.28IdP.29_Master_with_shadow_users
> One of the pros is that it removes the need for synchronization and potentially increases scalability.
> Thanks,
> Ildikó
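To make the API-level replication idea above concrete, here is a minimal sketch of a cross-publishing pass. This is purely illustrative and does not use real Keystone API calls: the `SiteAPI` class and its methods are hypothetical stand-ins for a per-site record store, and conflicts are resolved last-writer-wins by timestamp on the assumption (per the thread) that records are mostly non-overlapping between sites.

```python
class SiteAPI:
    """Hypothetical stand-in for one edge site's identity API
    (not a real Keystone client)."""

    def __init__(self, name):
        self.name = name
        self.records = {}  # record_id -> (payload, updated_at)

    def list_records(self):
        # Snapshot of this site's current state.
        return dict(self.records)

    def upsert(self, record_id, payload, updated_at):
        current = self.records.get(record_id)
        # Last-writer-wins: only accept strictly newer versions.
        if current is None or updated_at > current[1]:
            self.records[record_id] = (payload, updated_at)


def sync_sites(sites):
    """One pass of the cross-publishing process: read every site's
    state, then replay every record to every site."""
    snapshots = [site.list_records() for site in sites]
    for snapshot in snapshots:
        for rid, (payload, ts) in snapshot.items():
            for site in sites:
                site.upsert(rid, payload, ts)


a, b = SiteAPI("edge-a"), SiteAPI("edge-b")
a.upsert("user:alice", {"role": "admin"}, 1.0)
b.upsert("user:bob", {"role": "member"}, 2.0)
b.upsert("user:alice", {"role": "reader"}, 3.0)  # newer write wins on conflict
sync_sites([a, b])
```

After one pass both sites converge on the same state, with the newer "alice" record winning. A real design would need durable version vectors or causal metadata rather than wall-clock timestamps, which is exactly where the consistency-model questions in the paper come in.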

Best regards,
Bogdan Dobrelya,
IRC: #bogdando
