[Edge-computing] [tripleo][FEMDC] IEEE Fog Computing: Call for Contributions - Deadline Approaching

Bogdan Dobrelya bdobreli at redhat.com
Tue Nov 6 17:44:42 UTC 2018

Folks, I have drafted a few more sections [0] for your proof-reading 
and kind review, please. I also left some notes on TBD items, either for 
the potential co-authors' attention or for myself :)


On 11/5/18 6:50 PM, Bogdan Dobrelya wrote:
> Update: I have not yet found co-authors, so I'll keep drafting that position 
> paper [0],[1]. I've only taken some baby steps so far. I'm open to feedback 
> and contributions!
> PS. The deadline is Nov 9 03:00 UTC, but *maybe* it will be extended, if 
> the event chairs decide to do so. Fingers crossed.
> [0] 
> https://github.com/bogdando/papers-ieee#in-the-current-development-looking-for-co-authors 
> [1] 
> https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/LaTeX/position_paper_1570506394.pdf 
> On 11/5/18 3:06 PM, Bogdan Dobrelya wrote:
>> Thank you for the reply, Flavia:
>>> Hi Bogdan
>>> sorry for the late reply - yesterday was a Holiday here in Brazil!
>>> I am afraid I will not be able to engage in this collaboration with
>>> such a short time...we had to have started this initiative a little
>>> earlier...
>> That's understandable.
>> I had hoped, though, that a position paper is something we (all who read 
>> this, not just you and me) could put together in a couple of days, without 
>> much associated research. It's a position paper, which is not expected 
>> to contain formal proofs or implementation details. The vision for 
>> tooling is the hardest part, though, and indeed requires some time.
>> So let me please [tl;dr] the intended outcome of that position paper:
>> * position: given Always Available autonomy support as a starting point,
>>    define invariants for both the operational and data-storage consistency
>>    requirements of the control/management plane (I've already drafted some
>>    in [0])
>> * vision: show that, in the end, the data synchronization and conflict
>>    resolution solution boils down to having a causally
>>    consistent KVS (causal+, causal-RT, lazy-replication
>>    based, or anything like that), and cannot be achieved with *only* a
>>    transactional distributed database, like a Galera cluster. How
>>    to show that is an open question; we could refer to the existing
>>    papers (COPS, causal-RT, lazy replication et al.) and claim they fit
>>    the defined invariants nicely, while a transactional DB cannot fit them
>>    by design (its consensus protocols require majorities/quorums to
>>    operate, so it cannot remain always available for data put/write
>>    operations). We can probably omit proving that obvious thing formally?
>>    At least for the position paper...
>> * opportunity: that is, basically, designing and implementing such a
>>    causally consistent KVS solution (see the COPS library as an example)
>>    for OpenStack and, ideally, unifying it for PaaS operators
>>    (OpenShift/Kubernetes) and tenants willing to host their containerized
>>    workloads on a PaaS distributed over a Fog Cloud of Edge clouds and
>>    leverage its data synchronization and conflict resolution solution
>>    as-a-service. Like Amazon DynamoDB, for example, except fitting
>>    the edge cases of another cloud stack :)
>> [0] 
>> https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/challenges.md 
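To illustrate the availability argument above: a quorum-free replica can accept writes locally and merge a peer's state later, ordering versions with vector clocks. This is only a hypothetical sketch (the `Replica` and `dominates` names are invented here, not from COPS or any real system); actual causal+ stores also track explicit dependencies and need an application-level resolver for genuinely concurrent versions:

```python
# Hypothetical sketch: an always-available KV replica that orders updates
# causally with vector clocks. All names are illustrative, not a real API.

class Replica:
    def __init__(self, node_id, peers):
        self.node_id = node_id
        self.clock = {p: 0 for p in peers}  # vector clock, one slot per site
        self.store = {}                     # key -> (value, vector clock)

    def put(self, key, value):
        # Local writes always succeed: no quorum, no majority round-trip.
        self.clock[self.node_id] += 1
        self.store[key] = (value, dict(self.clock))

    def sync_from(self, other):
        # Eventually merge state from a peer; causally newer versions win,
        # concurrent versions would need an application-level resolver.
        for key, (value, vc) in other.store.items():
            mine = self.store.get(key)
            if mine is None or dominates(vc, mine[1]):
                self.store[key] = (value, dict(vc))
        for p, t in other.clock.items():
            self.clock[p] = max(self.clock.get(p, 0), t)

def dominates(a, b):
    """True if vector clock a causally follows vector clock b."""
    keys = set(a) | set(b)
    return all(a.get(k, 0) >= b.get(k, 0) for k in keys) and a != b
```

Note there is no coordination on the write path at all, which is exactly what a majority-quorum transactional DB cannot offer while partitioned.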
>>> As for working collaboratively with LaTeX, I would recommend using
>>> Overleaf - it is not that difficult and has lots of editing features,
>>> such as markdown and track changes, for instance.
>>> Thanks and good luck!
>>> Flavia
>> On 11/2/18 5:32 PM, Bogdan Dobrelya wrote:
>>> Hello folks.
>>> Here is an update for today. I created a draft [0] and spent some 
>>> time setting up a LaTeX build with live-updating of the compiled PDF... 
>>> The latter is only informational; if someone wants to contribute, 
>>> please follow the instructions listed at the link (hint: you don't 
>>> need any LaTeX experience, basic markdown knowledge should be 
>>> enough!)
>>> [0] 
>>> https://github.com/bogdando/papers-ieee/#in-the-current-development-looking-for-co-authors 
>>> On 10/31/18 6:54 PM, Ildiko Vancsa wrote:
>>>> Hi,
>>>> Thank you for sharing your proposal.
>>>> I think this is a very interesting topic with a list of possible 
>>>> solutions, some of which this group is also discussing. It would also 
>>>> be great to learn more about the IEEE activities and gain experience 
>>>> with the process in this group going forward.
>>>> I personally do not have experience with IEEE conferences, but I’m 
>>>> happy to help with the paper if I can.
>>>> Thanks,
>>>> Ildikó
>>> (added from the parallel thread)
>>>>> On 2018. Oct 31., at 19:11, Mike Bayer <mike_mp at 
>>>>> zzzcomputing.com> wrote:
>>>>> On Wed, Oct 31, 2018 at 10:57 AM Bogdan Dobrelya <bdobreli at 
>>>>> redhat.com> wrote:
>>>>>> (cross-posting openstack-dev)
>>>>>> Hello.
>>>>>> [tl;dr] I'm looking for co-author(s) for a position paper [0],
>>>>>> "Edge clouds data consistency requirements and challenges" (the
>>>>>> paper submission deadline is Nov 8).
>>>>>> The problem scope is synchronizing control plane and/or
>>>>>> deployment-specific data (not necessarily limited to OpenStack)
>>>>>> across remote Edges and the central Edge and management site(s),
>>>>>> including the same aspects for overclouds and undercloud(s), in
>>>>>> TripleO terms, and for other deployment tools of your choice.
>>>>>> Another problem is avoiding divergent solutions for managing Edge
>>>>>> deployments and the control planes of edges. And for tenants as
>>>>>> well, if we think of tenants also doing Edge deployments based on
>>>>>> Edge Data Replication as a Service, say for Kubernetes/OpenShift
>>>>>> on top of OpenStack.
>>>>>> So the paper should name the outstanding problems, define data
>>>>>> consistency requirements, and pose possible solutions for
>>>>>> synchronization and conflict resolution, supporting maximum-autonomy
>>>>>> cases for isolated sites with a capability to eventually catch up
>>>>>> with the distributed state. Like a global database [1], or perhaps
>>>>>> something different (see the causal-real-time consistency model
>>>>>> [2],[3]), or even using git. And probably more than that?..
>>>>>> (looking for ideas)
>>>>> I can offer detail on whatever aspects of the "shared / global
>>>>> database" idea. The way we're doing it with Galera for now is all
>>>>> about something simple and modestly effective for the moment, but it
>>>>> doesn't have any of the hallmarks of a long-term, canonical solution,
>>>>> because Galera is not well suited to being present on many
>>>>> (dozens of) endpoints. The concept that the StarlingX folks were
>>>>> talking about, that of independent databases synchronized
>>>>> using some kind of middleware, is potentially more scalable; however, I
>>>>> think the best approach would be API-level replication, that is, you
>>>>> have a bunch of Keystone services and there is a process that is
>>>>> regularly accessing the APIs of these Keystone services and
>>>>> cross-publishing state amongst all of them. Clearly the big
>>>>> challenge with that is how to resolve conflicts; I think the answer
>>>>> lies in the fact that the data being replicated would be of
>>>>> limited scope and potentially consist of mostly or fully
>>>>> non-overlapping records.
>>>>> That is, I think "global database" is a cheap way to get what would be
>>>>> more effective as asynchronous state synchronization between identity
>>>>> services.
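The API-level replication idea could look roughly like the sketch below: a reconciler lists records from each service and cross-publishes whatever a peer is missing, surfacing (rather than resolving) the hopefully rare overlapping conflicts. The `Service` class and its list/create calls are invented stand-ins here, not real Keystone API calls:

```python
# Rough sketch of API-level cross-publishing between identity services.
# Record IDs are assumed to be mostly non-overlapping; genuine conflicts
# are reported instead of being resolved automatically.

class Service:
    def __init__(self, name):
        self.name = name
        self.records = {}            # record_id -> payload

    def list_records(self):
        return dict(self.records)

    def create_record(self, record_id, payload):
        self.records[record_id] = payload

def reconcile(services):
    """One pass of cross-publishing state amongst all services."""
    conflicts = []
    union = {}
    for svc in services:
        for rid, payload in svc.list_records().items():
            if rid in union and union[rid] != payload:
                conflicts.append(rid)        # overlapping, differing records
            else:
                union.setdefault(rid, payload)
    for svc in services:
        for rid, payload in union.items():
            if rid not in conflicts and rid not in svc.records:
                svc.create_record(rid, payload)
    return conflicts
```

Running such a pass periodically gives the asynchronous state synchronization described above, at the cost of deferring conflict handling to a human or a policy.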
>>>> Recently we’ve been also exploring federation with an IdP (Identity 
>>>> Provider) master: 
>>>> https://wiki.openstack.org/wiki/Keystone_edge_architectures#Identity_Provider_.28IdP.29_Master_with_shadow_users 
>>>> One of the pros is that it removes the need for synchronization and 
>>>> potentially increases scalability.
>>>> Thanks,
>>>> Ildikó

Best regards,
Bogdan Dobrelya,
IRC: #bogdando
