receive path
Diagram: rbuf, recv thread(s), proxy writer, reader matches (in sync and not in sync), primary and secondary reorder admins, rmsg/rdata, guid hash table, readers. (NIY: done by the recv and timed-event threads.)
The guid hash table maps all (local & proxy) endpoint and participant guids to the corresponding objects. Pointers to such objects must be derived from the guid hash, and may not be retained across thread liveliness state updates.
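A minimal sketch of that rule (the names guid_hash_lookup_reader, thread_state_awake/asleep and struct reader are made up for illustration, not the actual API): look the object up on every use, inside an awake/asleep window, and never cache the pointer.

    #include <stddef.h>

    struct guid { unsigned char u[16]; };
    struct reader { struct guid guid; /* history cache, QoS, ... */ };

    static void thread_state_awake (void) { /* begin a liveliness epoch (stub) */ }
    static void thread_state_asleep (void) { /* end it; looked-up pointers may now be reclaimed (stub) */ }
    static struct reader *guid_hash_lookup_reader (const struct guid *g) { (void) g; return NULL; /* stub */ }

    void deliver_to_reader (const struct guid *rdguid, const void *sample, size_t size)
    {
      thread_state_awake ();
      struct reader *rd = guid_hash_lookup_reader (rdguid);
      if (rd != NULL)
      {
        /* use rd here: store the sample in its history cache */
        (void) sample; (void) size;
      }
      thread_state_asleep ();
      /* rd must not be used or stored past this point */
    }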
Discovery maintains the set of readers that match the proxy writer, and an array of reader pointers that the data from this proxy writer must be delivered to. This should be versioned, so that partition changes are (or at least can be) precise; currently there is no versioning. Implementation: possibly by using the sequence number of the writer data as a version number; that allows for a very simple, parallel garbage collector for these sets.
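A possible shape for such a versioned set, purely as a sketch of the suggestion above (all names and the layout are assumptions, not the actual code): each version of the reader-pointer array records the writer sequence number from which it applies, and older versions become garbage once delivery has progressed past the point where a newer version took effect.

    #include <stdint.h>
    #include <stdlib.h>

    struct reader;   /* opaque here */

    /* One version of the "rdary": the readers to deliver to, applicable to all
       samples with sequence number >= valid_from.  Versions form a list ordered
       from oldest to newest. */
    struct rdary_version {
      uint64_t valid_from;          /* writer sequence number at which this version takes effect */
      struct rdary_version *next;   /* next (newer) version, or NULL */
      uint32_t n;
      struct reader *rd[];          /* reader pointers */
    };

    /* Select the version applicable to a given sample: the newest one whose
       valid_from is <= seq. */
    const struct rdary_version *rdary_select (const struct rdary_version *v, uint64_t seq)
    {
      const struct rdary_version *best = NULL;
      for (; v != NULL; v = v->next)
        if (v->valid_from <= seq)
          best = v;
      return best;
    }

    /* Garbage collection: once delivery has progressed past the valid_from of a
       newer version, the older ones can never be selected again and can be
       freed.  A real implementation would additionally have to make sure no
       thread is still traversing them, e.g. via the liveliness mechanism
       mentioned earlier. */
    struct rdary_version *rdary_gc (struct rdary_version *head, uint64_t next_to_deliver)
    {
      while (head && head->next && head->next->valid_from <= next_to_deliver)
      {
        struct rdary_version *old = head;
        head = head->next;
        free (old);
      }
      return head;
    }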
Proxy-writer/reader matches switch between not-in-sync and in-sync, the general idea being that the in-sync ones are only relevant to heartbeat processing and need not be touched by Data(Frag) processing. ("Too new" data always gets stored in the primary reorder admin, whereas the not-in-sync ones are generally interested in old data and track it in the secondary reorder admins.)
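A sketch of how Data(Frag) processing could honour that split, under the assumption that "old" simply means below the proxy writer's next in-order sequence number (all names are made up):

    #include <stdbool.h>
    #include <stdint.h>

    struct rdata;           /* a received Data/DataFrag, see the rbuf/rdata notes */
    struct reorder_admin;   /* defragmenting/reordering admin, details omitted */

    struct pwr_match {                          /* one proxy-writer/reader match */
      bool in_sync;
      struct reorder_admin *secondary_reorder;  /* only used while not in sync */
      struct pwr_match *next;
    };

    struct proxy_writer {
      uint64_t next_seq;                        /* next sequence number to deliver in order */
      struct reorder_admin *primary_reorder;
      struct pwr_match *matches;
    };

    /* stub so the sketch is self-contained */
    static void reorder_insert (struct reorder_admin *ra, uint64_t seq, struct rdata *d)
    { (void) ra; (void) seq; (void) d; }

    void handle_data (struct proxy_writer *pwr, uint64_t seq, struct rdata *d)
    {
      if (seq >= pwr->next_seq)
      {
        /* in-order or "too new": goes into the primary reorder admin; whatever
           becomes deliverable in order from there reaches the in-sync matches
           via the reader-pointer array (not shown) */
        reorder_insert (pwr->primary_reorder, seq, d);
      }
      else
      {
        /* old data: only the not-in-sync matches care, each tracking it in its
           own secondary reorder admin; the in-sync ones are never touched here */
        for (struct pwr_match *m = pwr->matches; m != NULL; m = m->next)
          if (!m->in_sync)
            reorder_insert (m->secondary_reorder, seq, d);
      }
    }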
QoS (one attached to the reader, one to the proxy writer).
"reader" is a DDSI
reader, which is a
proxy for a local
DCPS reader; "proxy
writer" is a proxy for
a remote DDSI writer.
Reader-match state machine: INIT, NOT-IN-SYNC, IN-SYNC. From INIT, a match goes straight to IN-SYNC if it is not a transient-local connection or the proxy writer has no old messages; else it goes to NOT-IN-SYNC. A NOT-IN-SYNC match becomes IN-SYNC once the next-to-be-delivered sequence number for this connection matches the next-to-be-delivered for the proxy writer. (One arc in the diagram is labelled "rejects data"; versioning of the rdary is NIY.)
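Read as code, that state diagram could look roughly like the sketch below (names and predicates are mine, not the implementation's; the "rejects data" arc is omitted):

    #include <stdbool.h>
    #include <stdint.h>

    enum match_state { MATCH_INIT, MATCH_NOT_IN_SYNC, MATCH_IN_SYNC };

    struct match {
      enum match_state state;
      bool transient_local_conn;   /* is this a transient-local connection? */
      uint64_t next_to_deliver;    /* next sequence number to be delivered on this connection */
    };

    /* Leaving INIT: straight to IN-SYNC unless this is a transient-local
       connection and the proxy writer still has old messages. */
    void match_leave_init (struct match *m, bool pwr_has_old_msgs)
    {
      if (!m->transient_local_conn || !pwr_has_old_msgs)
        m->state = MATCH_IN_SYNC;
      else
        m->state = MATCH_NOT_IN_SYNC;   /* the "else" arc */
    }

    /* A NOT-IN-SYNC match has caught up once its next-to-be-delivered equals
       the proxy writer's, and then becomes IN-SYNC. */
    void match_check_caught_up (struct match *m, uint64_t pwr_next_to_deliver)
    {
      if (m->state == MATCH_NOT_IN_SYNC && m->next_to_deliver == pwr_next_to_deliver)
        m->state = MATCH_IN_SYNC;
    }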
Diagram: a regular data delivery queue feeding a delivery thread, which stores the data in the reader history cache; builtin/discovery delivery queues feeding the builtin/discovery threads; a heartbeat queue feeding the ack & heartbeat processing thread.
Delivery queues are filled by the recv thread (by shuffling rdata references around); heartbeats and acks are a special case: they have no associated rdata and are processed as quickly as possible.
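As a sketch of what "shuffling rdata references around" can amount to (hypothetical types, no claim about the real queue structure): enqueueing is just linking a reference to the rdata into the queue, nothing is copied.

    #include <stddef.h>

    struct rdata;                    /* reference into an rbuf, see the rbuf/rdata notes */

    struct dq_elem {
      struct rdata *rdata;           /* the queue holds references, no payload is copied */
      struct dq_elem *next;
    };

    struct delivery_queue {
      struct dq_elem *head, *tail;   /* plus a mutex/condvar for the delivery thread, omitted */
    };

    /* recv thread side: enqueueing is just linking the element in; the element
       itself would also come out of the rbuf, keeping the no-allocation
       property.  Heartbeats and acks never get here: they have no rdata and are
       handled immediately. */
    void dq_enqueue (struct delivery_queue *dq, struct dq_elem *e)
    {
      e->next = NULL;
      if (dq->tail != NULL)
        dq->tail->next = e;
      else
        dq->head = e;
      dq->tail = e;
    }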
Major differences from the current implementation:
- the proxy writer currently stores all its reader matches in a single tree, not discriminating between in-sync and not-in-sync;
- all recv processing is currently done in a single thread;
- no QoS changes yet, and no versioning of QoS's.
Potentially many delivery threads: if a writer always delivers via the same queue, all expected ordering properties are retained.
Given that the kernel can only do a groupWrite, and cannot deliver data to an individual reader, we might as well design to that behaviour.
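With several queues, the ordering argument only requires that a given writer always maps onto the same queue, e.g. (a sketch, not necessarily the actual scheme) by hashing the writer GUID:

    #include <stdint.h>

    struct guid { unsigned char u[16]; };
    struct delivery_queue;           /* as in the queue sketch above */

    /* Any stable hash will do; FNV-1a over the GUID bytes as an example. */
    static uint32_t guid_hash (const struct guid *g)
    {
      uint32_t h = 2166136261u;
      for (int i = 0; i < 16; i++)
        h = (h ^ g->u[i]) * 16777619u;
      return h;
    }

    /* Data from a given writer always ends up in the same queue, so per-writer
       ordering is preserved regardless of the number of delivery threads. */
    struct delivery_queue *queue_for_writer (struct delivery_queue **queues, uint32_t nqueues, const struct guid *wrguid)
    {
      return queues[guid_hash (wrguid) % nqueues];
    }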
Diagram: rdata, defrag.
The receive thread requests the O/S kernel to dump the data in large rbufs, each containing any number of messages; the decoding appends some information, both for Data and DataFrag sub-messages (the rdata elements) and for the embedded QoS lists. Each rdata contains all that is necessary to track it in the defragmenting and reordering admins, and to link it into the delivery queue. This ensures no (heap) memory allocations are necessary to track an arbitrary number of messages/fragments in the normal case.
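A sketch of that allocation idea (hypothetical layout, the real rbuf/rmsg/rdata structures differ): the receive buffer doubles as an arena from which the rdata descriptors are carved, so tracking any number of messages needs no separate heap allocations in the common case.

    #include <stddef.h>
    #include <stdint.h>

    #define RBUF_SIZE (128 * 1024)

    struct rbuf {
      unsigned char raw[RBUF_SIZE];  /* received packets and rdata descriptors both live here */
      size_t used;                   /* bytes handed out so far */
      unsigned refcount;             /* the rbuf is recycled when this drops to 0 */
    };

    /* An rdata describes one Data/DataFrag submessage inside an rbuf: enough to
       defragment, reorder and deliver it without copying the payload. */
    struct rdata {
      struct rbuf *rbuf;
      uint32_t submsg_offset;        /* offset of the submessage in rbuf->raw */
      uint32_t payload_offset;       /* offset of the serialized payload */
      uint32_t min, maxp1;           /* fragment byte range [min, maxp1) */
      struct rdata *nextfrag;        /* chain used by the defragmenter */
    };

    /* Bump allocation out of the rbuf itself: no heap allocation per message. */
    void *rbuf_alloc (struct rbuf *rb, size_t n)
    {
      n = (n + 7) & ~(size_t) 7;     /* keep things 8-byte aligned */
      if (rb->used + n > RBUF_SIZE)
        return NULL;                 /* caller switches to a fresh rbuf */
      void *p = rb->raw + rb->used;
      rb->used += n;
      rb->refcount++;                /* each allocation keeps the rbuf alive */
      return p;
    }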
The deserializer operates from the rdata elements; it gets a little bit nasty when a primitive is split over multiple fragments, but so be it. (NIY: currently malloc + memcpy.)
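The "nasty" case is a primitive whose bytes straddle a fragment boundary; a sketch (illustrative only, made-up types) of what the deserializer then has to do:

    #include <stdint.h>
    #include <string.h>

    /* One contiguous piece of serialized data, as made available by an rdata. */
    struct frag { const unsigned char *base; uint32_t len; };

    /* Read a 4-byte primitive at byte offset `off` of a serialized stream that
       is split over an array of fragments: gather the bytes piecewise when the
       value straddles a fragment boundary. */
    uint32_t read_u32 (const struct frag *frags, uint32_t nfrags, uint32_t off)
    {
      unsigned char tmp[4];
      uint32_t need = 4, have = 0, i = 0, skip = off;
      while (i < nfrags && skip >= frags[i].len)
        skip -= frags[i].len, i++;
      while (need > 0 && i < nfrags)
      {
        uint32_t n = frags[i].len - skip;
        if (n > need) n = need;
        memcpy (tmp + have, frags[i].base + skip, n);
        have += n; need -= n; skip = 0; i++;
      }
      uint32_t v;
      memcpy (&v, tmp, 4);   /* byte-order handling omitted */
      return v;
    }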
Diagram: proxy writer, match set (a guid set), guid, guid hash table, DCPS reader entity.
The DCPS reader entity owns the DDSI reader and stores a pointer to it and the GUID.
Diagram: rmsg/rdata set.