[Openais] Re: ipc rewrite

Steven Dake sdake at redhat.com
Wed Apr 26 12:39:32 PDT 2006


On Wed, 2006-04-26 at 10:48 -0700, Mark Haverkamp wrote:
> On Tue, 2006-04-25 at 16:16 -0700, Steven Dake wrote:
> > The events dropped might be because of priority inversion of the
> > subscription and publish tests.  They should be set to sched-rr:1.  Look
> > at evsbench.  Eventually this will be resolved in a later patch so that
> > priorities are automatically determined.  Let me know what tests you are
> > running to get the "lockup" and I'll see what is wrong with the ipc.
> > 
> > evsbench seems to work properly, which is the only way I tested this.
> > 
> > What was the test case for the double free?
> > 
> > With the new code, it will be difficult to run aisexec within gdb
> > because the ipc code will often call pthread_kill to interrupt the poll
> > when the outbound kernel queue is full (this interrupts gdb too sigh).
> > I'd recommend ulimit -c unlimited to create core files and then use
> > gdb ./aisexec corefile
> > 
> > you can use thread 1, thread 2, etc to get to different threads and get
> > backtraces.
> > 
> > I realize this adds extra complication for the developers but it should
> > pay off in the end.
> 
> 
> I think I have a clue as to what is going on.  I added some debug in the
> areas where events were queued for delivery and when they were requested
> by the application.  It seems that somehow my event count variable is
> getting out of sync with how many events are on the queue.  I see from
> the stack trace that clone is called and the delivery function is called
> by another thread.  I am guessing (since I don't have any mutexes in the
> event code) that there are races now in the various event processing and
> delivery functions that can cause inconsistencies in my data structures.
> Does this sound reasonable?
> 
> Mark.
> 
> 

Mark
I found a bug in the way lib_exit_fn is called (or more appropriately,
not called, sigh), but I'm not sure how this causes the problem.

I have a question about the event service.  It appears that the event
service queues messages and sends a "MESSAGE_RES_EVT_AVAILABLE" to the
library caller.  Why not just send the full event in this condition?

One problem I see is that events can sit in the event queue until a "new
event" is sent, which triggers a flush of the queued events.  If this is
correct, the code should instead try to flush events whenever the output
queue to the library is available for writing.  This keeps us from
blocking.  Here is a scenario; could you tell me if it happens?

An application writes 10000 events which are all queued up.  The last
event that is queued triggers one read of an event by the dispatch
routine.  Then the remaining events sit around waiting for another event
publish to cause a flush.

This code looks wrong:
inline void notify_event(void *conn)
{
    struct libevt_pd *esip;

    esip = (struct libevt_pd *)openais_conn_private_data_get(conn);

    /*
     * Give the library a kick if there aren't already
     * events queued for delivery.
     */
    if (esip->esi_nevents++ == 0) {
        __notify_event(conn);
    }
}

It would appear to notify only when esi_nevents was zero before the
increment, i.e. on the first queued event.  Hence there is a single
notification to the API, but there could be many events that should be
read...
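
For comparison, here is a minimal sketch of the opposite extreme,
notifying on every queued event.  It reuses the esip and __notify_event
helpers from the excerpt above; it is only an illustration, and the real
answer is probably the IPC rework described below rather than this:

inline void notify_event(void *conn)
{
    struct libevt_pd *esip;

    esip = (struct libevt_pd *)openais_conn_private_data_get(conn);

    esip->esi_nevents++;

    /*
     * Kick the library every time an event is queued rather than only
     * on the 0 -> 1 transition; a redundant notification is cheaper
     * than events stranded in the queue.  (This still leaves the
     * suspected races on esi_nevents unaddressed.)
     */
    __notify_event(conn);
}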

I'd prefer to rework all of this so that the IPC layer does all of the
queueing operations.  We could easily add priorities to the IPC layer.
We could add a callback to the service handler: if it is defined, it
would be called when the queue is nearly full (at which point a "dropped"
event should be sent and further library events should be dropped by
exec/evt.c) or when the queue becomes available again because it has
drained a bit.  If the callback were left NULL, the library connection
would simply be dropped, as happens now with the other services.  We
could also make the size of the queue for each service dynamically
settable, so some services like evt, evs, and cpg could have larger
queues while other services like amf and ckpt could have smaller queues
to match their needs.
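
Roughly, the addition to the service handler might look like the sketch
below.  The structure and callback names are made up for illustration;
the only real proposal is high/low water marks on the per-connection
outbound queue plus a per-service queue depth:

/*
 * Sketch only: possible flow-control hooks in the service handler.
 * Names are illustrative, not an actual interface.
 */
struct openais_flow_control {
    /*
     * Called by ipc.c when the connection's outbound queue crosses a
     * high water mark.  The service should emit one "dropped"
     * indication and stop queueing further events for this library
     * connection.
     */
    void (*queue_congested_fn) (void *conn);

    /*
     * Called by ipc.c when the queue drains below a low water mark,
     * so the service can resume delivering events.
     */
    void (*queue_available_fn) (void *conn);

    /*
     * Per-service outbound queue depth, so evt, evs, and cpg can ask
     * for deeper queues than amf or ckpt.
     */
    unsigned int queue_entries;
};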

Then the event service could write all events to the dispatch queue if a
subscription is requested.  This would reduce IPC communication and
context switches as well.  All flow control and overflow conditions
would be handled by the IPC layer in ipc.c.
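
On the evt.c side, delivery to a subscriber could then collapse to
something like the following.  openais_conn_send_response is assumed
here as the generic "queue a message on the library connection" entry
point in the rewritten ipc.c; the actual name and the lib_event_data
layout may differ:

/*
 * Sketch only: hand the event straight to the IPC layer and let ipc.c
 * queue it, prioritize it, and apply flow control.  Function and
 * structure names are assumptions about the rewritten interface.
 */
static void evt_deliver_event (void *conn, struct lib_event_data *evt)
{
    openais_conn_send_response (conn, evt, evt->led_head.size);
}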

What do you think about such a method?  This would simplify the event
service quite a bit and put all of the IPC handling in one place.

Regards
-steve



