Connection is used in two places: SlaveBase (file.so etc.) and
SlaveInterface (Dolphin etc.).
In SlaveInterface it operates in the normal event-driven Qt way, with
signals when data is ready. If data is ready and a client reads one
line, dataReady is emitted again on the next event loop iteration to
tell the client to read the next line.
SlaveBase has a custom event loop: we either poll the task queue or
block on a lower-level signal, and we never process the Qt event queue.
The one exception is QCoreApplication::sendPostedEvents(nullptr,
QEvent::DeferredDelete), which we call manually after dispatching each
task.
When copying a lot of files, SlaveBase reads the first command while
many more commands are still pending, so for each command we post an
event into the queue that will never be consumed.
This is a problem because each call to sendPostedEvents(DeferredDelete)
makes qApp iterate through the list of pending events in linear time,
without clearing anything. After each command we iterate through a
bigger and bigger list, until we're spending all our CPU time there.
This patch splits Connection into distinct modes for use by SlaveBase
and SlaveInterface, since SlaveBase was never going to run those queued
events anyway.
Copying 1 million 10 byte text files on my machine:
Before: 14 hours
After: 10 minutes