Fixes for the Scheduler and Capture modules
ClosedPublic

Authored by TallFurryMan on Aug 8 2018, 7:18 AM.

Details

Summary
  • Adds a type for the captured frame map.
  • Updates captured frames map when a capture completes.
  • Removes use of general variables and prefers activeJob whenever possible in the Capture module.
  • Fixes the issue with duplicated scheduler jobs using duplicated sequence jobs. (Issue was in the duration estimation, which was not properly consolidating captured frames for single scheduler jobs.)
  • Fixes evaluation of Hour Angle when checking meridian flip in the case DEC is +/-90 degrees (undefined HA).
  • Fixes unexpected slew when repeating a scheduler job with no pipeline step checked.
  • Fixes dbus error management, reworks function unParkMount to clarify state.
  • Fixes park/unpark looping in unparked/parked state, reworks checkMountParkingStatus.
  • Fixes capture progress at 100% at the beginning of capture.
  • Fixes inconsistent use of the target prefix in the capture path.
  • Adds a message in the Capture module when a capture aborts.
  • Fixes signature management when a hardcoded prefix is used in the sequence.
  • Updates the duplicate job warning, which was incorrect when two different sequences have the same storage.
  • Fixes no-step job looping on altitude check.
  • Fixes update of captured frames map for sequence jobs already complete.

There are no additional abort detections, so Scheduler may still get stuck if the error doesn't come up at its level. A later change will introduce timeouts for Scheduler steps.

Test Plan

Use as many test vectors as possible, in various configurations.

  • simple_test tested OK
  • simple_test_no_twilight tested OK
  • duplicated_scheduler_jobs_no_twilight tested OK
  • duplicated_scheduler_jobs_duplicated_sequence_jobs_no_twilight tested OK
  • complex_job tested OK, but there is a bug remaining on guiding deviation

Message "Manual scheduled focusing is not supported" is still annoying when running test vectors in sequence, to be investigated and eventually removed very soon.

Image count updates in the Scheduler are still an issue, and fixing them is probably not as simple as picking up a signal and increasing a count.

There is a spurious folder created with the name of the capture prefix. This is a regression from the signature fix for the prefix issue. Fairly minimal, but not professional.

The guider is doing weird things when used with the simulator. It detects an offset on the guide star although the simulator always produces the same image (only the noise differs from frame to frame), and this seems to cause issues with the drift calculation. Or the settings of my guider page are completely off... However, the guider is able to produce a guiding deviation that exceeds the bounds acceptable for exposure, and capture suspends.
This is still an issue in the latest differential: the Capture module suspends for 60 seconds waiting for the guider, the guider never really asserts a return to bounds, and the Capture module never asserts the suspend timer either. Ooof.

To get a guiding abort with the simulator in a development VM, pause the VM for a few minutes or hours.
This will cause a large instant deviation that will abort the guiding procedure.
This currently causes the Scheduler to properly abort, but then loop indefinitely without succeeding in restarting its jobs.

Diff Detail

Repository
R321 KStars
Lint
Automatic diff as part of commit; lint not applicable.
Unit
Automatic diff as part of commit; unit tests not applicable.
TallFurryMan created this revision. Aug 8 2018, 7:18 AM
Restricted Application added a project: KDE Edu. Aug 8 2018, 7:18 AM
Restricted Application added a subscriber: kde-edu.
TallFurryMan requested review of this revision. Aug 8 2018, 7:18 AM

I tested with the complex_job.esl and it's getting stuck after a number of captures. The complex_job.esl is now added to Tests/scheduler so you can test with it.

Yes, this is in no way finished. Completion for the whole sequence set is not managed properly, for instance, and I did not touch abort detection.
Here I am implementing the different behavior between scheduler-requested and user-requested procedures. Scheduler-requested will make use of capturedFramesMap, and user-requested will run through the procedure unconditionally.
CapturedFramesMap should provide the frames already stored, and currently it provides the wrong number when I remove captures intentionally. The issue/regression is between the storage analysis and the new dbus transfer in the scheduler...
By the way, is it not possible to forward the QMap directly over D-Bus?

I think it _could_ be done if you use QDBusVariant, at least according to Google.
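
For reference, a minimal sketch of one possible approach (illustrative only; the exact value type of the captured frames map is an assumption here): convert the map to a QVariantMap, which Qt D-Bus marshals natively as a string-to-variant dictionary, so no custom type registration is needed.

#include <QMap>
#include <QString>
#include <QVariant>

// Hypothetical helper: flatten the captured frames map into a QVariantMap
// before sending it over D-Bus; the receiving side converts it back.
QVariantMap capturedFramesMapToVariant(const QMap<QString, uint> &framesMap)
{
    QVariantMap out;
    for (auto it = framesMap.cbegin(); it != framesMap.cend(); ++it)
        out.insert(it.key(), QVariant(it.value()));
    return out;
}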

Updated changeset. Much testing needed.

Interesting way to introduce a guiding error with the simulator: pause the VM for a few minutes while the Scheduler is capturing. This will cause a hop in the position of the guide star, and the guider will abort.

With that differential, the Scheduler aborts the job so it isn't stuck, but it cannot recover and loops trying to restart the aborted job. More information later.

TallFurryMan edited the summary of this revision. Aug 9 2018, 6:30 AM
TallFurryMan edited the test plan for this revision.

The issue I just described is due to several bugs in the re-evaluation of aborted jobs.
The first main problem is that aborted jobs require the capture storage to be refreshed, else they will be unable to estimate their duration and properly forward the capture count to the Capture module.
The second main problem is that the Scheduler gets lost when it is running with a current job and re-evaluation of that job causes its score to become negative.
Somehow the Scheduler loops trying to execute the job even though it immediately sees that it can't be done.

Good progress I think, that should be fine soon.

  1. The capture bar is showing 100% as soon as it starts, then goes back to normal. The bar should advance _after_ an image is captured, not before/during, since it's confusing to see 100% while still capturing the last image.
  2. The Capture log says "2018-08-09T09:43:11 Job requires 5-second Red images, has 5/0 frames captured and will be processed.", which probably needs to be reversed to 0/5.
  3. "Capturing 5-second Red image..." --> not sure about the format here. I think the dash should be dropped: "Capturing 5 seconds image." Maybe we need to use i18np for the singular/plural case, but it's a double value so that might be tricky.
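
For what it's worth, a hedged sketch of what an i18np-based message could look like (illustrative only; the names are made up, and KI18n's i18np selects the plural form from an integer argument, which is exactly why the double exposure value is awkward):

#include <KLocalizedString>
#include <QString>
#include <QtGlobal>

// Hypothetical helper: round the exposure just to drive plural selection,
// which loses precision for fractional exposures such as 1.5 seconds.
QString captureMessage(double exposure, const QString &filterName)
{
    return i18np("Capturing %1 second %2 image...",
                 "Capturing %1 seconds %2 image...",
                 qRound(exposure), filterName);
}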
kstars/ekos/capture/capture.cpp
2357

Seems like a missing parenthesis.

  1. Looks like the progress bar indicator is not reset to 0 when a new job is executed; it remains at 100% until the first image is captured, then it counts correctly.
  2. The Scheduler is not updating the count as images are captured, but the count is updated once all captures are complete and it moves to the next job.
  1. On stopping the Scheduler and restarting KStars, it indicated that the 3rd scheduler job (LRGB) is at 20/20 captures when it had not even started. The first LRGB job was indeed complete at 20/20.
  2. The Scheduler correctly resumed the 2nd job (which has a count of 2) and correctly counted the captures at 30/72.

The Scheduler fails to start the #2 batch for the second scheduler job and goes into an endless loop. The problem is in the Capture module, where after one L image is captured, the whole sequence is marked as complete. The Scheduler then sees that not all images are captured, and the cycle repeats.

Fix the completion count again, and add a (hopefully) definitive explanation of the use of the captured frames map.

Standard jobs with repeats and duplicates now proceed without issue.

Known issues:

  • The simulator may produce a guiding deviation, but apparently the Capture module is unable to resume.
  • Aborting a job on altitude/moon/twilight constraints seems not to reschedule the job; possible regression.

I also found a very weird regression: if you uncheck "Track" for a job with a repeat count, the first batch will not slew, but all subsequent batches will...

Also, guiding is not stopping properly when all jobs are done. It's expected that guiding will not stop between jobs, in case those are duplicated, but when everything is done, guiding is still not stopped.

I just stumbled upon a situation where "Track" is not checked, but the Capture module decides a meridian flip is required! This one is indeed unexpected :)
That's really an edge case... What should we do? Probably not engage the flip, if the end-user doesn't want the Scheduler to touch the mount...

About your point on "N-second" format, I believe this is an idiomatic language construct: "a 4-kilometer channel" but "a channel 4 kilometers long"?

I had great difficulty finding that specific grammar rule. I found https://english.stackexchange.com/a/50558, which relates to this, as an illustration.

An additional funny situation for the meridian flip: my test case involved a meridian flip while pointing at the north pole.
The mount obliged, but did not move at all, and the procedure timed out. The Capture module retried the flip, but no additional timeout is configured in that case, so the system hangs there doing nothing.
I'll leave this for a later investigation and file a Bugzilla report with the log.
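
For context on the DEC = +/-90 item in the summary, the guard amounts to something like the following (an illustrative sketch only, not the actual patch; the tolerance is an arbitrary assumption):

#include <cmath>

// At the celestial pole the hour angle is undefined, so an HA-based
// meridian flip check should simply be skipped there.
bool hourAngleIsDefined(double decDegrees)
{
    return std::fabs(std::fabs(decDegrees) - 90.0) > 1e-6;
}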

Sorry folks for the stupid question - but how can I get access to these changes? I cannot find the mentioned branch on GitHub.

-Wolfgang
wreissenberger added a comment (edited). Aug 10 2018, 12:19 PM

Not sure whether the problem still exists after your changes, but I found a situation where the scheduler loops endlessly.

If you have two jobs with the same capture file signature (path + filter, etc.) and the second has fewer repeats than the first, the first job will repeat endlessly.

The problem is that Scheduler::updateCompletedJobsCount() overwrites the map entry such that a later evaluation thinks that it has fewer captures than requested.

Please find included a test illustrating it. Place it under /tmp/kstars_tests

-Wolfgang

No, the other way round. I would like to get the source code first so that I can test it locally.

Yes, for some reason I was pretty sure the documentation for arc patch D14684 was there too! That's the command you should use to download a differential.

  1. The capture bar is showing 100% as soon as it starts, then goes back to normal. The bar should advance _after_ an image is captured, not before/during, since it's confusing to see 100% while still capturing the last image.
  2. The Capture log says "2018-08-09T09:43:11 Job requires 5-second Red images, has 5/0 frames captured and will be processed.", which probably needs to be reversed to 0/5.
  3. "Capturing 5-second Red image..." --> not sure about the format here. I think the dash should be dropped: "Capturing 5 seconds image." Maybe we need to use i18np for the singular/plural case, but it's a double value so that might be tricky.
  1. I can't reproduce that in the latest diff, if we're talking about the Capture module progress bar. There is, however, a concurrency issue when using 1-second captures which causes the progress to remain at 100% while processing single-capture sequence jobs, but that's very elusive. With longer exposures I don't see the issue. If it actually shows up on lower-end computers I'll have a look; please tell me.
  2. Fixed.
  3. I'll leave it like this; please tell me if you think otherwise. That construct has the advantage of bypassing the plural management code :)

Not sure whether the problem still exists after your changes, but I found a situation where the scheduler loops endlessly.
If you have two jobs with the same capture file signature (path + filter, etc.) and the second has fewer repeats than the first, the first job will repeat endlessly.
The problem is that Scheduler::updateCompletedJobsCount() overwrites the map entry such that a later evaluation thinks that it has fewer captures than requested.

Aha, let's have a look into this, thanks. I did actually check for this situation in a more general manner (that is, the code should not presume the order in which the scheduler jobs are sorted in the list), but I should have built a test vector.
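
To illustrate the kind of order-independent bookkeeping meant here (a hypothetical sketch, not the code in this differential): when several jobs share a signature, the map entry should keep the highest count observed rather than being overwritten by whichever job is evaluated last.

#include <QMap>
#include <QString>
#include <QtGlobal>

// Hypothetical accumulator for the captured frames map.
void accumulateCapturedFrames(QMap<QString, uint> &map,
                              const QString &signature, uint framesFound)
{
    map[signature] = qMax(map.value(signature, 0u), framesFound);
}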
-Eric

  • Fix edge case of DEC=+/-90 when calculating HA.
  • Fix unexpected slew when repeating with no pipeline step checked.
TallFurryMan edited the summary of this revision. Aug 10 2018, 9:29 PM
TallFurryMan edited the test plan for this revision.
TallFurryMan edited the test plan for this revision.
  • Fix dbus error management, rework function unParkMount to clarify state.
  • Fix park/unpark looping in unparked/parked state, rework checkMountParkingStatus.
TallFurryMan retitled this revision from [DO NOT MERGE YET] Fix duplicate sequence jobs completion checks. to Fixes for the Scheduler and Capture modules. Aug 11 2018, 12:41 PM
TallFurryMan edited the summary of this revision.
TallFurryMan edited the test plan for this revision.

Go ahead with testing, I think I'm done right now for the hotfix release.

The next problem I'll tackle is the Scheduler getting stuck in a loop when the altitude of the target goes under the limit.

Thanks for the updates. I just tested again now with complex_job.esl, and since I had already captured before, it is a perfect opportunity to test. It evaluated the 3 jobs as follows:

  1. LRGB 20/20 --> Correct, it's done already
  2. LSOH 9/72 ---> Incorrect. It is 10/72. Since 6x L + 4x SII were captured for this job.
  3. LRGB 20/20 --> Incorrect since the job didn't even begin

OK, I decided to continue with the Scheduler. It loaded the 2nd job.

  1. It started with Lum when it was already done (6/6)
  2. The 2nd repeated job progressed fine, but I aborted during HA 1/6
  3. When restarted, it started the 2nd scheduler job, which is OK, but then in the Capture module it started again at Lum as if it were a new job and nothing had been captured before. It was supposed to start on the 2nd capture of HA (2/6, etc.).

So again, complex_job.esl is a good overall test for all these cases. Just shut down KStars at key points, restart, and see where it picks up.

TallFurryMan planned changes to this revision. Aug 11 2018, 9:27 PM

Thanks a million for this test, I had a good time interpreting your results, until I found the prefix issue :)
First thing with complex_jobs is that the warning indicating a duplicated job doesn't trigger for the second job because that warning is testing for the sequence file, not for the storage of the sequence file. That will have to be fixed.

Second thing is: I could reproduce your progress bar at 100% at the beginning of a capture. I located the regression.

Third thing is: I did not spot that specificity in the LRGB sequence at first; the prefix "EasternVeil" makes it store its captures to "/var/tmp/EasternVeil/EasternVeil/Light/*/EasternVeil_*" when run from the Capture or Scheduler modules.
The LSOH_SO sequence will store its captures to "/var/tmp/EasternVeil/Light/*" if run from the Capture module, and to "/var/tmp/EasternVeil/EasternVeil/Light/*" if run from the Scheduler.
The name of the capture will be "EasternVeil_*" for the LRGB sequence, and "Light_*" for LSOH_SO.
Yes, the code is not the same in scheduler.cpp and capture.cpp. Ah, duplicated code when investigating duplicated jobs...
But this makes interpretation a mess.

That said, it's important to remember that in the complex_job schedule, all sequence jobs store their captures in the same folder.

Thanks for the updates. I just tested again now with complex_job.esl, and since I had already captured before, it is a perfect opportunity to test. It evaluated the 3 jobs as follows:

  1. LRGB 20/20 --> Correct, it's done already
  2. LSOH 9/72 ---> Incorrect. It is 10/72. Since 6x L + 4x SII were captured for this job.
  3. LRGB 20/20 --> Incorrect since the job didn't even begin
  1. OK, fine, that means the storage contains (at least) 5xL 5xR 5xG 5xB. This is the final count because of the prefix bug.
  2. 5xL are counted from the first job, not 6. That the L frames captured by LSOH_SO are not counted is a direct consequence of the prefix bug.
  3. No, that's correct: the second LRGB job has no repeats, so it completes at the same time as the first LRGB.

A schedule with jobs using the same storage (same target name, same FITSDirectory) only captures a frame type up to the maximum count amongst jobs, not up to the sum of the jobs' counts. This assumes, of course, that Remember Job Progress is enabled.
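
In other words (an illustrative helper only, not the shipped code), the expected completion for a frame type shared across jobs is the maximum requested count, not the sum:

#include <QList>
#include <QtGlobal>

// Expected number of captures for one frame type when several jobs
// write to the same storage: the maximum requested by any single job.
int expectedCapturesForSharedStorage(const QList<int> &requestedPerJob)
{
    int expected = 0;
    for (int requested : requestedPerJob)
        expected = qMax(expected, requested);
    return expected;
}

For instance, two LRGB jobs both requesting 20 frames into the same folder are both considered complete once 20 frames exist there, which is why job #3 shows 20/20 as soon as job #1 finishes.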

I removed my comments for the second test you did, because the prefix bug is causing much headache :)

Oh you're right, the prefix changes this a lot. I will remove it.

Is this good to merge right now or you have any planned changes?

No, no, keep the prefix; that makes complex_job really complex :)

I wasn't inclined to add changes yesterday, but you got me in an interesting situation, so...five more fixes incoming!

Oops, sorry, I committed the changes. I suppose I can revert that.

That's OK, those changes are legit anyway, no regression. Let me sort this out.

Total for this differential:

  • Type for the captured frame map.
  • Fix duplicate sequence jobs completion checks.
  • Rewrite some logs, fix regression on completion, remove duplicate image signal.
  • Improved guiding deviation logs to clarify abort situation.
  • Fix duplicate scheduler jobs using duplicate sequence jobs.
  • Fix edge case of DEC=+/-90 when calculating HA.
  • Fix unexpected slew when repeating with no pipeline step checked.
  • Fix dbus error management, rework function unParkMount to clarify state.
  • Fix park/unpark looping in unparked/parked state, rework checkMountParkingStatus.
  • Fix capture progress at 100% at the beginning of capture.
  • Fix inconsistent use of the target prefix in the capture path.
  • Add a message in the Capture module when a capture aborts.
  • Fix signature management when a hardcoded prefix is used in the sequence.
  • Update the duplicate job warning, which was incorrect when two different sequences have the same storage.
  • Fix no-step job looping on altitude check.
  • Fix update of captured frames map for sequence jobs already complete.

We're, well, progressing. But so many issues there were...

Sorry for breaking the differential on my previous commit, juggled too much with branches...

Good to go for yet another round of tests!!

I'll update the differential text later, if this is needed...

Excellent thanks.

  1. As soon as the first job finishes, the 3rd job is marked as 20/20.
  2. Is the Scheduler supposed to update the capture count after each capture, or just when evaluating? I'm not sure, but it's not updating after each capture.
  3. I stopped KStars on the 2nd scheduler job just before H_Alpha, so I expected the capture job to pick up there, but instead it started at Lum like last time.

  1. This is the expected behavior.
  2. No, I mentioned I would not try to fix the capture count until the Capture module is OK. Besides, I studied the problem and concluded that the Capture module could not notify that simply enough. I'll only do a periodic storage refresh. If some capture disappears for an external reason, any other solution would be a mess...
  3. I'll have a look at the log to understand what happened.

Ok I don't understand the bit on expected behavior. Shouldn't the 3rd job be 0/20 up until it starts?

From the code perspective, it can't because it uses the same storage space as the first job. It is evaluated independently from the first job, and arrives at the same conclusion.

From the user perspective, the main objective of the Scheduler is to ensure N captures are made, translating to X hours of imaging. One solution is to set up one single job with enough captures for, say, 60 hours of imaging, which will get partitioned over multiple nights. If you want to fit another job in between, you can use START_AT/FINISH_AT fixed-time jobs, and still keep track of the number of captures.

There is still work to do on priorities, and interrupting jobs, which will make this feature clearer: jobs which update their priority at the end of a batch to let others run, or jobs which interrupt others when they are ready to start.

If you want to see job #3 at 0/20 after job #1 finished, "Remember Job Progress" is the option you want to disable.

Besides, it reminds me I did not move that option from the "capture" tab to the "scheduler" tab in settings. The option can only really apply to scheduler jobs actually: in terms of use cases, if it applied to sequence jobs we would need to modify sequence files to increase counts, or tidy the storage up manually, and that is not user-friendly compared to focusing on imaging hours in the scheduler.

Agreed, this will need neat documentation, because the implementation is getting on par with what the UI allows. As I mentioned a few months ago, my own objective is to make the Scheduler a robust fire-and-forget tool over a week at a minimum, suitable for imaging, photometry and object monitoring :)

TallFurryMan added a comment (edited). Aug 13 2018, 7:49 PM
  1. I stopped KStars on the 2nd scheduler job just before H_Alpha, so I expected the capture job to pick up there, but instead it started at Lum like last time.

I'll try to clarify the log, because it's not clear enough for the end-user. Consider that inside the running batch, the sequence has to provide 6xL, and there are 2 batches to run.
Therefore, to complete the L item of the sequence, the Capture module must take 6xL right now.
Even if there are already, say, 4xL in storage, right now the Capture module has to take 6xL to complete the sequence, not 2xL (I'm not considering the LRGB one here, for the sake of clarity).
On the next batch, there will be 10xL in storage, thus the Capture module will have to take 2xL to comply with what the Scheduler requested, that is, two batches of 6xL.

This behavior is my last change to the Scheduler/Capture pair; it actually fixes the last remaining completion issue.
I must admit this last bit was quite hard to diagnose and fix properly: it's counter-intuitive that only the last batch of captures requires a partial capture.
Actually, that's quite logical: in order to complete the sequence optimally, each capture item must gather a maximum of frames of its type on the first run.
But in fact, the root cause is that once a capture file is in storage, the information about which sequence item produced it is lost.
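
As a worked version of the arithmetic above (a hypothetical helper using the example's numbers, not the actual Capture code): with 2 batches of 6xL and 4xL already in storage, the first batch still requests the full 6, and only the last batch requests the 2 that remain once 10 are stored.

#include <algorithm>

int framesToRequestThisBatch(int perBatch, int totalBatches,
                             int alreadyStored, bool isLastBatch)
{
    const int totalRequired = perBatch * totalBatches;           // e.g. 6 x 2 = 12
    const int remaining     = std::max(0, totalRequired - alreadyStored);
    // Intermediate batches always request the full per-batch count;
    // only the last batch is allowed a partial request.
    return isLastBatch ? std::min(perBatch, remaining) : perBatch;
}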

I've got two additional fixes being prepared (image update in the scheduler list, and removal of the manual scheduled focus message) but I strongly suggest we land this differential as soon as possible as it's getting really large.
@wreissenberger were you able to test on your side?

Also, one later fix: I noticed that the Capture module creates a folder named after the signature string.
Because I changed the signature to include the prefix in order to fix the hardcoded prefix issue, I need to remove the prefix from the folder creation procedure.
That's fairly minimal, but surprising and not very professional :)

TallFurryMan edited the summary of this revision. Aug 13 2018, 8:01 PM
TallFurryMan edited the test plan for this revision.

Also, again, the "Profile" field doesn't seem to be properly connected to each job's data sheet.
It is not possible to create a first scheduler job with one profile, and a second with another.
Either this is by design and the field is misplaced, or this is a bug.

TallFurryMan edited the summary of this revision. Aug 13 2018, 8:04 PM
mutlaqja accepted this revision. Aug 13 2018, 8:50 PM
This revision is now accepted and ready to land. Aug 13 2018, 8:50 PM

I'll go ahead and merge this for now.

This revision was automatically updated to reflect the committed changes.

@TallFurryMan: I did not run the entire test set yet, but my problem mentioned on Fri, Aug 10, 2:19 PM still exists. The attached test case illustrates the wrong behavior.

Not sure whether the problem still exists after your changes, but I found a situation where the scheduler loops endlessly.

If you have two jobs with the same capture file signature (path + filter, etc.) and the second has fewer repeats than the first, the first job will repeat endlessly.

The problem is that Scheduler::updateCompletedJobsCount() overwrites the map entry such that a later evaluation thinks that it has fewer captures than requested.

Please find included a test illustrating it. Place it under /tmp/kstars_tests

-Wolfgang

-Wolfgang

I did test that and there was no issue. Let me try again.

Thank you Jasem, I'll move on.

@wreissenberger I tested your vector with the landed differential, and it succeeded in capturing the job with 3 batches, then the job with 2 batches.
While I didn't test specifically what could happen if I aborted the Scheduler in the middle of this test, I don't expect it to loop. I'll check again later.

Hm, strange. I checked out the latest version of D14684, compiled it and ran the test. When I run it, the first of the two jobs loops endlessly instead of terminating after three runs. See the attached log file.

Is it possible that we have different parameter settings?
-Wolfgang

@wreissenberger I think I understand what happens in the log:

"Job 'Kocab' 1x1\" sees 0 captures in output folder '/var/tmp/kstars_tests/Kocab/Light/Luminance/Light'."

When the capture is stored, it goes to:

"FITS" file saved to "/var/tmp/kstars_tests/Kocab/Light/Luminance/Light_001.fits"

So yes, we're in a situation where the "output folder" is incorrect...
Now, why is the exact same scheduler file working on my side, hmm... I'll compare with my own log of your test vector.

@wreissenberger I don't see this part of my log in your log:

[2018-08-14T08:42:37.280 CEST DEBG ][   org.kde.kstars.ekos.capture] - Preparing capture job "/var/tmp/kstars_tests/Kocab/Light/Luminance/Light_Luminance_1_secs" for execution.
[2018-08-14T08:42:37.280 CEST INFO ][   org.kde.kstars.ekos.capture] - "Job requires 1-second Luminance images, has 0/1 frames captured and will be processed."
[2018-08-14T08:42:37.284 CEST INFO ][   org.kde.kstars.ekos.capture] - "Capturing 1-second Luminance image..."
[2018-08-14T08:42:37.287 CEST INFO ][ org.kde.kstars.ekos.scheduler] - "Job 'Kocab' capture is in progress..."
[2018-08-14T08:42:38.276 CEST DEBG ][ org.kde.kstars.ekos.scheduler] - Checking job stage for "Kocab" startup 2 "14/08/18 08:42" state 3
[2018-08-14T08:42:38.928 CEST INFO ][   org.kde.kstars.ekos.capture] - "Received image 1 out of 1."
[2018-08-14T08:42:38.931 CEST DEBG ][   org.kde.kstars.ekos.capture] - All capture jobs complete.
[2018-08-14T08:42:39.276 CEST DEBG ][ org.kde.kstars.ekos.scheduler] - Checking job stage for "Kocab" startup 2 "14/08/18 08:42" state 3
[2018-08-14T08:42:39.277 CEST DEBG ][ org.kde.kstars.ekos.scheduler] - Find next job...
[2018-08-14T08:42:39.278 CEST DEBG ][ org.kde.kstars.ekos.scheduler] - "Searching in path '/var/tmp/kstars_tests/Kocab/Light/Luminance', files 'Light_Luminance_1_secs*' for prefix 'Light_Luminance_1_secs'..."
[2018-08-14T08:42:39.278 CEST DEBG ][ org.kde.kstars.ekos.scheduler] - "> Found 'Light_Luminance_1_secs_2018-08-14T08-42-38_001'"
[2018-08-14T08:42:39.278 CEST DEBG ][ org.kde.kstars.ekos.scheduler] - Frame map summary:
[2018-08-14T08:42:39.278 CEST DEBG ][ org.kde.kstars.ekos.scheduler] -   "/var/tmp/kstars_tests/Kocab/Light/Luminance/Light_Luminance_10_secs" : 0
[2018-08-14T08:42:39.278 CEST DEBG ][ org.kde.kstars.ekos.scheduler] -   "/var/tmp/kstars_tests/Kocab/Light/Luminance/Light_Luminance_1_secs" : 1
[2018-08-14T08:42:39.314 CEST INFO ][ org.kde.kstars.ekos.scheduler] - "Astronomical twilight rise is at 04:51:00, set is at 23:33:00, and current time is 08:42:39"
[2018-08-14T08:42:39.319 CEST INFO ][ org.kde.kstars.ekos.scheduler] - "Job 'Kocab' 1x1\"  sees 1 captures in output folder '/var/tmp/kstars_tests/Kocab/Light/Luminance/Light_Luminance_1_secs'."
[2018-08-14T08:42:39.319 CEST INFO ][ org.kde.kstars.ekos.scheduler] - "Job 'Kocab' 1x1\"  has completed 1/3 of its required captures in output folder '/var/tmp/kstars_tests/Kocab/Light/Luminance/Light_Luminance_1_secs'."
[2018-08-14T08:42:39.320 CEST INFO ][ org.kde.kstars.ekos.scheduler] - "Job 'Kocab' estimated to take 00h 01m 34s to complete."
[2018-08-14T08:42:39.321 CEST INFO ][ org.kde.kstars.ekos.scheduler] - "Job 'Kocab' target altitude is 33 degrees at 14/08/18 08:42 (score +8)."
[2018-08-14T08:42:39.325 CEST INFO ][ org.kde.kstars.ekos.scheduler] - "Job 'Kocab' target is 75.0026 degrees from Moon (score +20)."
[2018-08-14T08:42:39.327 CEST INFO ][ org.kde.kstars.ekos.scheduler] - "Job 'Kocab' has a total score of 28"
[2018-08-14T08:42:39.327 CEST INFO ][ org.kde.kstars.ekos.scheduler] - "Job 'Kocab' will be run at 14/08/18 08:42."

But this is when one job has finished and the Scheduler looks for the next. Given you said the first job never finished, it's probably normal that I don't see it in your log.
However, I can't find "Preparing capture job" specifically, which should appear when Capture starts a sequence, so that you should definitely have in your log.
So my guess, possibly, is that you are not testing with the latest differential...

@TallFurryMan: OK, if you re-tested it, let's assume that I am making some mistake in obtaining the same codebase. Since I am very new to using Phabricator, I wouldn't be surprised.

Well, this is merged back to master, so please check against master now. If testing goes OK, we can release 2.9.8.