Aaaah, sorry, I misinterpreted one thing. You're absolutely right, 60 sec for aligning makes sense, since it is more than a single astrometry run.
Sep 9 2019
Sep 5 2019
Good idea, Alex. And welcome to the Scheduler-Optimizing-Club :-)
Sep 3 2019
Thanks, understood. But where does the M102 entry for N5457 come from? In the OpenNGC catalog it's only mentioned in the comment. Same for N5866, by the way.
Sep 2 2019
@cdersch: if I understand OpenNGC right, it does not contain any Messier names. Where do they come from?
Did Christian answer? In the meantime I learned that it is historically unclear whether NGC5866 really is the M102 from Messier's catalog.
Resolved.
Sep 1 2019
Aug 31 2019
Aug 27 2019
I've been using it for several weeks now and really appreciate the "restart immediately" function. I typically have 2-3 different targets per night, and it is very handy to restart an aborted job after, e.g., a cloud has passed by. In the past few weeks I had several partially cloudy nights, and everything worked well.
ping :-)
Aug 19 2019
Aug 7 2019
Proposed code cosmetics added
Jasem, many thanks for the corrections, I wasn't aware of rangeHA. A small question: is there a reason why you left the old code commented out instead of simply deleting it?
Aug 5 2019
Jul 28 2019
Jul 23 2019
Ping :-)
Jul 22 2019
Select "Sub Frame" and "Auto Select Star". Set "Box Size" to a small value, e.g. 128. Then start "Autofocus". Without this fix, the green box appears in the top left corner of the full frame image. With the fix, it appears around the focus star.
Jul 21 2019
Re-focusing before capturing reworked, focusing states removed from calibration stages
Jul 20 2019
Jul 17 2019
Sorry, the description is misleading, the focus check is added to startNextExposure(). This is exactly the same place where I added the meridian flip check, where we had the same problem.
Update to the discussion regarding aborted jobs and limits.
Jul 14 2019
Serialization changed as suggested, log messages corrected. Now we need to agree on how to proceed with the problem that restarting aborted jobs immediately conflicts with the idea of multi-day schedules.
Serializing error strategy corrected, log messages for aborted jobs unified
We need to decide whether we want the scheduler to be able to handle multi-day schedules (as it currently can, without this change) or to restart aborted jobs immediately. The latter is important for robust scheduling during partly cloudy nights, where a job gets aborted but may continue as soon as the cloud has passed.
Many thanks for your comments. I will think about them and come back.
Jul 13 2019
Works as described, good point!
Jul 10 2019
Makes sense, good point!
Jul 2 2019
I hope this is what you meant.
- canRelativeMove added as D-Bus property
- String formatting improved
Jun 23 2019
Jun 19 2019
Jun 18 2019
Jun 11 2019
I am primarily working with PHD2. I also tested #1 with the internal guider; it showed the same problem as PHD2 and should be fixed. #2 is PHD2-specific, and #3 should work with all guiders.
Jun 10 2019
Jun 3 2019
Hi Eric,
no problem, I'm not made of sugar :-)
Indeed, I think this feature makes sense - exactly as you described it. Pausing - for example to rearrange cabling or to test a better guiding setup - makes sense. My typical setup is shooting LLLRGB sequences. Being able to pause in order to fix something, with capturing then continuing exactly where I paused, would be nice to have (but maybe not more).
Many thanks, so let's start the feature branch. As a next step I would like to discuss with you the interaction of Observatory and Scheduler.
Eric, please be so kind as to test the pause button on the current master without this diff. There you will see that after pressing the button a) the Scheduler waits for a capture sequence to complete, and b) when you press the "Start/Resume" button after the sequence completes, nothing happens.
Jun 2 2019
I guess you do not mean the bug fix part. For this the use case is very simple: "I want it working :-)"
May 29 2019
Checkboxes for weather status actions implemented
Ready-button deactivated, showing only the status
ping :-)
Switched to branch observatory_work
May 26 2019
From my perspective, we could start now with the feature branch. Currently, the Observatory is standalone. As a next step, I would like to implement the interaction with the Scheduler, but this should be handled in a separate diff.
Make observatory ready with one mouse click
May 25 2019
Observatory status added
May 24 2019
- Shutter actions invisible if no shutter present
- Measurement of delay in secs
- Scheduler actions prepared, but left invisible
May 22 2019
Action check box for stopping the scheduler added (not implemented)
Et voilà.
Observatory window title set to Observatory
Execute actions for weather warnings and alerts
Taking actions due to weather warnings or alerts implemented
May 20 2019
- Handling disconnects for weather and dome added
- UI elements for observatory actions and status added - not implemented yet
Currently, the interference with other modules isn't that heavy, so technically it is not necessary. Nevertheless, I would prefer a feature branch so that we can launch the module with a mature set of functionality.
May 19 2019
Tool tips added to the Observatory module
D-Bus interface added.
May 15 2019
May 7 2019
May 5 2019
May 2 2019
We follow another path with D20068 - closed.
May 1 2019
Solved in D19456.
Solved in another diff.
Seems like this is not merged into master.
I think this diff has not been merged yet.
Update diff after rebase (second try)
Damn, that was wrong! Please wait for another update...
Rebased upon latest master version.
I'm waiting for Eric's review. I can post a rebased version later if required.
Apr 21 2019
Apr 19 2019
Apr 13 2019
I would opt for 3.2. The behavior is better than without the fix.
Apr 11 2019
When a scheduler job aborts, it does not change the completed frame count. So perhaps a failing in-sequence focus might do the job?
Aborting, right, but restarting resets the counters to zero.
Back to the original question of how to proceed. I tried to construct a situation where the calculation of captured frames is called after the partial completion of a scheduler job so that exactly the problem we discussed above occurs - and I failed; I could not bring KStars into such a state. So maybe this is a theoretical discussion.
Apr 10 2019
Related? https://indilib.org/forum/ekos/4995-ekos-scheduler-eats-frames-from-the-sequence.html#37550
With the latest information posted - YES. It's exactly the situation with a capture job of 18*L+3*R+3*G+3*B.
It turned out meanwhile that it is not related. It is simply a lack of awareness of the option "Remember job progress". Maybe we should consider moving this option directly to the Scheduler tab - or at least show there that the option is active.
It's unclear, but maybe, yes.
Agreed, the issue we are working on here is a good hint. I asked for more details on the forum.
Could I offer my own implementation of the two fixes that are in this differential? I'd like to first fix the FindNextJob issue, then, in another diff, the frame counting via messages from the capture module.
Absolutely fine, I have no ambition to be the one who fixes it. I simply want it to be fixed asap. :-)
My previous message is about the calculation: the order of operators produces 1 when less than a full sequence is executed. It also assumes that sequences are distributed equally between jobs, which I disagree with, as the code in remember mode tries to gather frames to complete sequences in order, and then schedules the remaining ones.
You are right, the calculation of captured frames of a certain sequence job is only correct once the entire capture job has completed. If it does not run completely, the frames taken in the last cycle are not counted correctly.
Let's take an example with an LLLRGB sequence that completes twice and then terminates after two further L frames. In the calculation, we have schedJob->getCompletedCount() = 14, capturesPerRepeat = 6 and seqJob->getCount() = 3. As a result we get 14/6*3 = 6 - which is wrong; it should be 8.
Apr 7 2019
That's a good idea, but weeell, I have two disagreements: first, this is integer arithmetic and you probably need to reorder your operators; and second, if I understand correctly, you are assuming the captures are scattered equiprobably over all sequence jobs.
I'll nonetheless test this asap.
Could you please be more specific? To be honest, I do not understand what you mean.
Apr 6 2019
Update: there was another bug in calculating whether light frames are required by a scheduler job. In the case that "Repeat for x runs" is selected and one sequence has more captures than required while another one has fewer, the scheduler job assumes that no light frames are required - which is wrong.
Can you please check this?
https://indilib.org/forum/general/4908-meridian-flip-issue-with-the-scheduler.html
Resolved with D20150
Mar 31 2019
Mar 29 2019
Hm, maybe, but I think it's a different issue. But thanks for the hint, I will try to reproduce it.
Mar 28 2019
OK, but I checked for the reordering problem:
- It's possible to sort jobs while the Scheduler is running using the "Reset state and sort observations" button. That is not a regression, that is an existing bug.
- It's possible to reorder and reset jobs while the Scheduler is running using the "Reset state and force reevaluation" button. That is a regression in this differential.
Eric, I cannot reproduce it. When I start the scheduler, all buttons on the top left side of the queue are deactivated.