Replies: 3 comments
I think my comment on issue #313 is also relevant here. Colleagues are currently struggling to isolate behaviors of interest that are rare and occur late in the video. While re-exporting video segments outside of LabGym is a possible workaround, I believe there's a more streamlined approach we could implement.
The sorting UI is not the only way to sort behavior examples in LabGym. First, if the video is long, you don't need to generate an example pair at every frame; use a larger frame interval instead. You also don't need to sort every example you generate - you don't need that many examples. Second, if going through examples one by one in the sorting UI is too slow, try your operating system's native file manager: view the folder contents as large icons to preview all the examples at once and choose which time window to focus on.
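To make the "larger interval" suggestion concrete, here is a minimal sketch of the frame-sampling arithmetic. The function name, parameters, and the 30 fps example are illustrative assumptions, not LabGym's actual API; LabGym exposes an equivalent interval setting in its own example-generation dialog.

```python
# Sketch: pick frames at a coarser interval instead of generating an
# animation/pattern-image pair at every single frame. All names here are
# hypothetical; only the arithmetic is the point.

def example_frames(total_frames, interval, start_frame=0):
    """Frame indices at which to generate an example pair."""
    return list(range(start_frame, total_frames, interval))

# A 30 fps, 10-minute video has 18000 frames. Sampling every 150 frames
# (one pair per 5 seconds) yields 120 pairs instead of 18000.
frames = example_frames(18000, 150)
print(len(frames))  # 120
```

Even a coarse interval like this usually leaves plenty of examples per behavior category, and it shrinks the sorting workload by two orders of magnitude.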
This entry is NOT responsive to Henry's suggestion of alternative approaches. Lezio said: As users sort behaviors (animation and pattern-image pairs), the sorted items disappear from view. For example, if a user pauses and then resumes work days later, the remaining "behavior gifs" will be ordered by filename based only on the still-unsorted set. This can dismantle ethologically meaningful behavioral sequences, making sequential labeling temporally disjointed. Ideally, we'd like a scrollable timeline for the sorter tool that maintains the time axis of the original video while preserving all current sorting functionality.
Currently, sorted behaviors are moved to behavior-specific folders under the hood - I know that. However, it should be feasible to also preserve references or copies of the animation and pattern-image pairs, specifically for rendering the scrollable timeline. Importantly, the scrollable timeline should include color-coded labels to reflect past or ongoing sorting efforts, similar to what DeepEthogram does for individual video frames.
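A minimal sketch of the data model such a timeline would need: merge the still-unsorted examples with the ones already moved into behavior folders, then re-sort everything by frame number to restore the video's time axis. The folder layout and the `<frame>.gif` filename pattern are assumptions for illustration, not LabGym's actual on-disk format.

```python
# Sketch of the proposed timeline view's backing data. Each entry is
# (frame_number, status, path), where status is either "unsorted" or a
# behavior name, and would drive the color-coded label in the UI.
from pathlib import Path

def timeline_entries(unsorted_dir, behavior_dirs):
    """Rebuild video-time order across sorted and unsorted example pairs.

    behavior_dirs maps a behavior name to its folder of sorted examples.
    Assumes filenames encode the source frame number, e.g. "300.gif".
    """
    entries = []
    for gif in Path(unsorted_dir).glob("*.gif"):
        entries.append((int(gif.stem), "unsorted", gif))
    for behavior, folder in behavior_dirs.items():
        for gif in Path(folder).glob("*.gif"):
            entries.append((int(gif.stem), behavior, gif))
    # Sorting by frame number restores the original time axis, no matter
    # how many examples have already been moved into behavior folders.
    return sorted(entries)
```

Because sorting moves files rather than deleting them, this index can be rebuilt cheaply on every resume, so a days-later session would still render the full, temporally contiguous timeline.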
User reports that positioning a "cursor" in a long video ("jogging"?)
to locate a section to work on can require a LOT of clicking. The user
could write a macro to click 1000 times on their behalf, but that
shouldn't be necessary, right?
Does this mean non-uniform jogging needs to be implemented? Or is
this a case where a workaround or better approach already exists,
like "use ffmpeg to find and isolate the desired video segment"?
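For reference, the ffmpeg workaround mentioned above amounts to a single stream-copy cut. Here is a small sketch that builds the command; the paths and timestamps are placeholders, and actually running it requires ffmpeg on the PATH.

```python
# Sketch: cut a time window out of a long video without re-encoding,
# so the extracted segment can be loaded into LabGym on its own.
import subprocess

def cut_segment(src, start, duration, dst):
    """Build an ffmpeg command that copies one segment of src to dst.

    Placing -ss before -i makes ffmpeg seek quickly to the start point;
    -c copy avoids re-encoding, so the cut takes seconds, not minutes.
    """
    return ["ffmpeg", "-ss", start, "-i", src,
            "-t", duration, "-c", "copy", dst]

cmd = cut_segment("long_video.mp4", "00:42:00", "00:03:30", "segment.mp4")
# To actually run it: subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

With stream copy the cut lands on the nearest keyframe rather than the exact timestamp, which is usually fine for isolating a behavior window to label.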
On 2025-10-18, Henry asked for more details about the use case.