**docs/releases/detailed_releases.md**
## Release 2025/11/20

- **b845-add-filter-to-global-tags-multi-widget** – Added a search feature to the global tags multi-select widget, making it easier to find tags quickly.
- **b875-meq-student-home-page-links** – Added links to Module Evaluation Questionnaires (MEQs) on the student home page.
- **b876-meq-set-navigation** – Renamed “Surveys” to Module Evaluation Questionnaires (MEQs) and improved navigation so students can easily access MEQs for all their modules.
- **b879-meq-module-stats** – Improved the statistics view so it is clear which sets are MEQ sets.
- **b880-meq-set-dates-defaults** – Added default hide and release dates for MEQs at the tenant level to simplify setup.
- **b886-student-progress-in-teacher-view-is-not-aligned-if-it-contains-stars** – Improved the alignment of progress indicators in the teacher view, including when a star (indicating a solution submitted by a student) is shown.

## Release 2025/11/17

- **b849-import-tags-with-same-column-name** – Allow columns in the student-upload spreadsheet to have identical titles.
- **b864-admin-chat-statistics** – Added a new page in the admin view to preview chat usage statistics.
- **b868-set-card-items-do-not-align-on-some-cards** – Aligned items inside set cards in the teacher view so they line up.
- **b870-cant-add-links-in-lexdown-except-in-raw-markdown** – Fixed link creation in the Lexdown editor.
- **b874-making-saving-draft-message-to-appear-subtle** – Made the “saving draft” message for question versions less intrusive.
- **b882-expanded-list-of-teacher-roles-overflows-window** – Improved the UI to ensure the expanded teacher-roles popup is centered.
- **b886-student-progress-in-teacher-view-is-not-aligned-if-it-contains-stars** – UI fix to ensure student progress bars remain aligned even when they contain stars.

## Release 2025/11/06

- **b871-set-number-in-mobile-view** – Fixed the display of set numbers shown as part of the question title in the mobile view.
**docs/releases/status.md**
Lambda Feedback is a cloud-native application that is available with full service.

This page contains information about any known incidents where service was interrupted. The page began in November 2024 following a significant incident. The purpose is to be informative, transparent, and ensure lessons are always learned so that service improves over time.

The Severity of incidents is the product of:

- number of users affected (for 100 users, N = 1),
- magnitude of the effect (scale 1-5, from workable to no service),
- duration (in hours).

Severity:

- x < 1 is LOW
- 1 < x < 100 is SIGNIFICANT
- x > 100 is HIGH.

The severity is used to decide how much we invest in preventative measures, detection, mitigation plans, and rehearsals.
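The severity formula above can be sketched in a few lines of code. This is a minimal illustration of the stated definition, not actual Lambda Feedback code; the function and variable names are invented for the example.

```python
# Sketch of the severity formula described above:
# severity = N * magnitude * duration, where N = users_affected / 100.

def severity(users_affected: int, magnitude: int, duration_hours: float) -> float:
    """Compute incident severity; magnitude is on a 1-5 scale."""
    n = users_affected / 100  # for 100 users, N = 1
    return n * magnitude * duration_hours

def classify(score: float) -> str:
    """Map a severity score onto the LOW / SIGNIFICANT / HIGH bands."""
    if score < 1:
        return "LOW"
    if score <= 100:
        return "SIGNIFICANT"
    return "HIGH"

# Example: 50 users affected, magnitude 2 (workable but degraded), 30 minutes.
score = severity(50, 2, 0.5)  # 0.5 * 2 * 0.5 = 0.5
print(classify(score))        # LOW
```

For instance, a full outage (magnitude 5) affecting 1000 users for 3 hours scores 10 × 5 × 3 = 150, which lands in the HIGH band.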
## 2025 November 18th: Some evaluation functions failing (Severity: LOW)

Some evaluation functions returned errors.

### Timeline (UK / GMT)

The application was fully available during this time period.

2025/11/18 21:18 GMT: some but not all evaluation functions (external microservices) failed. Investigation initiated and a message added to the home page.

2025/11/18 21:39 GMT: home page updated to tell users that the cause had been identified.

2025/11/18 21:45 GMT: issue resolved. Home page updated.

### Analysis

Some of our evaluation functions still use an old version of our baselayer, which calls GitHub to retrieve a schema and validate inputs. GitHub git services were down (https://www.githubstatus.com/incidents/5q7nmlxz30sk), which meant that those of our functions that call GitHub could not validate their schemas and therefore failed. Other evaluation functions had previously been updated to remove the need to call GitHub and were therefore not affected by the issue.

The same root cause meant that we could not push code updates during the incident, because code is deployed via GitHub. GitHub had announced they were resolving the issue, and when it was resolved our services returned to normal.

### Recommended action

Update all evaluation function baselayers to remove the dependency on external calls when validating.
## 2025 November 10th: Service unresponsive (Severity: SIGNIFICANT)

The application was unresponsive.

### Timeline (UK / GMT)

2025/11/10 14:21 Service became unresponsive, e.g. pages not loading. Reports came in from users through various channels. Developers began investigating, and a message was sent to Teachers.

2025/11/10 14:28 Service returned to normal. A home page message was displayed to inform users.

### Analysis

During the period of unresponsiveness, the key symptom within the system was CPU overload on the servers. Error logging and alerts successfully detected the downtime and alerted the developer team, who responded. Although developers were looking into the problem and tried to increase resources to resolve it, the autoscaling in fact resolved the problem itself.

The underlying cause was high usage leading to CPU overload. This type of scenario is normal and correctly triggered autoscaling. The issue in this case was that autoscaling should happen seamlessly, without service interruptions in the intervening period.

### Action taken

- Decrease the CPU and memory usage level at which scaling is triggered. This increases overall costs but decreases the chance of service interruptions.
- Enhance system logs so that more information is available if a similar event occurs.
- Investigate CPU and memory usage to identify opportunities for improvement (outcome: usage is typical for Node.js applications, no further action).
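As an illustration of the first action, lowering the utilisation threshold at which scaling triggers is typically a one-line configuration change. The page does not say which autoscaler Lambda Feedback uses, so the following Kubernetes-style sketch is hypothetical and every resource name in it is invented:

```yaml
# Hypothetical HorizontalPodAutoscaler sketch; names are illustrative only.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: lambda-feedback-web        # hypothetical deployment name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: lambda-feedback-web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # lowered (e.g. from 80) to scale out earlier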
## 2025 October 17th: Handwriting input temporarily unavailable (Severity: SIGNIFICANT)

Handwriting in response areas (but not in the canvas) did not return a preview and could not be submitted. Users received an error in a toast saying that the service would not work. All other services remained operational.

### Timeline (UK / BST)

2025/10/17 08:24 Handwriting inputs ceased to return previews to the user due to a deployed code change that removed redundant code, but also code that, it transpired, was required.
**docs/teacher/guides/analytics.md**
## Module-level analytics

The module overview tab displays cohort progress and cohort activity. Access to the overview tab is subject to teacher role privileges.



The Content tab displays all Sets, with cohort-level data on activity and progress within each Set. Access to the data within the Content tab is subject to teacher role privileges.



Within the Content tab there is a stats tab which shows cohort-level data on question completion and statistics on time spent on each question. Response Areas are listed and show completion rates overall, and best per student. Detailed response statistics are available via the 'Explore' button on the Response Area. The stats tab availability is subject to teacher role privileges.

The ID of a Response Area is linked between module instances. If a Response Area maintains the same response data shape, then the data across multiple instances can be combined for stronger statistics. As of 10/7/25 there are no features in the UI to link data across instances, but these links will be added in future (the data saved is in the correct structure to allow this feature on all past data).


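Because Response Area IDs are shared across module instances, combining the data amounts to grouping records by that ID and summing. The sketch below illustrates the idea only; the UI feature does not exist yet, and the function and field names here are hypothetical, not Lambda Feedback's actual data schema.

```python
# Hypothetical sketch: combine completion data across module instances,
# keyed by the shared Response Area ID. Field names are illustrative only.
from collections import defaultdict

def combine_by_response_area(records):
    """records: dicts like
    {"response_area_id": str, "attempts": int, "completions": int}."""
    totals = defaultdict(lambda: {"attempts": 0, "completions": 0})
    for record in records:
        t = totals[record["response_area_id"]]
        t["attempts"] += record["attempts"]
        t["completions"] += record["completions"]
    # Attach an overall completion rate per Response Area.
    return {
        ra_id: {**t, "completion_rate": t["completions"] / t["attempts"]}
        for ra_id, t in totals.items()
        if t["attempts"]
    }

# Example: the same Response Area appearing in two module instances.
records = [
    {"response_area_id": "ra-1", "attempts": 40, "completions": 30},
    {"response_area_id": "ra-1", "attempts": 60, "completions": 45},
]
combined = combine_by_response_area(records)
print(combined["ra-1"]["completion_rate"])  # 0.75
```

The grouping only makes sense when the response data shape is unchanged between instances, which is exactly the condition the paragraph above states.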
## Analytics and question versions

Analytics begin when a question is _published_. After publishing a question for the first time it becomes available to students, and their usage is logged and fed back to the student and the teacher.
All content below the line uses the [Lexdown](./lexdown-content-editor.md) content editor functionality. Worked solutions can be [branched](https://lambda-feedback.github.io/user-documentation/teacher/guides/good-practice/#branching), or split into [steps](https://lambda-feedback.github.io/user-documentation/teacher/guides/lexdown/#steps-in-worked-solutions). Future developments will add branching and response areas to structured tutorials.

It is not necessary to include all three methods of help. If you only provide content for one tab, only that button will appear in the published student version.
Guidance provides a summary of the task, its difficulty, and the estimated time.

Guidance is provided at the question level, and can be set for each question in the set.

## Editing Guidance

To edit the question guidance, click the "Edit Guidance" button at the top of the page.

Here you can enter four parts:

1. Guidance Blurb - a short description of the question.
2. Minimum Time Estimate - the least amount of time the task should take, in minutes.
3. Maximum Time Estimate - the longest amount of time the task should take, in minutes.
4. Skill - the difficulty of the task, rated out of three stars.

### Obtaining Guidance Time

We also support suggesting the time estimate. This uses machine learning, based on the worked solution and skill level, to determine an estimated time for the task.

To use this feature, do the following:

1. Fill in all the question's attributes as much as possible, i.e. the question's text, the worked solution, skill level, etc. The more information is filled in, the more accurate the suggested guidance time will be.

2. Click on the "Suggest" button in the guidance configuration tab after you have filled in all the question's attributes.