# Run 23264510315 — Timing Analysis

GitHub Actions Run
| Metric | Time |
|---|---|
| Total wall-clock | 7m 3s (423s) |
| Agentic LLM reasoning (excluded) | 4m 1s (241s) |
| Non-reasoning overhead | 3m 2s (182s) |
## Hierarchical Timing Breakdown
### 1. activation job — 7s (20:04:13–20:04:20)

| Step | Time |
|---|---|
| Runner provisioning + action repository downloads (3 actions) | 3s |
| gh-aw-actions/setup (copy 462 JS files to runner) | 1s |
| actions/checkout + secret validation + create activation artifact | 3s |
### 2. Inter-job scheduling: activation → agent — 6s
### 3. agent job — 360s total, 119s non-reasoning (20:04:26–20:10:26)
#### 3a. Pre-execution setup — 28s (20:04:26–20:04:53)

| Step | Time |
|---|---|
| Runner provisioning + action downloads (5 actions) | 2s |
| gh-aw-actions/setup (copy 462 JS files) | 2s |
| actions/checkout | 1s |
| Assess FV state + compute task weights (gh CLI queries + Python) | 1s |
| install_copilot_cli.sh | 3s |
| install_awf_binary.sh v0.24.2 | 1s |
| Docker image pull — 6 images in parallel (gh-aw-firewall/agent, api-proxy, squid, gh-aw-mcpg, github-mcp-server, node:lts-alpine) | 9s |
| Safe outputs setup (tools_meta.json + API key gen + env vars) | 2s |
| MCP gateway startup (launch container, poll for ready, check MCP servers) | 7s |
| → of which: hard-coded initialization sleep | 5s |
| → of which: health check + MCP connectivity verification | 2s |
| repo-memory artifact download | <1s |
#### 3b. Execute Copilot CLI step — setup portion — 22s (20:04:53–20:05:15)

| Step | Time |
|---|---|
| Config generation + firewall network + iptables | 2s |
| Container creation (4 containers: agent, api-proxy, squid, iptables-init) | 2s |
| awf-api-proxy + awf-squid health checks | 6s |
| awf-agent health check | 6s |
| Entrypoint: DNS config, chroot, user switch, proxy config | 1s |
| Initial startup + unset sensitive env tokens | 5s |
*(Excluded: agentic LLM reasoning — 241s, 20:05:15–20:09:16)*
#### 3c. Execute Copilot CLI step — container shutdown — 11s (20:09:16–20:09:27)

| Step | Time |
|---|---|
| docker stop iptables-init + awf-agent (SIGTERM, fast) | <1s |
| docker stop awf-api-proxy + awf-squid (SIGTERM, waiting for graceful exit) | 10s |
| Final exit | 1s |
#### 3d. Post-execute cleanup + threat detection — 58s (20:09:27–20:10:26)

| Step | Time |
|---|---|
| detect_inference_access_error.sh, git config, copy session state | 1s |
| stop_mcp_gateway.sh | 1s |
| Upload agent output + session artifacts | 2s |
| Threat detection sandbox — total | 55s |
| → Container creation (4 containers) | 2s |
| → awf-api-proxy + awf-squid health checks | 5s |
| → awf-agent health check | 6s |
| → Entrypoint setup (UID/GID adjust, DNS, chroot, proxy config) | 8s |
| → Threat detection LLM inference (claude-sonnet-4.6: 96k tokens in, 1.2k out) | 24s |
| → docker stop awf-api-proxy + awf-squid (10s SIGTERM timeout again) | 10s |
| Final github-script / upload | 1s |
### 4. Inter-job scheduling: agent → safe_outputs — 14s (critical path)

### 4b. Inter-job scheduling: agent → push_repo_memory — 6s (parallel, not critical path)
### 5. push_repo_memory job — 5s (20:10:32–20:10:37) (parallel)

| Step | Time |
|---|---|
| Runner init + action downloads + gh-aw-actions/setup | 3s |
| github-script (push repo-memory commit) | 1s |
### 6. safe_outputs job — 13s (20:10:40–20:10:53) (critical path)

| Step | Time |
|---|---|
| Runner provisioning + action downloads | 6s |
| gh-aw-actions/setup | 1s |
| download-artifact (×2) + actions/checkout | 3s |
| github-script (create issue/PR — failed: permission denied) | 3s |
### 7. Inter-job scheduling: safe_outputs → conclusion — 16s (critical path)

### 8. conclusion job — 7s (20:11:09–20:11:16)

| Step | Time |
|---|---|
| Runner provisioning + action downloads | 5s |
| gh-aw-actions/setup + download-artifact | 2s |
## Summary by Category

| Category | Time | % of non-reasoning |
|---|---|---|
| Threat detection (container overhead + LLM) | 55s | 30% |
| Inter-job scheduling gaps (critical path) | 36s | 20% |
| Execute Copilot CLI container setup + shutdown | 33s | 18% |
| Agent pre-execution setup | 28s | 15% |
| safe_outputs + conclusion + push_repo_memory job overhead | 24s | 13% |
| Activation job | 7s | 4% |
| **Total** | **183s** | |
## Top 5 Improvements

### #1 — Consolidate post-agent jobs into a single finalize job (~43s saving)
push_repo_memory, safe_outputs, and conclusion are three tiny jobs that together spend 36s just waiting for GitHub to provision runners (6s + 14s + 16s scheduling gaps), plus ~21s re-running identical boilerplate (runner init + action downloads + gh-aw-actions/setup) three times. Merged into a single post-agent finalize job, this collapses to one runner acquisition and one round of setup, followed by all three workloads run back to back. Estimated saving: ~43s off the end-to-end critical path.
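A sketch of the consolidated job, assuming a standard `needs:`-gated workflow (the job name, action refs, and script names are illustrative placeholders for the real workflow's step definitions):

```yaml
# Hypothetical finalize job replacing push_repo_memory, safe_outputs,
# and conclusion: one runner acquisition, one setup, then serial steps.
finalize:
  needs: agent
  if: always()                              # conclusion must run even on failure
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: gh-aw-actions/setup@main        # run once instead of three times
    - uses: actions/download-artifact@v4    # agent output + session artifacts
    - name: Push repo-memory commit
      run: ./push_repo_memory.sh            # placeholder for the real step
    - name: Create issue/PR from safe outputs
      run: ./safe_outputs.sh                # placeholder
    - name: Post run conclusion
      run: ./conclusion.sh                  # placeholder
```

The three jobs have no ordering dependency on each other in the current graph, so running them serially inside one job only trades a few seconds of parallelism for ~36s of scheduling gaps.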
### #2 — Fix the `docker stop` SIGTERM wait (~20s saving)
Both the main Execute Copilot CLI step (20:09:16 → 20:09:26) and the threat detection step (20:10:14 → 20:10:25) spend exactly 10s waiting for awf-api-proxy and awf-squid to stop: these containers aren't responding to SIGTERM, so they hit Docker's 10-second default timeout before the SIGKILL. Setting `stop_grace_period: 1s` (or passing `--time 1` to `docker stop`) on those two containers would save ~10s × 2 = 20s.
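A minimal compose-spec sketch of the fix, assuming awf-api-proxy and awf-squid are defined as compose services (service names mirror the container names in the log; the rest of each service definition is elided):

```yaml
services:
  awf-api-proxy:
    stop_grace_period: 1s   # SIGKILL after 1s instead of Docker's 10s default
  awf-squid:
    stop_grace_period: 1s   # squid ignores SIGTERM here, so a long grace buys nothing
```

The plain-CLI equivalent is `docker stop --time 1 awf-api-proxy awf-squid`. Since neither container holds state worth flushing at shutdown, the short grace period is safe.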
### #3 — Reduce awf container health check poll intervals (~14s saving)
Both sandbox executions (main agent + threat detection) each wait ~12s in `compose up --wait` watching health checks: 6s for api-proxy/squid, then another 6s for awf-agent. The containers are evidently ready sooner; the wait is governed by the healthcheck probe schedule (Docker's default probe `interval` is 30s, and each group empirically takes ~6s to report healthy). Tuning the healthchecks in the compose spec to `interval: 1s`, `start_period: 0s` would collapse each 6s wait to ~1–2s. Total: (12 − 3) × 2 ≈ 18s, or ~14s after accounting for the main step's overlap with the timed portion.
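A sketch of the tuned healthcheck for one of the services; the probe command, port, and endpoint are assumptions (the real spec's existing `test:` line would be kept as-is):

```yaml
services:
  awf-api-proxy:
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://localhost:8080/health"]  # assumed probe
      interval: 1s        # poll every second instead of the 30s default
      start_period: 0s    # begin probing immediately after container start
      timeout: 2s
      retries: 30         # keep the same overall 30s failure budget
```

With a 1s interval, `compose up --wait` unblocks within roughly a second of the container actually becoming healthy rather than on the next coarse probe tick.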
### #4 — Pre-bake the 6 Docker images into the runner image (~9s saving)
gh-aw-firewall/agent, api-proxy, squid, gh-aw-mcpg, github-mcp-server, and node:lts-alpine are pulled fresh on every run, costing 9s. These images are pinned to exact versions (v0.24.2, v0.1.17, v0.32.0) and don't change run-to-run. A custom large-runner image, or a periodically refreshed Docker layer cache, would eliminate the pulls entirely; it would also shave cold-start time off the container health checks, since the images would already be uncompressed on disk. Saving: ~9s.
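A sketch of the provisioning step for a custom runner image; registry prefixes and the tag-to-image mapping are illustrative, not taken from the run log (a real version would pin each image to its digest):

```shell
#!/bin/sh
# prebake_images.sh: hypothetical build-time step for a custom runner image.
# Pulls the pinned images once when the image is baked so runs skip the pull.
# DOCKER is overridable so the script can be dry-run (e.g. DOCKER=echo).
set -eu

prebake() {
  for img in \
      gh-aw-firewall/agent \
      gh-aw-firewall/api-proxy \
      gh-aw-firewall/squid \
      gh-aw-mcpg \
      github-mcp-server \
      node:lts-alpine; do
    # In the real script each name would carry a pinned tag@digest.
    "${DOCKER:-docker}" pull "$img"
  done
}
```

Run `prebake` from a Packer provisioner or a `RUN` line in the runner image's Dockerfile; the per-run pull step then becomes a no-op cache hit.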
### #5 — Replace the hard-coded 5s MCP gateway initialization sleep with active health polling (~4s saving)
At 20:04:46 the gateway process is launched and the script then unconditionally sleeps 5 seconds ("Waiting for gateway to initialize..." → 20:04:51 "Checking if gateway process is still alive after initialization"). At 20:04:51 it passes the health check immediately, meaning the container was ready well before the 5 seconds elapsed. Replacing the `sleep 5` with a fast poll (`until curl -sf http://localhost:$PORT/health; do sleep 0.2; done` under a 10s timeout) would bring gateway startup below ~2s. Saving: ~4s.
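A sketch of the replacement poll, assuming the gateway exposes the same health URL the later check already hits (the URL and the 10s ceiling are illustrative):

```shell
#!/bin/sh
# wait_for_gateway: replaces the unconditional `sleep 5` with an active poll.
# Returns as soon as the health endpoint answers; gives up after a 10s ceiling.
wait_for_gateway() {
  url="$1"
  deadline=$(( $(date +%s) + 10 ))          # hard 10s timeout
  until curl -sf "$url" >/dev/null 2>&1; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 0.2                               # fast retry; ~200ms granularity
  done
}
```

Called as, e.g., `wait_for_gateway "http://localhost:${PORT}/health"` with whatever port the existing health check uses; on failure the script can fall through to the existing "process still alive" diagnostics.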