Merged
71 changes: 71 additions & 0 deletions .github/workflows/commit-message-check.yml
@@ -0,0 +1,71 @@
name: Commit Message Check

on:
  pull_request:
    types:
      - opened
      - synchronize
      - reopened
      - edited
  push:
    branches-ignore:
      - main

jobs:
  conventional-commits:
    name: conventional-commits
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Validate commit messages
        env:
          EVENT_NAME: ${{ github.event_name }}
          BEFORE_SHA: ${{ github.event.before }}
          AFTER_SHA: ${{ github.sha }}
          BASE_SHA: ${{ github.event.pull_request.base.sha }}
          HEAD_SHA: ${{ github.event.pull_request.head.sha }}
        run: |
          set -euo pipefail

          regex='^(feat|fix|refactor|test|docs|chore|perf|style|revert)(\([a-z0-9][a-z0-9._/-]*\))?(!)?: [^ ].*[^.]$'

          if [ "$EVENT_NAME" = "pull_request" ]; then
            range="${BASE_SHA}..${HEAD_SHA}"
          else
            if [ "${BEFORE_SHA}" = "0000000000000000000000000000000000000000" ]; then
              range="${AFTER_SHA}"
            else
              range="${BEFORE_SHA}..${AFTER_SHA}"
            fi
          fi

          echo "Validating commits in range: ${range}"

          invalid=0

          while IFS=$'\t' read -r sha subject; do
            if [ -z "${sha}" ]; then
              continue
            fi

            if [[ "${subject}" =~ ^Merge[[:space:]] ]]; then
              continue
            fi

            if [[ ! "${subject}" =~ ${regex} ]]; then
              echo "Invalid commit message: ${sha} ${subject}"
              invalid=1
            fi
          done < <(git log --format='%H%x09%s' "${range}")

          if [ "${invalid}" -ne 0 ]; then
            echo "Commit message validation failed. Use Conventional Commits."
            exit 1
          fi

          echo "All commit messages passed Conventional Commit validation."
123 changes: 123 additions & 0 deletions AGENTS.md
@@ -0,0 +1,123 @@
# AGENTS.md for `golemcore-models`

This repository is a shared registry of provider-agnostic `ModelSettings` JSON files used by GolemCore during model discovery.

## Repository contract

- Store shared defaults under `models/<model-id>.json`.
- Store provider-specific overrides under `providers/<provider>/<model-id>.json` only when a provider genuinely needs different defaults.
- JSON files must contain full `ModelSettings` without the `provider` field.
- Prefer stable shared model IDs over dated aliases, snapshots, or temporary rollout IDs.

## Primary data sources

Always prefer official provider documentation over third-party catalogs, SDK enums, forum posts, or blog summaries.

### OpenAI

Primary sources:

- `https://developers.openai.com/api/docs/models`
- model-specific pages under `https://platform.openai.com/docs/models/*`
- relevant official guides when they describe model behavior (for example, the Codex or GPT-5 guides)

Use OpenAI docs to confirm:

- canonical model ID
- supported modalities
- context window
- max output tokens when available
- reasoning effort support, if explicitly documented

### Anthropic

Primary sources:

- `https://docs.anthropic.com/`
- the Claude model overview and model reference pages in the official Anthropic docs

Use Anthropic docs to confirm:

- canonical model ID or stable alias
- context window
- image support
- current family naming

### Gemini

Primary sources:

- `https://ai.google.dev/gemini-api/docs/models`
- other official Gemini API docs under `https://ai.google.dev/`

Use Gemini docs to confirm:

- canonical model ID
- context window
- multimodal support
- stable alias vs dated preview alias

## Mapping rules

When converting provider docs into `ModelSettings`:

- `displayName`: use the public product name shown in provider docs
- `supportsVision`: `true` only when image input is supported
- `supportsTemperature`: set from explicit provider compatibility guidance when documented; otherwise use the most conservative safe value
- `maxInputTokens`: use the published context window
- `reasoning`: include only when the provider explicitly documents supported reasoning levels, or when the setting is intentionally inherited from the base model family and that inheritance is obvious

## Exclusions

Do not add:

- deprecated models
- dated aliases when a stable alias exists
- snapshots unless the repository deliberately decides to pin them
- audio-only, image-only, realtime-only, TTS, embedding, moderation, or other specialized endpoint models unless the catalog explicitly expands scope

## Validation

Before committing:

- run `jq empty models/*.json` and, if applicable, `jq empty providers/**/*.json`
- keep `README.md` in sync with the supported models actually present in the repository
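The pre-commit checks above can be scripted. AGENTS.md documents `jq empty` as the check; this sketch substitutes `python3 -m json.tool` as the parser so it also runs where jq is not installed — treat the helper name `validate_json` as illustrative:

```shell
# Validate every ModelSettings JSON file before committing.
# `python3 -m json.tool` stands in for the documented `jq empty`.
validate_json() {
  if python3 -m json.tool "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "invalid JSON: $1" >&2
    return 1
  fi
}

failed=0
for f in models/*.json providers/*/*.json; do
  [ -e "$f" ] || continue      # unmatched glob: nothing to validate yet
  validate_json "$f" || failed=1
done
if [ "$failed" -eq 0 ]; then
  echo "all JSON files parsed"
fi
```

Both checks only confirm the files parse; they do not verify the `ModelSettings` schema itself.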

## Git workflow

- Direct pushes to `main` are prohibited.
- All changes must go through a feature branch and Pull Request.
- Keep commits focused and reviewable.

## Commit messages

Use Conventional Commits.

Format:

`<type>[optional scope]: <description>`

Allowed types:

- `feat`
- `fix`
- `refactor`
- `test`
- `docs`
- `chore`
- `perf`
- `style`
- `revert`

Rules:

- use imperative mood
- keep the subject concise
- do not end the subject with a period
- use a scope when it improves clarity, for example `models`, `readme`, `registry`, or `github`

Examples:

- `feat(models): add gpt-5.3-codex defaults`
- `docs(readme): update supported models table`
- `chore(github): add commit message workflow`
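The subject rules above are enforced in CI by a single extended regular expression; this sketch checks candidate subjects against the same pattern with `grep -E` before pushing (the `check_subject` helper is illustrative, not part of the workflow):

```shell
# The same Conventional Commit pattern the CI workflow enforces.
regex='^(feat|fix|refactor|test|docs|chore|perf|style|revert)(\([a-z0-9][a-z0-9._/-]*\))?(!)?: [^ ].*[^.]$'

# check_subject prints ok/FAIL for a candidate commit subject.
check_subject() {
  if printf '%s\n' "$1" | grep -Eq "$regex"; then
    echo "ok:   $1"
  else
    echo "FAIL: $1"
  fi
}

check_subject 'feat(models): add gpt-5.3-codex defaults'   # ok: type + scope
check_subject 'chore(github)!: drop legacy workflow'       # ok: breaking-change marker
check_subject 'Added supported models table.'              # FAIL: no type, trailing period
```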
46 changes: 46 additions & 0 deletions README.md
@@ -0,0 +1,46 @@
# golemcore-models

Shared registry layout for GolemCore model defaults.

Lookup order:

1. `providers/<provider>/<model-id>.json`
2. `models/<model-id>.json`

Each JSON file contains full `ModelSettings` without the `provider` field.
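The two-step lookup can be sketched as a small helper: the provider-specific override wins, and the shared default is the fallback. `resolve_model_settings` is a hypothetical name for illustration, not a GolemCore API:

```shell
# Resolve a model's settings file using the documented lookup order:
# 1. providers/<provider>/<model-id>.json   (override)
# 2. models/<model-id>.json                 (shared default)
resolve_model_settings() {
  provider="$1"
  model_id="$2"
  for candidate in "providers/$provider/$model_id.json" "models/$model_id.json"; do
    if [ -f "$candidate" ]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  return 1    # model not present in the registry
}
```

Run from the repository root, `resolve_model_settings openai gpt-5.1` prints the override path if one exists, otherwise the shared default.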

This repository intentionally contains only current general-purpose text/vision models used in Model Catalog discovery.

Current families included:

- OpenAI GPT-5 and Codex models
- Anthropic Claude current aliases
- Gemini current text multimodal models

## Supported models

| Family | Model ID | Display name | Vision | Temperature | Reasoning |
| --- | --- | --- | --- | --- | --- |
| OpenAI | `gpt-5.1` | GPT-5.1 | yes | no | `none`, `low`, `medium`, `high` |
| OpenAI | `gpt-5-codex` | GPT-5-Codex | yes | no | `minimal`, `low`, `medium`, `high` |
| OpenAI | `gpt-5.1-codex` | GPT-5.1 Codex | yes | no | `none`, `low`, `medium`, `high` |
| OpenAI | `gpt-5.1-codex-mini` | GPT-5.1 Codex mini | yes | no | `none`, `low`, `medium`, `high` |
| OpenAI | `gpt-5.1-codex-max` | GPT-5.1 Codex Max | yes | no | `none`, `low`, `medium`, `high` |
| OpenAI | `gpt-5.2` | GPT-5.2 | yes | no | `none`, `low`, `medium`, `high`, `xhigh` |
| OpenAI | `gpt-5.2-codex` | GPT-5.2-Codex | yes | no | `low`, `medium`, `high`, `xhigh` |
| OpenAI | `gpt-5.2-pro` | GPT-5.2 pro | yes | no | `medium`, `high`, `xhigh` |
| OpenAI | `gpt-5.3-codex` | GPT-5.3-Codex | yes | no | `low`, `medium`, `high`, `xhigh` |
| OpenAI | `gpt-5.3-codex-spark` | GPT-5.3-Codex-Spark | no | no | no explicit reasoning map |
| OpenAI | `gpt-5.4` | GPT-5.4 | yes | no | `none`, `low`, `medium`, `high`, `xhigh` |
| OpenAI | `gpt-5.4-mini` | GPT-5.4 mini | yes | no | no explicit reasoning map |
| OpenAI | `gpt-5.4-nano` | GPT-5.4 nano | yes | no | no explicit reasoning map |
| OpenAI | `gpt-5.4-pro` | GPT-5.4 pro | yes | no | `medium`, `high`, `xhigh` |
| Anthropic | `claude-opus-4-6` | Claude Opus 4.6 | yes | yes | no explicit reasoning map |
| Anthropic | `claude-sonnet-4-6` | Claude Sonnet 4.6 | yes | yes | no explicit reasoning map |
| Anthropic | `claude-haiku-4-5` | Claude Haiku 4.5 | yes | yes | no explicit reasoning map |
| Gemini | `gemini-2.5-pro` | Gemini 2.5 Pro | yes | yes | no explicit reasoning map |
| Gemini | `gemini-2.5-flash` | Gemini 2.5 Flash | yes | yes | no explicit reasoning map |
| Gemini | `gemini-2.5-flash-lite` | Gemini 2.5 Flash-Lite | yes | yes | no explicit reasoning map |
| Gemini | `gemini-3-flash-preview` | Gemini 3 Flash Preview | yes | yes | `minimal`, `low`, `medium`, `high` |
| Gemini | `gemini-3.1-flash-lite-preview` | Gemini 3.1 Flash-Lite Preview | yes | yes | `minimal`, `low`, `medium`, `high` |
| Gemini | `gemini-3.1-pro-preview` | Gemini 3.1 Pro Preview | yes | yes | `low`, `medium`, `high` |
6 changes: 6 additions & 0 deletions models/claude-haiku-4-5.json
@@ -0,0 +1,6 @@
{
  "displayName": "Claude Haiku 4.5",
  "supportsVision": true,
  "supportsTemperature": true,
  "maxInputTokens": 200000
}
6 changes: 6 additions & 0 deletions models/claude-opus-4-6.json
@@ -0,0 +1,6 @@
{
  "displayName": "Claude Opus 4.6",
  "supportsVision": true,
  "supportsTemperature": true,
  "maxInputTokens": 1000000
}
6 changes: 6 additions & 0 deletions models/claude-sonnet-4-6.json
@@ -0,0 +1,6 @@
{
  "displayName": "Claude Sonnet 4.6",
  "supportsVision": true,
  "supportsTemperature": true,
  "maxInputTokens": 1000000
}
6 changes: 6 additions & 0 deletions models/gemini-2.5-flash-lite.json
@@ -0,0 +1,6 @@
{
  "displayName": "Gemini 2.5 Flash-Lite",
  "supportsVision": true,
  "supportsTemperature": true,
  "maxInputTokens": 1048576
}
6 changes: 6 additions & 0 deletions models/gemini-2.5-flash.json
@@ -0,0 +1,6 @@
{
  "displayName": "Gemini 2.5 Flash",
  "supportsVision": true,
  "supportsTemperature": true,
  "maxInputTokens": 1048576
}
6 changes: 6 additions & 0 deletions models/gemini-2.5-pro.json
@@ -0,0 +1,6 @@
{
  "displayName": "Gemini 2.5 Pro",
  "supportsVision": true,
  "supportsTemperature": true,
  "maxInputTokens": 1048576
}
23 changes: 23 additions & 0 deletions models/gemini-3-flash-preview.json
@@ -0,0 +1,23 @@
{
  "displayName": "Gemini 3 Flash Preview",
  "supportsVision": true,
  "supportsTemperature": true,
  "maxInputTokens": 1048576,
  "reasoning": {
    "default": "high",
    "levels": {
      "minimal": {
        "maxInputTokens": 1048576
      },
      "low": {
        "maxInputTokens": 1048576
      },
      "medium": {
        "maxInputTokens": 1048576
      },
      "high": {
        "maxInputTokens": 1048576
      }
    }
  }
}
23 changes: 23 additions & 0 deletions models/gemini-3.1-flash-lite-preview.json
@@ -0,0 +1,23 @@
{
  "displayName": "Gemini 3.1 Flash-Lite Preview",
  "supportsVision": true,
  "supportsTemperature": true,
  "maxInputTokens": 1048576,
  "reasoning": {
    "default": "minimal",
    "levels": {
      "minimal": {
        "maxInputTokens": 1048576
      },
      "low": {
        "maxInputTokens": 1048576
      },
      "medium": {
        "maxInputTokens": 1048576
      },
      "high": {
        "maxInputTokens": 1048576
      }
    }
  }
}
20 changes: 20 additions & 0 deletions models/gemini-3.1-pro-preview.json
@@ -0,0 +1,20 @@
{
  "displayName": "Gemini 3.1 Pro Preview",
  "supportsVision": true,
  "supportsTemperature": true,
  "maxInputTokens": 1048576,
  "reasoning": {
    "default": "high",
    "levels": {
      "low": {
        "maxInputTokens": 1048576
      },
      "medium": {
        "maxInputTokens": 1048576
      },
      "high": {
        "maxInputTokens": 1048576
      }
    }
  }
}
23 changes: 23 additions & 0 deletions models/gpt-5-codex.json
@@ -0,0 +1,23 @@
{
  "displayName": "GPT-5-Codex",
  "supportsVision": true,
  "supportsTemperature": false,
  "maxInputTokens": 400000,
  "reasoning": {
    "default": "medium",
    "levels": {
      "minimal": {
        "maxInputTokens": 400000
      },
      "low": {
        "maxInputTokens": 400000
      },
      "medium": {
        "maxInputTokens": 400000
      },
      "high": {
        "maxInputTokens": 400000
      }
    }
  }
}