
SkyThought


Known Issues and Fixes

Configuration & Scripts

skythought/evals/tasks/math/math500.yaml

  • Updated dataset_path to qq8933/MATH500.
  • Set dataset_subset: default to ensure compatibility with the offline local cache structure.
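
For reference, a minimal sketch of the patched task config, showing only the two fields named above; the rest of the file is assumed unchanged:

# skythought/evals/tasks/math/math500.yaml (relevant fields only; sketch)
dataset_path: qq8933/MATH500
dataset_subset: default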

Virtual Environment Patches (Hotfixes)

To resolve library incompatibilities and offline loading errors without rebuilding the entire environment, the following patches were applied directly to files in .venv:

skythought/evals/batch/logging/__init__.py

  • Issue: ModuleNotFoundError: No module named 'ray._private.ray_logging.filters'
  • Fix: Commented out unused ray imports to prevent crash on startup.
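
A sketch of the hotfix, assuming the offending import resembles the one below; the module path comes from the error message, while the imported name CoreContextFilter is illustrative:

# skythought/evals/batch/logging/__init__.py (sketch of the hotfix)
try:
    # This private Ray module moved across versions; it was unused in this
    # code path, so a failed import is downgraded to a no-op stub.
    from ray._private.ray_logging.filters import CoreContextFilter
except ImportError:
    CoreContextFilter = None  # safe: nothing here used it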

.venv/.../datasets/features/features.py

  • Issue: AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'
  • Fix: Replaced pa.PyExtensionType with pa.ExtensionType to match the installed pyarrow version.
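
An equivalent compatibility shim, assuming pa.ExtensionType is an acceptable stand-in for the removed class (the actual hotfix edited datasets/features/features.py in place):

# pyarrow >= 17 removed PyExtensionType, but the installed `datasets`
# release still references pa.PyExtensionType. Aliasing keeps it importable.
import pyarrow as pa

if not hasattr(pa, "PyExtensionType"):
    pa.PyExtensionType = pa.ExtensionType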

.venv/.../fsspec/utils.py

  • Issue: ValueError: Invalid pattern: '**' can only be an entire path component
  • Fix: Disabled the strict glob pattern validation check to allow Hugging Face's dataset path patterns.
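
The strict check lives in fsspec's glob-pattern translation; a sketch of what was disabled, with the exact code varying by fsspec version:

# fsspec/utils.py, inside the glob-translation helper (sketch).
# The check below rejects Hugging Face dataset patterns such as "**.arrow",
# so the hotfix comments it out:
#
# if "**" in part and part != "**":
#     raise ValueError(
#         "Invalid pattern: '**' can only be an entire path component"
#     )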

.venv/.../datasets/builder.py

  • Issue: NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
  • Fix: Bypassed the check that blocked loading datasets from the local filesystem, enabling offline evaluation.
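
A sketch of the bypassed guard; under mismatched fsspec/datasets versions it misidentifies the local filesystem as remote (exact code varies by datasets version):

# datasets/builder.py, in as_dataset (sketch of the disabled guard):
#
# if is_remote_filesystem(self._fs):
#     raise NotImplementedError(
#         f"Loading a dataset cached in a {type(self._fs).__name__} "
#         "is not supported."
#     )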

News

  • [2025/02/21] 🎉 We released S*: Test-time scaling for code generation (paper, code), a simple and extensible test-time scaling framework for code generation.
  • [2025/02/11] 🎉 We released Sky-T1-7B (model) and Sky-T1-mini (model) to demonstrate the potential of RL in further enhancing the model's capabilities beyond distillation.
  • [2025/01/23] ⚡️ We released Sky-T1-32B-Flash (model, data) to tackle overthinking and reduce reasoning sequence lengths while maintaining accuracy.
  • [2025/01/19] 🎉 The chat demo for Sky-T1-32B-Preview is live! Please check it out!
  • [2025/01/10] 🎉 We have released our Sky-T1-32B-Preview model and data through Hugging Face!


Getting Started

We open-source the code and scripts we used for data curation, training, and evaluation for Sky-T1-32B-Preview; you can find more details in each directory.

  • recipes: Recipes (data curation steps and training strategies) for building our models Sky-T1-32B-Flash, Sky-T1-32B-Preview, and the Sky-T1-7B series.
  • skythought/evals: Our data generation and evaluation library. We provide a convenient CLI for evaluation as well as a Scorer API for scoring during data curation and training (example).
  • skythought/train: Training scripts for Sky-T1. We use Llama-Factory to perform training.
  • skythought/skythought-rl: RL training code for Sky-T1-7B and Sky-T1-mini.

Evaluation

Usage

You can install the latest release from PyPI or from source:

pip install skythought

Installing from source

# Clone the repository
git clone https://github.com/NovaSky-AI/SkyThought.git
cd SkyThought

# Create and activate a virtual environment (using uv here)
uv venv --python 3.10
source .venv/bin/activate

# Install the package in editable mode
uv pip install -e .

Running evaluation is as simple as:

skythought evaluate --model NovaSky-AI/Sky-T1-32B-Preview --task aime24

We support a wide variety of datasets in mathematics, science, and coding:

  • AIME'24
  • MATH500
  • GPQADiamond
  • MMLU
  • ARC-Challenge
  • OlympiadBench
  • AMC'23
  • TACO
  • APPS
  • LiveCodeBench
  • MMLU Pro
  • MinervaMath
  • GSM8K
  • AIME'25

For more details, please refer to our evaluation guide and the evaluation README.
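
Anything beyond a single task can be scripted around the same CLI; a minimal sketch, assuming the task identifiers are the lowercase names used in the task configs (e.g. math500), which may differ from the display names above:

import subprocess

# Hypothetical batch driver around the documented `skythought evaluate` CLI.
# Task names here are assumptions inferred from the config paths
# (e.g. skythought/evals/tasks/math/math500.yaml); see the evaluation README.
for task in ["aime24", "math500", "gsm8k"]:
    subprocess.run(
        [
            "skythought", "evaluate",
            "--model", "NovaSky-AI/Sky-T1-32B-Preview",
            "--task", task,
        ],
        check=True,
    )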

Evaluation results

Below, we show our evaluation results for the Sky-T1-32B-Preview model across math, coding, and science benchmarks.

Metric                     Sky-T1-32B-Preview   Qwen-2.5-32B-Instruct   QwQ     o1-preview
Math500                    86.4                 81.4                    92.2    81.4
AIME2024                   43.3                 16.7                    50.0    40.0
LiveCodeBench-Easy         86.3                 84.6                    90.7    92.9
LiveCodeBench-Medium       56.8                 40.8                    56.3    54.9
LiveCodeBench-Hard         17.9                 9.8                     17.1    16.3
GPQA-Diamond               56.8                 45.5                    52.5    75.2
OlympiadBench (Math, EN)   59.79                46.74                   62.17   59.2

Results on non-reasoning benchmarks

We also evaluate on non-reasoning benchmarks (instruction-following, QA, etc.) to test whether the model has traded off capability in other domains for better performance on reasoning-related benchmarks.

Metric                   Sky-T1-32B-Preview   Qwen-2.5-32B-Instruct   QwQ-32B-Preview   Eval Implementation
MMLU (0 shot; no CoT)    78.36                74.14                   71.23             lm_eval
MMLU (5 shot; no CoT)    82.46                82.62                   82.32             lm_eval
ARC-C (0 shot; no CoT)   49.49                49.4                    49.66             lm_eval
IFEval                   75.79                78.74                   42.51             lm_eval
LLM-as-a-Judge           9.12                 9.19                    8.30              fastchat
MGSM (0 shot; direct)    33                   42.3                    19.07             lm_eval
MGSM (8-shot; direct)    58.4                 61.47                   58.5              lm_eval
BFCL-v3                  53.18                58.92                   17.41             BFCL
Arena-Hard               74.79                66.51                   52.6              Arena-Hard-Auto

For more details, refer here.

Fully Open-source: Driving Progress Together

We believe that open-source collaboration drives progress, and with Sky-T1-32B-Preview, we are fully committed to empowering the community. We open-source all details (i.e., data, code, model weights) to enable the community to easily replicate and improve on our results:

[Comparison table: availability of Data, Code, Report, Math domain, Coding domain, and Model Weights for Sky-T1-32B-Preview, STILL-2, Journey, QwQ, and o1. Per the paragraph above, Sky-T1-32B-Preview releases all of them.]

Citation

The code in this repository is mostly described in the post below. Please consider citing this work if you find the repository helpful.

@misc{sky_t1_2025,
  author       = {NovaSky Team},
  title        = {Sky-T1: Train your own O1 preview model within \$450},
  howpublished = {https://novasky-ai.github.io/posts/sky-t1},
  note         = {Accessed: 2025-01-09},
  year         = {2025}
}

Acknowledgement

This work was done at the Berkeley Sky Computing Lab, with amazing compute support from Lambda Labs, Anyscale, and Databricks. We would like to express our gratitude for the valuable academic feedback and support from the STILL-2 team and Junyang Lin from the Qwen team.
