58 changes: 5 additions & 53 deletions docs.json
@@ -401,72 +401,36 @@
{
"group": "ML Frameworks",
"pages": [
"models/integrations/catalyst",
"models/integrations/deepchem",
"models/integrations/dspy",
"models/integrations/keras",
"models/integrations/lightgbm",
"models/integrations/mmengine",
"models/integrations/mmf",
"models/integrations/paddledetection",
"models/integrations/paddleocr",
"models/integrations/lightning",
"models/integrations/ignite",
"models/integrations/skorch",
"models/integrations/tensorflow",
"models/integrations/xgboost"
]
},
{
"group": "ML Libraries",
"pages": [
"models/integrations/deepchecks",
"models/integrations/huggingface",
"models/integrations/diffusers",
"models/integrations/autotrain",
"models/integrations/fastai",
"models/integrations/fastai/v1",
"models/integrations/composer",
"models/integrations/openai-gym",
"models/integrations/prodigy",
"models/integrations/pytorch-geometric",
"models/integrations/torchtune",
"models/integrations/scikit",
"models/integrations/simpletransformers",
"models/integrations/spacy",
"models/integrations/stable-baselines-3",
"models/integrations/ultralytics"
"models/integrations/huggingface",
"models/integrations/openai-gym",
"models/integrations/pytorch-geometric"
]
}
]
},
{
"group": "Cloud Platforms",
"pages": [
"models/integrations/sagemaker",
"models/integrations/databricks",
"models/integrations/azure-openai-fine-tuning",
"models/integrations/openai-fine-tuning",
"models/integrations/cohere-fine-tuning",
"models/integrations/openai-api",
"models/integrations/nim"
]
},
{
"group": "Workflow Orchestration",
"pages": [
"models/integrations/kubeflow-pipelines-kfp",
"models/integrations/metaflow",
"models/integrations/dagster",
"models/integrations/hydra"
"models/integrations/openai-api"
]
},
{
"group": "Other",
"pages": [
"models/integrations/docker",
"models/integrations/tensorboard",
"models/integrations/w-and-b-for-julia",
"models/integrations/yolox",
"models/integrations/yolov5"
]
@@ -480,27 +444,15 @@
{
"group": "Fundamentals",
"pages": [
"models/tutorials/experiments",
"models/tutorials/tables",
"models/tutorials/sweeps",
"models/tutorials/artifacts",
"models/tutorials/workspaces",
"models/tutorials/weave_models_registry",
"models/evaluate-models"
]
},
{
"group": "Framework Tutorials",
"pages": [
"models/tutorials/keras",
"models/tutorials/keras_models",
"models/tutorials/keras_tables",
"models/tutorials/pytorch",
"models/tutorials/lightning",
"models/tutorials/tensorflow",
"models/tutorials/tensorflow_sweeps",
"models/tutorials/xgboost_sweeps",
"models/tutorials/huggingface"
"models/tutorials/xgboost_sweeps"
]
},
{
154 changes: 82 additions & 72 deletions models/integrations/keras.mdx
@@ -3,52 +3,66 @@ title: Keras
---
import { ColabLink } from '/snippets/en/_includes/colab-link.mdx';

<ColabLink url="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/intro/Intro_to_Weights_%26_Biases_keras.ipynb" />
{/* <ColabLink url="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/intro/Intro_to_Weights_%26_Biases_keras.ipynb" /> */}

## Keras callbacks

W&B has three callbacks for Keras, available from `wandb` v0.13.4. For the legacy `WandbCallback`, scroll down.
Use Keras callbacks to track experiments, log model checkpoints, and visualize model predictions. Keras callbacks are available in the `wandb.integration.keras` module with Python SDK versions `0.13.4` and above.

W&B Keras integration provides the following callbacks:

- **`WandbMetricsLogger`**: Use this callback for [Experiment Tracking](/models/track/). It logs your training and validation metrics along with system metrics to W&B.

- **`WandbModelCheckpoint`**: Use this callback to log your model checkpoints to W&B [Artifacts](/models/artifacts/).

- **`WandbEvalCallback`**: This base callback logs model predictions to W&B [Tables](/models/tables/) for interactive visualization.

These new callbacks:
## Install and import Keras integration

* Adhere to Keras design philosophy.
* Reduce the cognitive load of using a single callback (`WandbCallback`) for everything.
* Make it easy for Keras users to modify the callback by subclassing it to support their niche use case.
Install the latest version of W&B.

```bash
pip install -U wandb
```

To use the Keras integration, import the required classes from `wandb.integration.keras`:

## Track experiments with `WandbMetricsLogger`

<ColabLink url="https://github.com/wandb/examples/blob/master/colabs/keras/Use_WandbMetricLogger_in_your_Keras_workflow.ipynb" />
```python
import wandb
from wandb.integration.keras import WandbMetricsLogger, WandbModelCheckpoint, WandbEvalCallback
```

The following sections describe each callback in detail with code examples.

## Track experiments with `WandbMetricsLogger`

`WandbMetricsLogger` automatically logs Keras' `logs` dictionary that callback methods such as `on_epoch_end` and `on_batch_end` take as an argument.
<ColabLink url="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/keras/Use_WandbMetricLogger_in_your_Keras_workflow.ipynb" />

This tracks:
`wandb.integration.keras.WandbMetricsLogger()` automatically logs Keras' `logs` dictionary that callback methods such as `on_epoch_end` and `on_batch_end` take as an argument.

* Training and validation metrics defined in `model.compile`.
* System (CPU/GPU/TPU) metrics.
* Learning rate (for both a fixed value and a learning rate scheduler).
The partial example below shows how to use `WandbMetricsLogger()` in a Keras workflow. First, compile the model with the desired optimizer, loss function, and metrics. Then, initialize a W&B run using `wandb.init()`. Finally, pass the `WandbMetricsLogger()` callback to `model.fit()`.

```python
import wandb
from wandb.integration.keras import WandbMetricsLogger
import tensorflow as tf

model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy", tf.keras.metrics.TopKCategoricalAccuracy(k=5, name="top@5_accuracy")],
)

# Initialize a new W&B Run
wandb.init(config={"bs": 12})
with wandb.init(config={"batch_size": 64}) as run:

# Pass the WandbMetricsLogger to model.fit
model.fit(
    X_train, y_train, validation_data=(X_test, y_test), callbacks=[WandbMetricsLogger()]
)
    # Pass the WandbMetricsLogger to model.fit
    model.fit(
        X_train, y_train, validation_data=(X_test, y_test), callbacks=[WandbMetricsLogger()]
    )
```

### `WandbMetricsLogger` reference
The previous example logs training and validation metrics such as `loss`, `accuracy`, and `top@5_accuracy` to W&B at the end of each epoch. It also logs:

* System (CPU/GPU/TPU) metrics.
* Learning rate (for both a fixed value and a learning rate scheduler).

### `WandbMetricsLogger` reference

| Parameter | Description |
| --------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
@@ -57,11 +71,11 @@ model.fit(

## Checkpoint a model using `WandbModelCheckpoint`

<ColabLink url="https://github.com/wandb/examples/blob/master/colabs/keras/Use_WandbModelCheckpoint_in_your_Keras_workflow.ipynb" />
<ColabLink url="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/keras/Use_WandbModelCheckpoint_in_your_Keras_workflow.ipynb" />

Use the `WandbModelCheckpoint` callback to periodically save the Keras model (`SavedModel` format) or model weights and upload them to W&B as a `wandb.Artifact` for model versioning.

This callback is subclassed from [`tf.keras.callbacks.ModelCheckpoint`](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint), thus the checkpointing logic is taken care of by the parent callback.
This callback is subclassed from [`tf.keras.callbacks.ModelCheckpoint()`](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint), so the checkpointing logic is handled by the parent callback.

This callback saves:

@@ -71,25 +85,25 @@ This callback saves:
* Only model weights or the whole model.
* The model either in `SavedModel` format or in `.h5` format.

Use this callback in conjunction with `WandbMetricsLogger`.
Use this callback in conjunction with `WandbMetricsLogger()`.

```python
import wandb
from wandb.integration.keras import WandbMetricsLogger, WandbModelCheckpoint

# Initialize a new W&B Run
wandb.init(config={"bs": 12})

# Pass the WandbModelCheckpoint to model.fit
model.fit(
    X_train,
    y_train,
    validation_data=(X_test, y_test),
    callbacks=[
        WandbMetricsLogger(),
        WandbModelCheckpoint("models"),
    ],
)
with wandb.init(config={"bs": 12}) as run:

    # Pass the WandbModelCheckpoint to model.fit
    model.fit(
        X_train,
        y_train,
        validation_data=(X_test, y_test),
        callbacks=[
            WandbMetricsLogger(),
            WandbModelCheckpoint("models"),
        ],
    )
```
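
Because `WandbModelCheckpoint` subclasses `tf.keras.callbacks.ModelCheckpoint`, the parent callback's arguments should work as usual. A minimal sketch, assuming the inherited `ModelCheckpoint` arguments apply unchanged and using a hypothetical `filepath`:

```python
from wandb.integration.keras import WandbModelCheckpoint

# Hedged sketch: argument names come from tf.keras.callbacks.ModelCheckpoint,
# which WandbModelCheckpoint subclasses; the filepath is a hypothetical example.
checkpoint = WandbModelCheckpoint(
    filepath="models/model-{epoch:02d}",  # one checkpoint per epoch
    monitor="val_loss",       # metric used to rank checkpoints
    save_best_only=True,      # keep only the best model seen so far
    save_weights_only=False,  # save the whole model, not just the weights
)
```

Setting `save_best_only=True` mirrors the "best model based on monitor" behavior listed above.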

### `WandbModelCheckpoint` reference
@@ -119,7 +133,7 @@ WandbModelCheckpoint(

### Efficiently log checkpoints on a TPU architecture

While checkpointing on TPUs, you might encounter the `UnimplementedError: File system scheme '[local]' not implemented` error. This happens because the model directory (`filepath`) must use a cloud storage bucket path (`gs://bucket-name/...`), and this bucket must be accessible from the TPU server. We can, however, use the local path for checkpointing, which in turn is uploaded as an Artifact.
While checkpointing on TPUs, you might encounter the `UnimplementedError: File system scheme '[local]' not implemented` error. This happens because the model directory (`filepath`) must use a cloud storage bucket path (`gs://bucket-name/...`), and this bucket must be accessible from the TPU server. Instead, W&B uses the local path for checkpointing, which is in turn uploaded as an artifact.

```python
checkpoint_options = tf.saved_model.SaveOptions(experimental_io_device="/job:localhost")
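
# (Hedged sketch, not part of the original snippet: WandbModelCheckpoint
# subclasses tf.keras.callbacks.ModelCheckpoint, which accepts an `options`
# argument, so the SaveOptions above can presumably be forwarded like this.)
model.fit(
    X_train,
    y_train,
    callbacks=[WandbModelCheckpoint("models", options=checkpoint_options)],
)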
@@ -132,16 +146,16 @@

## Visualize model predictions using `WandbEvalCallback`

<ColabLink url="https://github.com/wandb/examples/blob/e66f16fbe7ae7a2e636d59350a50059d3f7e5494/colabs/keras/Use_WandbEvalCallback_in_your_Keras_workflow.ipynb" />
<ColabLink url="https://colab.research.google.com/github/wandb/examples/blob/master/colabs/keras/Use_WandbEvalCallback_in_your_Keras_workflow.ipynb" />

The `WandbEvalCallback` is an abstract base class to build Keras callbacks primarily for model prediction and, secondarily, dataset visualization.
The `WandbEvalCallback()` is an abstract base class to build Keras callbacks primarily for model prediction and, secondarily, dataset visualization.

This abstract callback is agnostic with respect to the dataset and the task. To use this, inherit from this base `WandbEvalCallback` callback class and implement the `add_ground_truth` and `add_model_prediction` methods.
This abstract callback is agnostic with respect to the dataset and the task. To use this, inherit from this base `WandbEvalCallback()` callback class and implement the `add_ground_truth` and `add_model_prediction` methods.

The `WandbEvalCallback` is a utility class that provides methods to:
The `WandbEvalCallback()` is a utility class that provides methods to:

* Create data and prediction `wandb.Table` instances.
* Log data and prediction Tables as `wandb.Artifact`.
* Create data and prediction `wandb.Table()` instances.
* Log data and prediction Tables as `wandb.Artifact()`.
* Log the data table `on_train_begin`.
* Log the prediction table `on_epoch_end`.

@@ -186,28 +200,24 @@ class WandbClfEvalCallback(WandbEvalCallback):
# ...

# Initialize a new W&B Run
wandb.init(config={"hyper": "parameter"})

# Add the Callbacks to Model.fit
model.fit(
    X_train,
    y_train,
    validation_data=(X_test, y_test),
    callbacks=[
        WandbMetricsLogger(),
        WandbClfEvalCallback(
            validation_data=(X_test, y_test),
            data_table_columns=["idx", "image", "label"],
            pred_table_columns=["epoch", "idx", "image", "label", "pred"],
        ),
    ],
)
with wandb.init(config={"hyper": "parameter"}) as run:

    # Add the Callbacks to Model.fit
    model.fit(
        X_train,
        y_train,
        validation_data=(X_test, y_test),
        callbacks=[
            WandbMetricsLogger(),
            WandbClfEvalCallback(
                validation_data=(X_test, y_test),
                data_table_columns=["idx", "image", "label"],
                pred_table_columns=["epoch", "idx", "image", "label", "pred"],
            ),
        ],
    )
```
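
The implementation of `WandbClfEvalCallback` is elided in the snippet above (`# ...`). Below is a minimal sketch of what such a subclass might look like for an image-classification task. The override names, signatures, and table attributes are assumptions based on the descriptions on this page, not the library's exact API; check the reference table below before relying on them.

```python
import wandb
from wandb.integration.keras import WandbEvalCallback


class WandbClfEvalCallback(WandbEvalCallback):
    """Hedged sketch: method signatures and table attributes are assumptions."""

    def __init__(self, validation_data, data_table_columns, pred_table_columns):
        super().__init__(data_table_columns, pred_table_columns)
        self.x, self.y = validation_data

    def add_ground_truth(self, logs=None):
        # Populate the data table once, at the start of training.
        for idx, (image, label) in enumerate(zip(self.x, self.y)):
            self.data_table.add_data(idx, wandb.Image(image), label)

    def add_model_predictions(self, epoch, logs=None):
        # Populate the prediction table at the end of each epoch.
        preds = self.model.predict(self.x).argmax(axis=-1)
        for idx, (image, label, pred) in enumerate(zip(self.x, self.y, preds)):
            self.pred_table.add_data(epoch, idx, wandb.Image(image), label, pred)
```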

<Note>
By default, W&B displays logged Tables on the [Artifact page](/models/artifacts/explore-and-traverse-an-artifact-graph/) rather than on the **Workspace** page.
</Note>

### `WandbEvalCallback` reference

| Parameter | Description |
@@ -229,20 +239,20 @@ If you are implementing a callback for model prediction visualization by inherit

## `WandbCallback` [legacy]

Use the W&B library `WandbCallback` class to automatically save all the metrics and loss values tracked in `model.fit`.
Use the W&B library `WandbCallback()` class to automatically save all the metrics and loss values tracked in `model.fit()`.

```python
import wandb
from wandb.integration.keras import WandbCallback

wandb.init(config={"hyper": "parameter"})
with wandb.init(config={"hyper": "parameter"}) as run:

... # code to set up your model in Keras
    # code to set up your model in Keras

# Pass the callback to model.fit
model.fit(
    X_train, y_train, validation_data=(X_test, y_test), callbacks=[WandbCallback()]
)
    # Pass the callback to model.fit
    model.fit(
        X_train, y_train, validation_data=(X_test, y_test), callbacks=[WandbCallback()]
    )
```

You can watch the short video [Get Started with Keras and W&B in Less Than a Minute](https://www.youtube.com/watch?ab_channel=Weights&Biases&v=4FjDIJ-vO_M).
2 changes: 1 addition & 1 deletion models/integrations/simpletransformers.mdx
@@ -1,6 +1,6 @@
---
description: How to integrate W&B with the Transformers library by Hugging Face.
title: Simple Transformers
title: Hugging Face Simple Transformers
---

This library is based on the Transformers library by Hugging Face. Simple Transformers lets you quickly train and evaluate Transformer models. Only three lines of code are needed to initialize, train, and evaluate a model. It supports Sequence Classification, Token Classification (NER), Question Answering, Language Model Fine-Tuning, Language Model Training, Language Generation, T5 Model, Seq2Seq Tasks, Multi-Modal Classification, and Conversational AI.
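
As a quick illustration of the integration, here is a minimal sketch. It assumes the standard Simple Transformers `ClassificationModel` API; the project name is hypothetical and `train_df` is a pandas DataFrame with `text` and `labels` columns:

```python
from simpletransformers.classification import ClassificationModel

# Setting wandb_project in args turns on W&B logging for this model.
model = ClassificationModel(
    "roberta",
    "roberta-base",
    args={"wandb_project": "simpletransformers-demo"},  # hypothetical project name
)

# Training metrics are streamed to the W&B project configured above.
model.train_model(train_df)
```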
2 changes: 1 addition & 1 deletion models/tutorials/keras_models.mdx
@@ -1,5 +1,5 @@
---
title: Keras models
title: Keras
---
import { ColabLink } from '/snippets/en/_includes/colab-link.mdx';
