2 changes: 0 additions & 2 deletions CONTRIBUTING.md
uv run pre-commit run --all-files
- validates tag/version match,
- builds with `uv build --no-sources`,
- publishes with PyPI Trusted Publishing (OIDC).

86 changes: 72 additions & 14 deletions README.md
# weaviate-engram

Python SDK for Engram by Weaviate.

> [!WARNING]
> **Engram is currently in preview.** While in preview (pre-1.0), the API is subject to breaking changes without notice. Use in production at your own risk.

Engram is a fully managed memory service by Weaviate. It lets you add persistent, personalized memory to AI assistants and agents — no infrastructure to set up or manage. When you add a memory, Engram processes it asynchronously through a background pipeline that extracts, deduplicates, and reconciles facts. Memories are scoped at three levels — project, user, and conversation — which can be mixed and matched freely. Each scope is backed by Weaviate's multi-tenant architecture, ensuring strong isolation between tenants.

## Requirements

- Python 3.11 to 3.14
- [uv](https://docs.astral.sh/uv/)

## Installation

```bash
pip install weaviate-engram
```

Or with uv:

```bash
uv add weaviate-engram
```

## Quick Start

Create a project and get an API key at [console.weaviate.cloud/engram](https://console.weaviate.cloud/engram).

```python
from engram import EngramClient

client = EngramClient(api_key="your-api-key")
```

**Add a memory from a string:**

```python
run = client.memories.add("Alice prefers async Python and avoids Java.", user_id="user_123")
```

**Add a memory from a conversation:**

```python
run = client.memories.add(
    [
        {"role": "user", "content": "What's the best way to handle retries?"},
        {"role": "assistant", "content": "Exponential backoff with jitter is the standard approach."},
        {"role": "user", "content": "Got it — I'll use that in my HTTP client."},
    ],
    user_id="user_123",
)
```

**Search memories:**

```python
results = client.memories.search(query="What does Alice think about Python?", user_id="user_123")
for memory in results:
    print(memory.content)
```

**Wait for a run to complete** (memory processing is asynchronous):

```python
status = client.runs.wait(run.run_id, timeout=60.0)
print(status.status) # "completed" or "failed"
print(f"+{len(status.memories_created)} ~{len(status.memories_updated)} -{len(status.memories_deleted)}")
```
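Under the hood, waiting on an asynchronous run amounts to polling until a terminal state is reached or a deadline passes. The helper below is a hypothetical sketch of that pattern, not the SDK's implementation; `fetch_status` stands in for whatever call retrieves the run's current status:

```python
import time


def wait_for_run(fetch_status, timeout=60.0, poll_interval=0.5):
    """Poll `fetch_status()` until it returns a terminal state or the timeout elapses.

    `fetch_status` is a hypothetical callable returning "pending", "completed",
    or "failed". Raises TimeoutError if no terminal state is seen in time.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status()
        if status in ("completed", "failed"):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"run did not finish within {timeout:.0f}s")
        time.sleep(poll_interval)
```

With the real SDK you would simply call `client.runs.wait(...)` as shown above rather than writing this loop yourself.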

## Async Client

An async client is also available:

```python
from engram import AsyncEngramClient

client = AsyncEngramClient(api_key="your-api-key")

run = await client.memories.add("Alice prefers async Python and avoids Java.", user_id="user_123")
results = await client.memories.search(query="What does Alice think about Python?", user_id="user_123")
```

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md).

## License

This project is licensed under the [BSD 3-Clause License](LICENSE).

## Support

For questions or help, reach out to [support@weaviate.io](mailto:support@weaviate.io).