37 changes: 34 additions & 3 deletions .github/workflows/test.yaml
@@ -136,8 +136,8 @@ jobs:
done


lint_code:
name: Lint app code
lint_code_js:
name: Lint JavaScript code
runs-on: ubuntu-latest
steps:
- name: Checkout Source code
@@ -159,4 +159,35 @@ jobs:
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

- run: npm run lint:code
- run: npm run lint:code:js


lint_code_py:
name: Lint Python code
runs-on: ubuntu-latest
steps:
- name: Checkout source code
uses: actions/checkout@v5

- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v47
with:
files: '**/*.{md,mdx}'
files_ignore: '!sources/api/*.{md,mdx}'
separator: ","

- name: Get uv
uses: astral-sh/setup-uv@v7

- name: List and lint changed files
env:
ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
run: |
IFS=',' read -ra FILE_ARRAY <<< "$ALL_CHANGED_FILES"
for file in "${FILE_ARRAY[@]}"; do
uv run --with doccmd --with ruff doccmd --language=py --language=python --command="ruff check --quiet" "$file"
done

# TODO: once we fix existing issues, switch to checking the whole sources/ directory
# - run: npm run lint:code:py
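To reproduce this check locally for a single documentation file, the same doccmd invocation can be run by hand. A minimal sketch, assuming `uv` is installed and using one of the files touched in this PR as the target:

```shell
# Mirrors the per-file loop above: doccmd extracts the fenced py/python code
# blocks from the Markdown file and runs "ruff check --quiet" on each of them.
uv run --with doccmd --with ruff \
  doccmd --language=py --language=python --command="ruff check --quiet" \
  sources/academy/platform/getting_started/apify_client.md
```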
7 changes: 5 additions & 2 deletions package.json
@@ -38,8 +38,11 @@
"lint:fix": "npm run lint:md:fix && npm run lint:code:fix",
"lint:md": "markdownlint '**/*.md'",
"lint:md:fix": "markdownlint '**/*.md' --fix",
"lint:code": "eslint .",
"lint:code:fix": "eslint . --fix",
"lint:code": "npm run lint:code:js && npm run lint:code:py",
"lint:code:fix": "npm run lint:code:js:fix",
"lint:code:js": "eslint .",
"lint:code:js:fix": "eslint . --fix",
"lint:code:py": "uv run --with doccmd --with ruff doccmd --language=py --command=\"ruff check --quiet\" ./sources",
"postinstall": "patch-package",
"postbuild": "node ./scripts/joinLlmsFiles.mjs && node ./scripts/indentLlmsFile.mjs"
},
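Locally, the new scripts can be run through npm as usual. A minimal sketch, assuming the project's Node.js dependencies are installed and `uv` is available on the PATH for the Python step:

```shell
npm run lint:code:js   # ESLint over the JavaScript sources
npm run lint:code:py   # doccmd + ruff over Python snippets under ./sources (needs uv)
npm run lint:code      # runs both of the above
```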
50 changes: 24 additions & 26 deletions sources/academy/platform/getting_started/apify_client.md
@@ -40,21 +40,20 @@ pip install apify-client

After installing the package, let's make a file named **client** and import the Apify client like so:

<!-- group doccmd[all]: start -->

<Tabs groupId="main">
<TabItem value="Node.js" label="Node.js">

```js
// client.js
```js title="client.js"
import { ApifyClient } from 'apify-client';
```

</TabItem>
<TabItem value="Python" label="Python">

```py
# client.py
```py title="client.py"
from apify_client import ApifyClient

```

</TabItem>
@@ -70,17 +69,14 @@ Before we can use the client though, we must create a new instance of the `Apify
<TabItem value="Node.js" label="Node.js">

```js
const client = new ApifyClient({
token: 'YOUR_TOKEN',
});
const client = new ApifyClient({ token: 'YOUR_TOKEN' });
```

</TabItem>
<TabItem value="Python" label="Python">

```py
client = ApifyClient(token='YOUR_TOKEN')

```

</TabItem>
@@ -108,7 +104,6 @@ run = client.actor('YOUR_USERNAME/adding-actor').call(run_input={
'num1': 4,
'num2': 2
})

```

</TabItem>
@@ -136,7 +131,6 @@ const dataset = client.dataset(run.defaultDatasetId);

```py
dataset = client.dataset(run['defaultDatasetId'])

```

</TabItem>
@@ -149,7 +143,6 @@ Finally, we can download the items in the dataset by using the **list items** fu

```js
const { items } = await dataset.listItems();

console.log(items);
```

@@ -158,21 +151,20 @@ console.log(items);

```py
items = dataset.list_items().items

print(items)

```

</TabItem>
</Tabs>

<!-- group doccmd[all]: end -->

The final code for running the Actor and fetching its dataset items looks like this:

<Tabs groupId="main">
<TabItem value="Node.js" label="Node.js">

```js
// client.js
```js title="client.js"
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({
@@ -185,17 +177,14 @@ const run = await client.actor('YOUR_USERNAME/adding-actor').call({
});

const dataset = client.dataset(run.defaultDatasetId);

const { items } = await dataset.listItems();

console.log(items);
```

</TabItem>
<TabItem value="Python" label="Python">

```py
# client.py
```py title="client.py"
from apify_client import ApifyClient

client = ApifyClient(token='YOUR_TOKEN')
@@ -206,11 +195,8 @@ actor = client.actor('YOUR_USERNAME/adding-actor').call(run_input={
})

dataset = client.dataset(run['defaultDatasetId'])

items = dataset.list_items().items

print(items)

```

</TabItem>
@@ -224,19 +210,26 @@ Let's change these two Actor settings via the Apify client using the [`actor.upd

First, we'll create a pointer to our Actor, similar to before (except this time, we won't be using `.call()` at the end):

<!-- group doccmd[all]: start -->

<Tabs groupId="main">
<TabItem value="Node.js" label="Node.js">

```js
import { ApifyClient } from 'apify-client';
const client = new ApifyClient({ token: 'YOUR_TOKEN' });

const actor = client.actor('YOUR_USERNAME/adding-actor');
```

</TabItem>
<TabItem value="Python" label="Python">

```py
actor = client.actor('YOUR_USERNAME/adding-actor')
from apify_client import ApifyClient
client = ApifyClient(token='YOUR_TOKEN')

actor = client.actor('YOUR_USERNAME/adding-actor')
```

</TabItem>
@@ -261,13 +254,18 @@ await actor.update({
<TabItem value="Python" label="Python">

```py
actor.update(default_run_build='latest', default_run_memory_mbytes=256, default_run_timeout_secs=20)

actor.update(
default_run_build='latest',
default_run_memory_mbytes=256,
default_run_timeout_secs=20,
)
```

</TabItem>
</Tabs>

<!-- group doccmd[all]: end -->

After running the code, go back to the **Settings** page of **adding-actor**. If your default options now look like this, it worked:

![New run defaults](./images/new-defaults.jpg)
2 changes: 1 addition & 1 deletion sources/platform/integrations/ai/agno.md
@@ -38,7 +38,7 @@ While our examples use OpenAI, Agno supports other LLM providers as well. You'll
- _Python environment_: Ensure Python is installed (version 3.8+ recommended).
- _Required packages_: Install the following dependencies in your terminal:

```bash
```shell
pip install agno apify-client
```

2 changes: 1 addition & 1 deletion sources/platform/integrations/ai/crewai.md
@@ -30,7 +30,7 @@ This guide demonstrates how to integrate Apify Actors with CrewAI by building a
- **OpenAI API key**: To power the agents in CrewAI, you need an OpenAI API key. Get one from the [OpenAI platform](https://platform.openai.com/account/api-keys).
- **Python packages**: Install the following Python packages:

```bash
```shell
pip install 'crewai[tools]' langchain-apify langchain-openai
```

2 changes: 1 addition & 1 deletion sources/platform/integrations/ai/haystack.md
@@ -19,7 +19,7 @@ The last step will be to retrieve the most similar documents.
This example uses the Apify-Haystack Python integration published on [PyPi](https://pypi.org/project/apify-haystack/).
Before we start with the integration, we need to install all dependencies:

```bash
```shell
pip install apify-haystack haystack-ai
```

4 changes: 3 additions & 1 deletion sources/platform/integrations/ai/langchain.md
@@ -20,7 +20,9 @@ If you prefer to use JavaScript, you can follow the [JavaScript LangChain docum

Before we start with the integration, we need to install all dependencies:

`pip install langchain langchain-openai langchain-apify`
```shell
pip install langchain langchain-openai langchain-apify
```

After successful installation of all dependencies, we can start writing code.

4 changes: 2 additions & 2 deletions sources/platform/integrations/ai/langflow.md
@@ -43,13 +43,13 @@ Langflow can either be installed locally or used in the cloud. The cloud version

First, install the Langflow platform using Python package and project manager [uv](https://docs.astral.sh/uv/):

```bash
```shell
uv pip install langflow
```

After installing Langflow, you can start the platform:

```bash
```shell
uv run langflow run
```

2 changes: 1 addition & 1 deletion sources/platform/integrations/ai/langgraph.md
@@ -32,7 +32,7 @@ This guide will demonstrate how to use Apify Actors with LangGraph by building a

- **Python packages**: You need to install the following Python packages:

```bash
```shell
pip install langgraph langchain-apify langchain-openai
```

4 changes: 3 additions & 1 deletion sources/platform/integrations/ai/llama.md
@@ -22,7 +22,9 @@ You can integrate Apify dataset or Apify Actor with LlamaIndex.

Before we start with the integration, we need to install all dependencies:

`pip install apify-client llama-index-core llama-index-readers-apify`
```shell
pip install apify-client llama-index-core llama-index-readers-apify
```

After successfully installing all dependencies, we can start writing Python code.

2 changes: 1 addition & 1 deletion sources/platform/integrations/ai/milvus.md
@@ -66,7 +66,7 @@ Another way to interact with Milvus is through the [Apify Python SDK](https://do

1. Install the Apify Python SDK by running the following command:

```py
```shell
pip install apify-client
```

4 changes: 2 additions & 2 deletions sources/platform/integrations/ai/openai_assistants.md
@@ -29,7 +29,7 @@ The image below provides an overview of the entire process:

Before we start creating the assistant, we need to install all dependencies:

```bash
```shell
pip install apify-client openai
```

@@ -260,7 +260,7 @@ For more information on automating this process, check out the blog post [How we

Before we start, we need to install all dependencies:

```bash
```shell
pip install apify-client openai
```

4 changes: 3 additions & 1 deletion sources/platform/integrations/ai/pinecone.md
@@ -74,7 +74,9 @@ Another way to interact with Pinecone is through the [Apify Python SDK](https://

1. Install the Apify Python SDK by running the following command:

`pip install apify-client`
```shell
pip install apify-client
```

1. Create a Python script and import all the necessary modules:

2 changes: 1 addition & 1 deletion sources/platform/integrations/ai/qdrant.md
@@ -68,7 +68,7 @@ Another way to interact with Qdrant is through the [Apify Python SDK](https://do

1. Install the Apify Python SDK by running the following command:

```py
```shell
pip install apify-client
```
