This repository provides an example implementation of a tool with a single feature, SEARCH-FS, which performs the following:

- given a request payload,
- accesses a path to a directory within a file system,
- iterates recursively through all file objects within the given directory,
- notifies an instance of RabbitMQ for each file found.
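Conceptually, the feature boils down to a recursive walk plus a per-file notification. A minimal sketch in Python — purely illustrative, with a stand-in `notify` callback where the real tool publishes to RabbitMQ:

```python
import tempfile
from pathlib import Path
from typing import Callable

def search_fs(root: str, notify: Callable[[Path], None]) -> int:
    """Recursively visit every file under `root`, calling `notify` for each.

    Returns the number of files found.
    """
    count = 0
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            notify(path)  # the real tool publishes a message to RabbitMQ here
            count += 1
    return count

# Demo against a throwaway directory, with a list as the stand-in notifier:
with tempfile.TemporaryDirectory() as root:
    (Path(root) / "sub").mkdir()
    (Path(root) / "a.txt").write_text("a")
    (Path(root) / "sub" / "b.txt").write_text("b")
    found: list[Path] = []
    total = search_fs(root, found.append)
```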
- python 3.11–3.14 (currently tested with v3.14)
- justfile tool version ^1.4.*
- (optional) Postman. Cf. the wiki for a prepared environment + collection.
NOTE: We primarily use Docker for local testing, in particular to spin up a RabbitMQ server.
Call

```bash
just setup
```

and modify, if so desired, the values in

- `.env`
- `.env.docker-vars`
- `.env.docker-secrets`
For demonstration purposes all values should be fine. One may merely need or wish to modify

```
PATH_LOGS
PYTHON_PATH
```

as well as the HTTP/RABBIT settings.
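These files hold plain KEY=VALUE pairs. As a rough sketch of how such values are consumed (illustrative only — the project may well use a library such as python-dotenv instead; the variable names are the ones mentioned in this README):

```python
def load_env(text: str) -> dict[str, str]:
    """Minimal .env parser: KEY=VALUE lines; '#' comments and blanks ignored."""
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip("'\"")
    return env

# Sample contents mirroring the variables mentioned above:
sample = """
# local settings
PATH_LOGS=logs
PYTHON_PATH=/usr/bin/python3
HTTP_IP=127.0.0.1
HTTP_PORT_RABBIT_WEB=15672
"""
env = load_env(sample)
```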
The main code base can be run in three modes:

- via the CLI
- via the API
- via the API within Docker
To make local docker-less execution possible, call

```bash
just build
```

which assumes that the .env file has been correctly set up. One can then either call

```bash
just start-server
```

to start the server (which can be interacted with via Postman and/or cURL commands) or else use the CLI:

```bash
just run --help    # displays usage
just run version   # displays version
just run SEARCH-FS # runs the main feature
```

For full integration (or at least mocking), activate the Docker engine and carry out
```bash
just docker-build # builds the application
# optional
just docker-qa    # performs qa on the docker image of the main code base
```

to build the application (once), then use the following commands to start/stop the server within Docker:

```bash
just docker-start-server
just docker-stop-server
```

Activate the Docker engine and carry out
```bash
just docker-build-queue # builds the container for RabbitMQ
```

The following commands start/stop the queue:

```bash
just docker-start-queue
just docker-stop-queue
```

Either as a background process or in a separate terminal, start the queue (see above). Now call

```bash
just docker-register-users
```

which creates an admin and a guest user.
Open the browser at the following address (see values in your .env file):

```
http://${HTTP_IP}:${HTTP_PORT_RABBIT_WEB}
```

and log in using

```
Username: ${HTTP_ADMIN_USER_RABBIT}
Password: ${HTTP_ADMIN_PASSWORD_RABBIT}
```

In the code base we use

```
Username: ${HTTP_GUEST_USER_RABBIT}
Password: ${HTTP_GUEST_PASSWORD_RABBIT}
```
To generate mock data one may, for example, call

```bash
rm -rf "data/example"
just create-mocks \
    --path "data/example" \
    --max-depth 10 \
    --max-folders 100 \
    --max-files 1000
```

which creates a fresh relative directory data/example,
consisting of at most roughly 1000 files and 100 folders,
and not exceeding a depth of 10.
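The recipe's implementation is not shown in this README, but a simplified, self-contained sketch of what such a mock generator might do (hypothetical names and logic, bounded by the same three limits):

```python
import random
import tempfile
from pathlib import Path

def create_mocks(path: str, max_depth: int, max_folders: int,
                 max_files: int, seed: int = 0) -> tuple[int, int]:
    """Create a random directory tree bounded by the given limits.

    Returns (folders created, files created).
    """
    rng = random.Random(seed)
    root = Path(path)
    root.mkdir(parents=True, exist_ok=True)
    folders = [root]
    for n in range(max_folders):
        # only folders strictly above max_depth may receive a child folder
        eligible = [f for f in folders
                    if len(f.relative_to(root).parts) < max_depth]
        child = rng.choice(eligible) / f"folder_{n}"
        child.mkdir()
        folders.append(child)
    for i in range(max_files):
        (rng.choice(folders) / f"file_{i}.txt").write_text(f"mock content {i}")
    return len(folders) - 1, max_files

# Demo against a throwaway directory:
with tempfile.TemporaryDirectory() as tmp:
    n_folders, n_files = create_mocks(f"{tmp}/example",
                                      max_depth=3, max_folders=5, max_files=20)
    total_files = sum(1 for p in Path(f"{tmp}/example").rglob("*")
                      if p.is_file())
```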
Fill in setup/requests.yaml as follows:

```yaml
label: 'Mock example'
options:
  reset-queue: true # default is false - whether to clear (sub)queue for task at start of run
  # skip-empty: true # false (default) => includes empty files; true => skips them
  max-depth: 100 # limits depth of folder structure
  max-items: 1_000_000 # limits number of items that can be logged
  max-duration: 00:05:00 # limits maximum computation time
data:
  # the location of the mock directory
  inputs:
    location: OS
    path: 'data/example'
```
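Once loaded (e.g. via PyYAML), this request is just a nested mapping. A sketch of how the options block might be interpreted — in particular the HH:MM:SS duration limit; the parsing code here is illustrative, not the project's actual implementation:

```python
from datetime import timedelta

def parse_duration(text: str) -> timedelta:
    """Parse an HH:MM:SS string such as '00:05:00' into a timedelta."""
    hours, minutes, seconds = (int(part) for part in text.split(":"))
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

# The request above, as the dict a YAML loader would produce:
request = {
    "label": "Mock example",
    "options": {
        "reset-queue": True,
        "max-depth": 100,
        "max-items": 1_000_000,
        "max-duration": "00:05:00",
    },
    "data": {"inputs": {"location": "OS", "path": "data/example"}},
}

options = request["options"]
budget = parse_duration(options["max-duration"])
skip_empty = options.get("skip-empty", False)  # documented default: false
```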
- Start the queue (see above).
- Ensure that the queue users are registered (see above).
- Use the CLI commands or the API with/without Docker (see above).
- Run the feature:
  - For the CLI option, call

    ```bash
    just run SEARCH-FS
    ```

  - For the FastAPI option (with or without Docker), make a POST call (e.g. in Postman) against the endpoint `/feature/search-fs` using the JSON body

    ```json
    { "ref": { "location": "OS", "path": "setup/requests.yaml" } }
    ```

    The file reference in this body can of course be a JSON file located anywhere on your system.

Some simple example cases can be found in the demo folder.
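For reference, the POST call in the last step can also be made from Python using only the standard library. The host and port below are placeholders (match them to the HTTP settings in your .env file), and the actual send is commented out so the snippet does not require a running server:

```python
import json
import urllib.request

# Endpoint and body as described above; host/port are illustrative.
url = "http://127.0.0.1:8000/feature/search-fs"
body = {"ref": {"location": "OS", "path": "setup/requests.yaml"}}

req = urllib.request.Request(
    url,
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With the server running (see `just start-server` above), send it with:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status, resp.read().decode())
```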