This repository contains a multi-layer test automation framework for a network monitoring application. It covers API, UI, integration, database, and optional network-device validation in one project.
- Backend: FastAPI + MongoDB
- Frontend: React + Material-UI
- Test layers: API, UI, integration, database, optional network
- Tooling: pytest, Playwright, requests, PyATS, GitHub Actions, Allure
- Clean fixture-driven test setup in `tests/conftest.py`
- Page Object Model in `pages/`
- Data-driven and schema validation coverage
- Environment profiles (`dev`, `qa`, `staging`) via `--env`
- CI artifacts + published QA dashboard
- Python 3.11+
- Node.js 22+ (see `frontend/package.json` for the tested range)
- Docker & Docker Compose
- Git
```bash
# Clone the repository
git clone https://github.com/elvis-b/network-automation-testing-framework.git
cd network-automation-testing-framework

# Start all services
docker-compose up -d

# Verify services are running
docker-compose ps

# Access the application
# Frontend: http://localhost:3001
# Backend API: http://localhost:5001/api
# API Docs: http://localhost:5001/docs
```

The UI calls the API through relative `/api` URLs. The Vite dev server proxies `/api` to your backend (see `frontend/vite.config.js`).
- Default proxy target: `http://localhost:5000` (matches a local `uvicorn` on port 5000).
- If the API is on another origin (e.g. Docker maps 5001→5000), set `VITE_DEV_PROXY_TARGET` before `npm run dev` (see `env.example`).
```bash
# Clone and setup
git clone https://github.com/elvis-b/network-automation-testing-framework.git
cd network-automation-testing-framework

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install Python dependencies
pip install -r requirements.txt

# Install Playwright browsers (add --with-deps on Linux if install fails)
playwright install chromium firefox

# Install frontend dependencies
cd frontend && npm install && cd ..

# Start MongoDB (requires Docker)
docker run -d -p 27017:27017 --name mongodb mongo:7
```

```bash
# Terminal 1 — backend (pick a port and match VITE_DEV_PROXY_TARGET if not 5000)
cd backend && uvicorn app.main:app --reload --host 0.0.0.0 --port 5000

# Terminal 2 — frontend (Vite serves on http://localhost:3000)
# With Docker Compose API on host port 5001:
# export VITE_DEV_PROXY_TARGET=http://localhost:5001
cd frontend && npm run dev
```

With Docker Compose running the stack, use host URLs from the compose file (e.g. frontend 3001, API 5001) in `API_BASE_URL` / `FRONTEND_BASE_URL` for tests, and point the Vite proxy at the same API origin you use in the browser.
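One way tests can honor `API_BASE_URL` / `FRONTEND_BASE_URL` is a small resolver where explicit environment variables win over local-dev defaults. This is a sketch under that assumption; `resolve_base_urls` and the default values are hypothetical, not the framework's actual config code:

```python
# Hypothetical sketch: environment variables override local-dev defaults.
# The framework's real config module may differ.
import os

_DEFAULTS = {
    "API_BASE_URL": "http://localhost:5000/api",   # local uvicorn on port 5000
    "FRONTEND_BASE_URL": "http://localhost:3000",  # Vite dev server default
}

def resolve_base_urls(env=None):
    """Return test base URLs, letting explicit env vars win over defaults."""
    env = os.environ if env is None else env
    return {key: env.get(key, default) for key, default in _DEFAULTS.items()}
```

With Docker Compose host ports, `API_BASE_URL=http://localhost:5001/api` would then override only the API entry while the frontend default stays intact.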
```bash
# Run all tests
pytest

# Run with verbose output
pytest -v

# Run with HTML report
pytest --html=reports/report.html --self-contained-html
```

```bash
# API tests only
pytest tests/api/ -v

# UI tests only
pytest tests/ui/ -v

# Network tests only (requires DevNet access)
pytest tests/network/ -v

# Database tests
pytest tests/database/ -v

# Integration tests
pytest tests/integration/ -v
```

```bash
# Smoke tests (fast, critical path)
pytest -m smoke -v

# Smoke tests (explicit current local ports)
API_BASE_URL=http://localhost:5001/api FRONTEND_BASE_URL=http://localhost:3001 pytest -m smoke -v

# Regression tests
pytest -m regression -v

# Visual regression tests
pytest -m visual -v

# Accessibility tests
pytest -m accessibility -v

# Cross-browser tests
pytest -m crossbrowser -v

# Slow tests
pytest -m slow -v
```

```bash
# Default profile is dev
pytest -m smoke -v

# Run against QA profile
pytest -m smoke -v --env=qa

# Run against staging profile
pytest -m regression -v --env=staging

# Override specific values on top of profile
API_BASE_URL=https://custom-api.example.com/api pytest -m smoke -v --env=qa
```

Profile files live in `config/environments/`:

- `config/environments/.env.dev`
- `config/environments/.env.qa`
- `config/environments/.env.staging`
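A profile file in plain `KEY=VALUE` dotenv style could be loaded roughly as below. This is a sketch under that file-format assumption; the framework may well use python-dotenv or similar instead, and `load_profile` / `apply_profile` are hypothetical names:

```python
# Hypothetical loader for config/environments/.env.<name> files,
# assuming plain KEY=VALUE lines (the framework may use python-dotenv).
import os

def load_profile(text):
    """Parse KEY=VALUE lines, skipping blanks and # comments."""
    values = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

def apply_profile(values):
    """Apply profile values as defaults: explicit env vars still win."""
    for key, value in values.items():
        os.environ.setdefault(key, value)
```

Applying the profile before test collection is what lets `--env=qa` set `API_BASE_URL` while still allowing the `API_BASE_URL=... pytest --env=qa` override shown above.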
```bash
# Run with automatic retries for flaky tests
pytest tests/ui/ --reruns 3 --reruns-delay 1

# Generate Allure report
pytest --alluredir=reports/allure-results
allure serve reports/allure-results

# Parallel execution
pytest tests/api/ -n 4  # Run on 4 cores

# Run data-driven tests
pytest tests/api/test_data_driven.py -v

# Run schema validation tests
pytest tests/api/test_schema_validation.py -v

# Run network mocking tests
pytest tests/ui/test_network_mocking.py -v
```

```bash
# Chromium (default)
pytest tests/ui/ --browser chromium

# Firefox
pytest tests/ui/ --browser firefox

# WebKit (Safari)
pytest tests/ui/ --browser webkit

# Headed mode (visible browser)
pytest tests/ui/ --browser chromium --headed
```

The GitHub Actions workflow provides:
- Code Quality: Linting with flake8, black, isort
- Backend Tests: API + Database tests with MongoDB service
- UI Tests: Multi-browser matrix (Chromium, Firefox)
- Network Tests: PyATS tests (optional, scheduled)
- Docker Build: Verify containers build successfully
- Artifacts: Screenshots, videos, HTML reports
- Allure QA Dashboard: Consolidated trend/report publishing to GitHub Pages
```yaml
# Trigger conditions
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]
  schedule:
    - cron: "0 0 * * *"  # Nightly regression
```

This project publishes a QA dashboard from CI using Allure.
- What it shows: pass/fail trends, flaky behavior, suite breakdown, and attached artifacts.
- When it updates: on pushes/scheduled runs (PRs still keep raw artifacts).
- Why it matters: gives a clear quality signal without digging into raw logs.
- Allure dashboard: https://elvis-b.github.io/network-automation-testing-framework/
- GitHub Actions: https://github.com/elvis-b/network-automation-testing-framework/actions
```bash
pytest --alluredir=reports/allure-results
allure serve reports/allure-results
```

Static captures for the README and portfolio use live in `assets/screenshots/`.
Other files in the same folder: devices, alerts, add-device dialog, and an Allure suites view.
Elvis Bucatariu
QA Automation Engineer
- GitHub: @elvis-b
- LinkedIn: Elvis Bucatariu



