ihalhashem/rcs-simulation-benchmarks

RCS Simulation Benchmarks

This repository documents a validated workflow for radar cross section (RCS) simulation of airborne platforms using Ansys HFSS (Hybrid IE, SBR, and SBR+). The primary public benchmark in this repo is a DJI Phantom 3 quadcopter model, where simulated RCS patterns and distributions are compared against published measurement/simulation results. The goal is reproducibility: clear solver settings, geometry preparation steps, and runtime/accuracy trade-offs.

In addition to the public Phantom 3 benchmark, similar characterization was performed for other airframes (e.g., a low-observable bomber-class platform and a MALE ISR/UCAV-class platform). Those additional results are not published here and are available upon request.

Contents: Highlights · Scope · DJI Phantom 3 · HFSS Setup · Results · Literature Comparison · Frequency Sweep · Notes


Highlights

  • End-to-end HFSS RCS pipeline: CAD cleanup → material assignment → illumination/scan setup → solver selection → sweep → post-processing.
  • Two solver configurations: Basic Hybrid IE + SBR vs Full SBR+ (Advanced) with explicit runtime implications.
  • Validation mindset: sanity-check geometry/pose conventions and compare patterns against published results.
  • Deliverable-ready outputs: polar cuts, sweep visuals (2D/3D), and summary figures.
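The pipeline stages above can be sketched as a plain Python driver. Every function and stage name here is hypothetical scaffolding for illustration, not an HFSS or PyAEDT API:

```python
# Hypothetical pipeline skeleton: each stage is a placeholder label,
# not a real HFSS/PyAEDT call.

def run_rcs_pipeline(cad_path, solver="SBR+", freq_ghz=10.0):
    """Run the RCS workflow stages in order; returns the stage names executed."""
    stages = [
        "cad_cleanup",          # heal gaps, drop internal detail
        "material_assignment",  # PEC for this benchmark
        "illumination_setup",   # monostatic plane wave, HH polarization
        "solver_selection",     # Hybrid IE + SBR, or full SBR+
        "sweep",                # angular and/or frequency sweep
        "post_processing",      # polar cuts, dBsm conversion, figures
    ]
    executed = []
    for stage in stages:
        # In a real script, each stage would drive the HFSS project here.
        executed.append(stage)
    return executed
```

Keeping the stage order explicit in a driver like this is what makes the runs repeatable across airframes.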

Simulating diverse RCS scenarios

This work started as a hands-on exploration of how different physical and numerical choices influence RCS outcomes. There isn't one specific build to inspect, so the goal is generality: develop a repeatable workflow and validate it against a public benchmark before moving to more complex airframes.


DJI Phantom 3 benchmark

Real target vs CAD model (public benchmark platform)

CAD model from: https://grabcad.com/library/dji-phantom3-1

CAD imported into HFSS + model overview demo

The Phantom 3 model is imported into HFSS and used as the first RCS case. A short rotation/overview demo is included to sanity-check geometry orientation and pose.

Demo GIF: media/phantom3_model_overview.gif

Why this model

This particular platform is used as a starting point for ease of comparison: a peer-reviewed reference already analyzes the same target and provides measured data (anechoic chamber) along with a disclosed simulation configuration. That enables two validations: (1) verify that the simulation workflow produces the expected behavior, and (2) cross-check the simulated values against measured ones.

Material assumption used here (and why)

For this benchmark, the target is modeled as PEC as a deliberate simplification to keep the first public validation focused on solver behavior, geometry/orientation sanity, and reproducibility. For higher-fidelity work on real platforms, material assignment is matched to reality as far as practical (plastics, composites, metals, radomes, etc.).
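One quick sanity check behind the PEC simplification for metallic parts: at radar frequencies the skin depth in a good conductor is sub-micron, so fields barely penetrate and a perfect-conductor boundary is a close approximation (the plastic shell of the Phantom 3 is, of course, a separate modeling question). A minimal worked calculation, assuming copper at 10 GHz:

```python
import math

def skin_depth(freq_hz, conductivity_s_per_m, mu_r=1.0):
    """Skin depth delta = sqrt(2 / (omega * mu * sigma)), in meters."""
    mu0 = 4e-7 * math.pi
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu_r * mu0 * conductivity_s_per_m))

# Copper (sigma = 5.8e7 S/m) at 10 GHz: delta is roughly 0.66 micrometers,
# far thinner than any structural part of the airframe.
delta_cu = skin_depth(10e9, 5.8e7)
```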


HFSS setup: solvers and excitation

Monostatic HH RCS + solver configurations

All Phantom 3 simulations here are monostatic HH RCS using two hybrid configurations:

  • Basic: Hybrid IE + SBR (no creeping waves / no wedge correction)
  • Advanced: Full SBR+ with scattering features enabled, using a balanced speed–accuracy tradeoff
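The two configurations can be captured as explicit settings records so runs stay reproducible. The field names below are this repo's shorthand for the options in question, not literal HFSS UI labels:

```python
# Shorthand records of the two solver configurations used for the Phantom 3 runs.
# Keys are descriptive labels for this README, not HFSS option names.
SOLVER_CONFIGS = {
    "basic": {
        "engine": "Hybrid IE + SBR",
        "creeping_waves": False,
        "wedge_correction": False,  # edge-diffraction correction off
    },
    "advanced": {
        "engine": "Full SBR+",
        "creeping_waves": True,
        "wedge_correction": True,
    },
}

def enabled_features(name):
    """Return the scattering features switched on for a given configuration."""
    cfg = SOLVER_CONFIGS[name]
    return sorted(k for k, v in cfg.items() if v is True)
```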

Incident plane wave + HH polarization setup


Results and runtime tradeoffs

Position sanity + basic SBR results

Position sanity + full SBR+ (advanced) results

Runtime impact (Basic vs Advanced)

The advanced SBR+ configuration is dramatically slower in this benchmark (≈ 72.55× in the recorded run), which matters when you scale to frequency sweeps, finer angular sampling, or more complex airframes.
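That factor compounds quickly once sweeps enter the picture. A back-of-envelope projection, assuming runtime scales roughly linearly with the number of frequency and angle samples (a simplification; actual SBR+ scaling depends on geometry and settings):

```python
def projected_runtime_hours(basic_minutes_per_point, slowdown, n_freq, n_angles):
    """Estimate advanced-config wall time for a full sweep, in hours."""
    total_points = n_freq * n_angles
    return basic_minutes_per_point * slowdown * total_points / 60.0

# Illustrative numbers: if one basic-config point takes 2 minutes, the recorded
# ~72.55x slowdown turns an 11-frequency x 72-angle sweep into a weeks-long job.
hours = projected_runtime_hours(2.0, 72.55, 11, 72)
```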


Literature comparison (patterns + distributions)

Reference plots vs solver variants

“The paper and I” — numeric sanity and shape agreement

The intent is not to claim perfect agreement from a simplified PEC benchmark, but to validate that: (1) coordinate conventions are correct, (2) aspect-dependent behavior looks physically plausible, and (3) solver choices move results in expected directions.
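For the shape-agreement check, two simple metrics work well on dBsm pattern cuts: the mean offset (captures the absolute-level bias that a PEC simplification tends to introduce) and the residual shape error after removing that offset. A minimal sketch with plain lists:

```python
def pattern_agreement(sim_dbsm, ref_dbsm):
    """Return (mean_offset_db, shape_rmse_db) for two equal-length dBsm cuts."""
    assert len(sim_dbsm) == len(ref_dbsm)
    n = len(ref_dbsm)
    # Constant level bias between simulation and reference.
    offset = sum(s - r for s, r in zip(sim_dbsm, ref_dbsm)) / n
    # Shape error: RMSE after removing that constant offset.
    rmse = (sum((s - r - offset) ** 2
                for s, r in zip(sim_dbsm, ref_dbsm)) / n) ** 0.5
    return offset, rmse
```

A small offset with low shape RMSE is exactly the "right shape, expected level bias" outcome a PEC benchmark should produce.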


Frequency sweep demo (Basic SBR)

A frequency sweep is shown for the Phantom 3 using the Basic SBR configuration, including both 3D and 2D views.
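Post-processing sweep output mostly comes down to converting linear RCS (m²) to dBsm and laying out the sweep grid; two minimal helpers, assuming the solver exports σ in square meters:

```python
import math

def to_dbsm(sigma_m2):
    """Convert linear RCS in m^2 to dBsm: 10 * log10(sigma)."""
    return 10.0 * math.log10(sigma_m2)

def sweep_points(f_start_ghz, f_stop_ghz, n):
    """Linearly spaced frequency points for a sweep, in GHz."""
    step = (f_stop_ghz - f_start_ghz) / (n - 1)
    return [f_start_ghz + i * step for i in range(n)]

# 1 m^2 is 0 dBsm by definition; a small UAV typically sits well below that.
zero_ref = to_dbsm(1.0)
```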


Notes & limitations

  • The public Phantom 3 benchmark intentionally uses simplifying assumptions (e.g., PEC) to validate workflow mechanics and solver behavior.
  • For real platforms, material assignment, small details, and configuration fidelity matter and should be matched to the target and the measurement setup. This was done for the other airframes; those results are available upon request.

Benchmark reference (measurements + simulation on the DJI Phantom family):
    P. J. Speirs, A. Murk, M. Renker, P. Wellig, and U. Aulenbacher,
    “High-detail simulations of consumer-grade UAV RCS signatures, and comparisons against measurements,”
    in Drone Detectability: Modelling the Relevant Signature (NATO STO Meeting Proceedings, STO-MP-MSG-SET-183),
    Virtual (WebEx), 27 April 2021, Paper MP-MSG-SET-183-03, pp. 1–26.
    PDF: https://publications.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-MSG-SET-183/MP-MSG-SET-183-03.pdf

Contact / Data Access

Simulation files are available upon request.
Email: rawashdeh758@gmail.com.
