Immersed Boundary Method Implementation with Examples #115
Conversation
Pull Request Overview
This PR implements Immersed Boundary Method (IBM) support with post-processing capabilities and example applications. The implementation enables simulation of complex solid objects immersed in fluid domains without requiring body-conforming meshes.
Key Changes:
- Added IBM stepper with iterative force correction scheme using Warp kernels
- Implemented post-processing operators for vorticity and Q-criterion visualization
- Added utility functions for USD export, mesh handling, and force calculation
- Created example applications demonstrating IBM for various geometries (sphere, airfoil, car, wind turbine)
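To illustrate the central-difference vorticity computation mentioned in the post-processing changes, here is a plain-Python 2D sketch (illustrative only, not xlb's actual Warp operator; all names here are hypothetical):

```python
def vorticity_z(u, v, dx=1.0):
    """z-component of vorticity, w = dv/dx - du/dy, on interior grid points.

    u and v are nested lists indexed [i][j] (i along x, j along y); boundary
    points are left at zero. Uses second-order central differences, as the PR
    describes; this is a sketch, not the actual implementation.
    """
    ni, nj = len(u), len(u[0])
    w = [[0.0] * nj for _ in range(ni)]
    for i in range(1, ni - 1):
        for j in range(1, nj - 1):
            dvdx = (v[i + 1][j] - v[i - 1][j]) / (2.0 * dx)
            dudy = (u[i][j + 1] - u[i][j - 1]) / (2.0 * dx)
            w[i][j] = dvdx - dudy
    return w

# Sanity check: rigid-body rotation (u, v) = (-y, x) has uniform vorticity 2.
u = [[-float(j) for j in range(3)] for i in range(3)]
v = [[float(i) for j in range(3)] for i in range(3)]
print(vorticity_z(u, v)[1][1])  # -> 2.0
```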
Reviewed Changes
Copilot reviewed 18 out of 20 changed files in this pull request and generated 7 comments.
Show a summary per file
| File | Description |
|---|---|
| xlb/operator/stepper/ibm_stepper.py | New IBM stepper with Peskin weighting and iterative force correction |
| xlb/operator/postprocess/vorticity.py | Vorticity computation using central differences |
| xlb/operator/postprocess/q_criterion.py | Q-criterion field calculation for vortex identification |
| xlb/operator/postprocess/grid_to_point.py | Trilinear interpolation from grid to arbitrary points |
| xlb/helper/ibm_helper.py | Mesh transformation, Voronoi area calculation, and IBM field setup |
| xlb/utils/utils.py | USD export, colorization, and visualization utilities |
| xlb/default_config.py | Added Warp max_unroll configuration |
| examples/ibm/*.py | Four example applications with drag/lift coefficient validation |
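For context on the `grid_to_point` operator listed above, trilinear interpolation can be sketched in plain Python as follows (an illustrative sketch of the operation, not xlb's Warp-kernel implementation):

```python
import math

def trilinear(grid, x, y, z):
    """Interpolate a scalar field at (x, y, z) from its 8 surrounding nodes.

    grid is a nested list indexed grid[i][j][k]. Each corner contributes
    with a weight equal to the product of its 1D distances to the point.
    """
    i0, j0, k0 = int(math.floor(x)), int(math.floor(y)), int(math.floor(z))
    fx, fy, fz = x - i0, y - j0, z - k0
    value = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((fx if di else 1.0 - fx)
                     * (fy if dj else 1.0 - fy)
                     * (fz if dk else 1.0 - fz))
                value += w * grid[i0 + di][j0 + dj][k0 + dk]
    return value

# Trilinear interpolation reproduces linear fields exactly:
grid = [[[i + 2 * j + 3 * k for k in range(2)] for j in range(2)] for i in range(2)]
print(trilinear(grid, 0.5, 0.5, 0.5))  # -> 3.0
```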
    DefaultConfig.default_precision_policy = default_precision_policy

    if default_backend == ComputeBackend.WARP:
        import warp as wp
Copilot AI (Nov 13, 2025)
The wp.max_unroll = 32 setting should be documented with a comment explaining why this specific value is chosen and what impact it has on compilation/performance. This magic number lacks context for future maintainers.
Suggested change:

    import warp as wp
    # Set the maximum loop unroll factor for WARP kernels.
    # The value 32 is chosen as it matches the typical warp size on NVIDIA GPUs,
    # which can improve performance by aligning with hardware execution units.
    # Increasing this value may lead to longer compilation times and larger code size,
    # while decreasing it could reduce performance. Adjust only if you understand the tradeoffs.
    vertex_areas_wp = wp.zeros(num_vertices, dtype=wp.float32)

    # Launch the kernel
    wp.launch(kernel=voronoi_area_kernel, dim=num_faces, inputs=[faces_wp, vertices_wp, face_areas_wp, vertex_areas_wp], device="cuda")
Copilot AI (Nov 13, 2025)
The device is hardcoded to 'cuda'. This should be configurable or use the default device to support CPU-only systems and multi-GPU setups. Consider adding a device parameter to the function.
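A minimal sketch of the suggested refactor (function and helper names are hypothetical; in real code the fallback would come from Warp's runtime default device rather than the hardcoded string used here):

```python
def default_device():
    # Stand-in for querying the runtime's default device; with Warp this
    # would query the library rather than return a constant. Hypothetical.
    return "cpu"

def resolve_launch_device(device=None):
    """Pick the device for a kernel launch.

    An explicit argument wins; otherwise fall back to the runtime default
    instead of hardcoding "cuda", so CPU-only and multi-GPU setups both work.
    """
    return device if device is not None else default_device()

print(resolve_launch_device())          # -> cpu
print(resolve_launch_device("cuda:1"))  # -> cuda:1
```

The resolved value would then be threaded through to the launch call in place of the hardcoded string, e.g. `wp.launch(..., device=resolved)`.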
xlb/helper/ibm_helper.py (Outdated)
    # First normalize the mesh to the desired LBM length
    # max_length = mesh.extents.max()
    # normalize_scale = max_lbm_length / max_length
    # mesh.apply_scale(normalize_scale)

    # # Apply additional transformations
    # mesh = transform_mesh(mesh, translation=translation, rotation=rotation, rotation_order=rotation_order, scale=scale)
Copilot AI (Nov 13, 2025)
Large blocks of commented-out code should be removed. If this normalization logic might be needed in the future, it should be documented in a comment rather than left as dead code.
Suggested change:

    # Mesh normalization to max_lbm_length and additional transformations were previously performed here.
    # If normalization or transformation is needed in the future, consider:
    #     max_length = mesh.extents.max()
    #     normalize_scale = max_lbm_length / max_length
    #     mesh.apply_scale(normalize_scale)
    #     mesh = transform_mesh(mesh, translation=translation, rotation=rotation, rotation_order=rotation_order, scale=scale)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
hsalehipour left a comment
Looks great. I already communicated my comments and they have been applied.
Contributing Guidelines
Description
Type of change
How Has This Been Tested?
Linting and Code Formatting
Make sure the code follows the project's linting and formatting standards. This project uses Ruff for linting.
To run Ruff, execute the following command from the root of the repository:

    ruff check .