From 7690995de5ebbc15fec9284c92b39392c2aa9f05 Mon Sep 17 00:00:00 2001
From: Arnav Kapoor
Date: Thu, 30 Oct 2025 08:15:28 +0530
Subject: [PATCH 1/5] solver benchmark: specifying solver-specific options

---
 .../introduction-to-solverbenchmark/index.jmd | 31 +++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/tutorials/introduction-to-solverbenchmark/index.jmd b/tutorials/introduction-to-solverbenchmark/index.jmd
index df2099b..afe0cdd 100644
--- a/tutorials/introduction-to-solverbenchmark/index.jmd
+++ b/tutorials/introduction-to-solverbenchmark/index.jmd
@@ -221,3 +221,34 @@ p = profile_solvers(stats, costs, costnames)
 Here is a useful tutorial on how to use the benchmark with specific solver:
 [Run a benchmark with OptimizationProblems](https://jso.dev/OptimizationProblems.jl/dev/benchmark/)
 The tutorial covers how to use the problems from `OptimizationProblems` to run a benchmark for unconstrained optimization.
+
+## Handling `solver_specific` in benchmark runs
+
+`SolverBenchmark` accepts solver-specific options via the keyword argument `solver_specific` on benchmarking functions (for example, when calling the high-level benchmarking helpers such as `bmark_solvers`).
+
+- `solver_specific` is a mapping that associates a solver identifier (typically a `Symbol` or the solver name you use in the benchmark) with a dictionary of options specific to that solver implementation.
+- Those options are passed through to the solver when the run is executed and are recorded as part of the run metadata.
+- As a result, runs that differ only by solver-specific options are treated as distinct entries in the benchmark results; you can group, filter or compare them in the produced `DataFrame`s.
+
+```julia
+using SolverBenchmark, NLPModelsTest, JSOSolvers
+
+solver_specific = Dict(
+  :IPOPT => Dict("tol" => 1e-8, "max_iter" => 200),
+  :KNITRO => Dict("maxit" => 500)
+)
+
+results = bmark_solvers(solvers, problems; solver_specific = solver_specific)
+
+first(results)
+```
+
+Notes and tips:
+
+- If you pass different option sets for the same solver, make sure the keys you use
+  identify the intended solver variation unambiguously (e.g., distinct Symbols).
+- When creating tables or profiles, you can add or extract columns from the per-solver
+  DataFrames to show which solver-specific options were used for each run.
+- If you prefer the solver-specific settings to appear as explicit columns, you can
+  preprocess the DataFrames (extract the dict entries into columns) before creating
+  joined tables or profile plots.
\ No newline at end of file

From 9ad3b2b803432a6fffa66bc4b6dd970e0c87e8ca Mon Sep 17 00:00:00 2001
From: Arnav Kapoor
Date: Thu, 30 Oct 2025 17:50:19 +0530
Subject: [PATCH 2/5] Redoing index.jmd and Project.toml to fix SolverBenchmark
 tutorial

---
 Project.toml                                  |  3 ++
 .../introduction-to-solverbenchmark/index.jmd | 47 ++++++++++---------
 2 files changed, 27 insertions(+), 23 deletions(-)

diff --git a/Project.toml b/Project.toml
index b43fc31..e526f0e 100644
--- a/Project.toml
+++ b/Project.toml
@@ -5,9 +5,12 @@ version = "0.2.0"
 
 [deps]
 Colors = "5ae59095-9a9b-59fe-a467-6f913c188581"
+DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 IJulia = "7073ff75-c697-5162-941a-fcdaad2a7d2a"
 InteractiveUtils = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
+JSOSolvers = "10dff2fc-5484-5881-a0e0-c90441020f8a"
 Markdown = "d6f4376e-aef5-505a-96c1-9c027394607a"
+NLPModelsTest = "7998695d-6960-4d3a-85c4-e1bceb8cd856"
 Pkg = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
 Weave = "44d3d7a6-8a23-5bf8-98c5-b353f8df5ec9"
 YAML = "ddb6d928-2868-570f-bddf-ab3f9cf99eb6"
diff --git a/tutorials/introduction-to-solverbenchmark/index.jmd b/tutorials/introduction-to-solverbenchmark/index.jmd
index afe0cdd..27cc54c 100644
--- a/tutorials/introduction-to-solverbenchmark/index.jmd
+++ b/tutorials/introduction-to-solverbenchmark/index.jmd
@@ -223,32 +223,33 @@ Here is a useful tutorial on how to use the benchmark with specific solver:
 The tutorial covers how to use the problems from `OptimizationProblems` to run a benchmark for unconstrained optimization.
 
 ## Handling `solver_specific` in benchmark runs
+If a solver's execution-stats object contains a `solver_specific` dictionary
+(accessible as `s.solver_specific`), `solve_problems` will create columns for
+each key in that dictionary in the per-solver `DataFrame`. (Note: `bmark_solvers`
+forwards keyword arguments to each solver, so ensure your solver wrapper populates
+`s.solver_specific` if you want those columns.)
 
-`SolverBenchmark` accepts solver-specific options via the keyword argument `solver_specific` on benchmarking functions (for example, when calling the high-level benchmarking helpers such as `bmark_solvers`).
-
-- `solver_specific` is a mapping that associates a solver identifier (typically a `Symbol` or the solver name you use in the benchmark) with a dictionary of options specific to that solver implementation.
-- Those options are passed through to the solver when the run is executed and are recorded as part of the run metadata.
-- As a result, runs that differ only by solver-specific options are treated as distinct entries in the benchmark results; you can group, filter or compare them in the produced `DataFrame`s.
-
+Here is an example showing how to set a solver-specific flag and then access it for tabulation:
 ```julia
-using SolverBenchmark, NLPModelsTest, JSOSolvers
-
-solver_specific = Dict(
-  :IPOPT => Dict("tol" => 1e-8, "max_iter" => 200),
-  :KNITRO => Dict("maxit" => 500)
-)
+using NLPModelsTest, DataFrames, SolverCore, SolverBenchmark
 
-results = bmark_solvers(solvers, problems; solver_specific = solver_specific)
+function newton(nlp)
+  stats = GenericExecutionStats(nlp)
+  set_solver_specific!(stats, :isConvex, true)
+  return stats
+end
 
-first(results)
-```
+solvers = Dict(:newton => newton)
+problems = [NLPModelsTest.BROWNDEN()]
+stats = bmark_solvers(solvers, problems)
 
-Notes and tips:
+combined_stats = DataFrame(
+  name = stats[:newton].name,
+  nvars = stats[:newton].nvar,
+  convex = stats[:newton].isConvex,
+  newton_iters = stats[:newton].iter,
+)
 
-- If you pass different option sets for the same solver, make sure the keys you use
-  identify the intended solver variation unambiguously (e.g., distinct Symbols).
-- When creating tables or profiles, you can add or extract columns from the per-solver
-  DataFrames to show which solver-specific options were used for each run.
-- If you prefer the solver-specific settings to appear as explicit columns, you can
-  preprocess the DataFrames (extract the dict entries into columns) before creating
-  joined tables or profile plots.
\ No newline at end of file
+combined_stats.convex = string.(combined_stats.convex)
+pretty_stats(combined_stats)
+```
\ No newline at end of file

From f0721cd9e49b017475f48f69d4221d70f85de369 Mon Sep 17 00:00:00 2001
From: Arnav Kapoor
Date: Thu, 6 Nov 2025 06:46:28 +0530
Subject: [PATCH 3/5] Apply suggestions from code review

Co-authored-by: Tangi Migot
---
 tutorials/introduction-to-solverbenchmark/index.jmd | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/tutorials/introduction-to-solverbenchmark/index.jmd b/tutorials/introduction-to-solverbenchmark/index.jmd
index 27cc54c..6ee27ee 100644
--- a/tutorials/introduction-to-solverbenchmark/index.jmd
+++ b/tutorials/introduction-to-solverbenchmark/index.jmd
@@ -222,12 +222,10 @@ Here is a useful tutorial on how to use the benchmark with specific solver:
 [Run a benchmark with OptimizationProblems](https://jso.dev/OptimizationProblems.jl/dev/benchmark/)
 The tutorial covers how to use the problems from `OptimizationProblems` to run a benchmark for unconstrained optimization.
 
-## Handling `solver_specific` in benchmark runs
-If a solver's execution-stats object contains a `solver_specific` dictionary
-(accessible as `s.solver_specific`), `solve_problems` will create columns for
-each key in that dictionary in the per-solver `DataFrame`. (Note: `bmark_solvers`
-forwards keyword arguments to each solver, so ensure your solver wrapper populates
-`s.solver_specific` if you want those columns.)
+### Handling `solver_specific` in stats
+If a solver's GenericExecutionStats contains a `solver_specific` dictionary,
+a column is created for
+each key in that dictionary in the per-solver `DataFrame`.
 
 Here is an example showing how to set a solver-specific flag and then access it for tabulation:
 ```julia

From 9794f565cccc7bf9e76552248e6c683827e82efe Mon Sep 17 00:00:00 2001
From: Arnav Kapoor
Date: Thu, 6 Nov 2025 09:40:25 +0530
Subject: [PATCH 4/5] changes acc. to review

---
 Project.toml                                            |  5 +----
 tutorials/introduction-to-solverbenchmark/Project.toml  |  2 ++
 tutorials/introduction-to-solverbenchmark/index.jmd     | 10 ----------
 3 files changed, 3 insertions(+), 14 deletions(-)

diff --git a/Project.toml b/Project.toml
index e526f0e..8d7a681 100644
--- a/Project.toml
+++ b/Project.toml
@@ -5,12 +5,9 @@ version = "0.2.0"
 
 [deps]
 Colors = "5ae59095-9a9b-59fe-a467-6f913c188581"
-DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 IJulia = "7073ff75-c697-5162-941a-fcdaad2a7d2a"
 InteractiveUtils = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
-JSOSolvers = "10dff2fc-5484-5881-a0e0-c90441020f8a"
 Markdown = "d6f4376e-aef5-505a-96c1-9c027394607a"
-NLPModelsTest = "7998695d-6960-4d3a-85c4-e1bceb8cd856"
 Pkg = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
 Weave = "44d3d7a6-8a23-5bf8-98c5-b353f8df5ec9"
 YAML = "ddb6d928-2868-570f-bddf-ab3f9cf99eb6"
@@ -20,4 +17,4 @@ Colors = "0.12"
 IJulia = "1"
 Weave = "0.10"
 YAML = "0.4"
-julia = "1.6"
+julia = "1.6"
\ No newline at end of file
diff --git a/tutorials/introduction-to-solverbenchmark/Project.toml b/tutorials/introduction-to-solverbenchmark/Project.toml
index c032d60..0936528 100644
--- a/tutorials/introduction-to-solverbenchmark/Project.toml
+++ b/tutorials/introduction-to-solverbenchmark/Project.toml
@@ -5,6 +5,7 @@ Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
 PyPlot = "d330b81b-6aea-500a-939a-2ce795aea3ee"
 Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
 SolverBenchmark = "581a75fa-a23a-52d0-a590-d6201de2218a"
+SolverCore = "ff4d7338-4cf1-434d-91df-b86cb86fb843"
 
 [compat]
 DataFrames = "1.3.4"
@@ -11,3 +12,4 @@
 Plots = "1.31.7"
 PyPlot = "2.10.0"
 SolverBenchmark = "0.5.3"
+SolverCore = "0.3"
diff --git a/tutorials/introduction-to-solverbenchmark/index.jmd b/tutorials/introduction-to-solverbenchmark/index.jmd
index 6ee27ee..3d7dc66 100644
--- a/tutorials/introduction-to-solverbenchmark/index.jmd
+++ b/tutorials/introduction-to-solverbenchmark/index.jmd
@@ -240,14 +240,4 @@ end
 solvers = Dict(:newton => newton)
 problems = [NLPModelsTest.BROWNDEN()]
 stats = bmark_solvers(solvers, problems)
-
-combined_stats = DataFrame(
-  name = stats[:newton].name,
-  nvars = stats[:newton].nvar,
-  convex = stats[:newton].isConvex,
-  newton_iters = stats[:newton].iter,
-)
-
-combined_stats.convex = string.(combined_stats.convex)
-pretty_stats(combined_stats)
 ```
\ No newline at end of file

From a6e5aa63910b42e4c282558d620a6ab8641cd190 Mon Sep 17 00:00:00 2001
From: Arnav Kapoor
Date: Thu, 6 Nov 2025 09:41:20 +0530
Subject: [PATCH 5/5] Fix newline issue at end of Project.toml

---
 Project.toml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Project.toml b/Project.toml
index 8d7a681..b43fc31 100644
--- a/Project.toml
+++ b/Project.toml
@@ -17,4 +17,4 @@ Colors = "0.12"
 IJulia = "1"
 Weave = "0.10"
 YAML = "0.4"
-julia = "1.6"
\ No newline at end of file
+julia = "1.6"