diff --git a/tutorials/introduction-to-solverbenchmark/Project.toml b/tutorials/introduction-to-solverbenchmark/Project.toml
index c032d60..0936528 100644
--- a/tutorials/introduction-to-solverbenchmark/Project.toml
+++ b/tutorials/introduction-to-solverbenchmark/Project.toml
@@ -5,6 +5,7 @@
 Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
 PyPlot = "d330b81b-6aea-500a-939a-2ce795aea3ee"
 Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
 SolverBenchmark = "581a75fa-a23a-52d0-a590-d6201de2218a"
+SolverCore = "ff4d7338-4cf1-434d-91df-b86cb86fb843"
 
 [compat]
@@ -12,3 +13,4 @@ DataFrames = "1.3.4"
 Plots = "1.31.7"
 PyPlot = "2.10.0"
 SolverBenchmark = "0.5.3"
+SolverCore = "0.3"
diff --git a/tutorials/introduction-to-solverbenchmark/index.jmd b/tutorials/introduction-to-solverbenchmark/index.jmd
index df2099b..3d7dc66 100644
--- a/tutorials/introduction-to-solverbenchmark/index.jmd
+++ b/tutorials/introduction-to-solverbenchmark/index.jmd
@@ -221,3 +221,25 @@
 Here is a useful tutorial on how to use the benchmark with specific solver:
 [Run a benchmark with OptimizationProblems](https://jso.dev/OptimizationProblems.jl/dev/benchmark/)
 The tutorial covers how to use the problems from `OptimizationProblems` to run a benchmark for unconstrained optimization.
+
+### Handling `solver_specific` in stats
+If a solver's `GenericExecutionStats` contains a `solver_specific`
+dictionary, a column is created in the per-solver `DataFrame` for
+each key in that dictionary.
+
+Here is an example showing how to set a solver-specific flag and then access it for tabulation:
+```julia
+using NLPModelsTest, DataFrames, SolverCore, SolverBenchmark
+
+function newton(nlp)
+  stats = GenericExecutionStats(nlp)
+  set_solver_specific!(stats, :isConvex, true)
+  return stats
+end
+
+solvers = Dict(:newton => newton)
+problems = [NLPModelsTest.BROWNDEN()]
+stats = bmark_solvers(solvers, problems)
+```
+
+After the run, the `:isConvex` values appear as an `isConvex` column in `stats[:newton]`.