pybe.benchmark#

class pybe.benchmark.Benchmark(benchmark_csv_file_path: str | None = None)[source]#

Bases: object

Benchmark any Python function

pybe.Benchmark allows you to:

  • benchmark any Python function (with vectors of real numbers as output)

  • store the results in a csv (default) or Excel file, and

  • read from previous benchmark results.

How it works: specify a list of inputs and apply a given function to each of those inputs a specified number of times.
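
A minimal end-to-end sketch of this workflow (the function, its output key, and the benchmark name below are illustrative, not part of the API):

>>> from pybe.benchmark import Benchmark
>>> def square(x: float):
...     return {"square": x * x}  # one dictionary entry per named output
>>> benchmark = Benchmark()
>>> benchmark(function=square, inputs=[1.0, 2.0, 3.0],
...           name="square-benchmark", number_runs=5)
>>> benchmark.to_csv(name="square-benchmark")  # optionally persist the results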

__call__(function: Callable[[...], Dict[str, float]], inputs: List[float | str], name: str, number_runs: int = 10, store: bool = True, parallel: bool = False)[source]#

Benchmark a function

Parameters:
  • function (Callable[..., Dict[str, float]]) – function to be benchmarked, which returns a dictionary whose keys are the output names (strings) and whose values are the corresponding function values (floats)

  • inputs (List[Union[str, float]]) – inputs on which the function is to be benchmarked, stored as a list of strings or floats

  • name (str) – name of the benchmark

  • number_runs (int) – number of runs for each input

  • store (bool) – if true, store the results of the benchmark in a csv file

  • parallel (bool) – if true, run the benchmark in parallel (using multiprocessing)

Examples

Initialize a Benchmark instance.

>>> benchmark = Benchmark()

Define the function to be benchmarked. This function must take a single argument (float or string) and return a dictionary where each key represents one output.

>>> def test_function(i: int):
...     return {"value": i}

Define the list of inputs.

>>> inputs = [1, 2, 3]

Specify the number of runs.

>>> number_runs = 3

Run the benchmark.

>>> benchmark(function=test_function,
...           name="test-benchmark",
...           inputs=inputs,
...           number_runs=number_runs)
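
Once the run has finished, the aggregated results can be inspected through the result, means, and std properties described below, for example:

>>> print(benchmark.result)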

property inputs: List[float | str]#

Return the list of inputs of the benchmark

Returns:

inputs of the benchmark

Return type:

List[Union[str, float]]
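
A usage sketch, assuming the benchmark from the __call__ example above has been run:

>>> benchmark.inputs  # expected to return the original input list, e.g. [1, 2, 3]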

property means: DataFrame#

Return the means of the outputs as pandas DataFrame

Returns:

means of the benchmark

Return type:

pd.DataFrame
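
A usage sketch, continuing the example above (the exact column layout of the DataFrame is not specified here):

>>> print(benchmark.means)  # mean of each output, aggregated over the runs per input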

property name: str#

Return the name of the benchmark

Returns:

name of the benchmark

Return type:

str

property name_outputs: List[str]#

Return the list of names of outputs of the benchmark

Returns:

name of outputs

Return type:

List[str]
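
A usage sketch, continuing the example above (the expected value assumes the test_function defined there):

>>> benchmark.name_outputs  # expected: ['value']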

read_from_csv(benchmark_csv_file_path: str)[source]#

Read previous benchmark results from the corresponding csv file and store them in this instance

Parameters:

benchmark_csv_file_path (str) – path of the benchmark csv file

Examples

>>> benchmark = Benchmark() # initialize benchmark instance
>>> benchmark.read_from_csv(benchmark_csv_file_path="./benchmark.csv") # read results
>>> print(benchmark.result) # print result

property result: DataFrame#

Return the result of the benchmark as a pandas DataFrame

Returns:

result of the benchmark

Return type:

pd.DataFrame

return_outputs(input: float)[source]#

property std: DataFrame#

Return the standard deviation of the outputs as pandas DataFrame

Returns:

std of the benchmark

Return type:

pd.DataFrame
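
As with means, a usage sketch continuing the example above:

>>> print(benchmark.std)  # standard deviation of each output, aggregated over the runs per input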

to_csv(name: str = 'benchmark')[source]#

Save the results to a csv file (the file is written to the directory of the calling script)

Parameters:

name (str) – name of the benchmark
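
A usage sketch; the exact output file name is an assumption based on the name argument:

>>> benchmark.to_csv(name="test-benchmark")  # presumably writes test-benchmark.csv next to the calling script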

to_excel(name: str = 'benchmark')[source]#

Save the results to an Excel (xlsx) file (the file is written to the directory of the calling script)

Parameters:

name (str) – name of the benchmark
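
A usage sketch; the exact output file name is an assumption based on the name argument:

>>> benchmark.to_excel(name="test-benchmark")  # presumably writes test-benchmark.xlsx next to the calling script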