Create and Deploy your own Optimization API — Part 2

Louis Boguchwal
10 min read · Oct 23, 2021

Timeless Models for Evolving Objectives

We’re in a world of many perspectives

Which leads to varied objectives

It can be quite a struggle

With all that to juggle

And the question…

Is what’s the directive?

In my previous blog post, I went through how to create and deploy your own optimization API from scratch. This allows us to both cheaply scale our models and connect them to websites and applications. But what if we’re approached by a colleague whose goals, though equally valid, differ from those we originally built into the fabric of our model?

The same underlying question can be answered differently, depending on our primary objective. Sometimes these objectives could even be in conflict with one another. Here are some quick examples:

  • User Growth Strategy — maximizing overall user growth vs. maximizing the growth of a particular user segment
  • Tech Infrastructure Setup — minimizing cost vs. maximizing resilience and redundancy

How often does this happen? All the time!

With this in mind, what would the optimal optimization API look like? (see what I did there? 😉)

  • One codebase — one source of truth: Maintenance, updates, and enhancements happen in one place. This is preferable to competing codebases that are unaware of each other, eventually diverging and contradicting one another.
  • Separation of concerns: Different pieces of functionality within a codebase require zero knowledge of how other pieces work, just that they do work. In this sense, pieces of functionality can be combined to “plug and play.” The idea is that one component calls another component that “magically works.”
  • Easily extensible: Small code changes are all that is necessary to extend functionality to accommodate an ever-expanding set of possible objectives. In other words, enhancements that add a new objective should be straightforward to implement.

Put simply, the optimal optimization API would be general enough to accommodate multiple use cases, but specific enough to truly address each one.

That’s EXACTLY what we’re going to build. By the end of this tutorial we’ll have an optimization API that fulfills the above three properties.

So without further ado, let’s build it!

Refresher: Implementation Details are in the Previous Blog Post

My last blog post covered how to create an optimization API and deploy it on Google Cloud Run in detail. This post takes the API one step further, and therefore glosses over specific implementation details already covered in the previous post. Specifically, the focus of this post is on the structure of the pyomo optimization scripts. The API implementation is largely the same as before.

Complete code from the previous post can be found here.

Complete code corresponding to this post can be found here.

Motivation: the Hiking Routing Problem

I just returned from a hiking trip, and it inspired a routing problem to motivate our API. Let’s say you and a friend are hiking the Appalachian Trail, stopping to camp at pre-planned sites along the way. Each day ends somewhere different from where it began, and there are multiple trails you could take from your origin to your destination.

You look at the map and turn to your friend, suggesting the shortest route (i.e., least total mileage). Your friend instead suggests the flattest route (i.e., the least elevation gain per mile, meaning not steep). In all likelihood, these are different routes.

What should you do? Who’s right?

Well, you both are; it’s just a matter of perspective.

Let’s model the hiking routing problem, accommodating both objectives. That way you and your friend can compare routes depending on the objective in question.

The Hiking Routing Problem: Optimization Models

Let’s take a look at one of the models in full.

The shortest path hiking problem is expressed as the standard shortest path linear programming formulation, which can be found here.

The concept underlying the minimum elevation change per mile problem is actually to minimize the maximum elevation change per mile. Specifically, we would like to hike a route such that the steepest uphill portion isn’t very steep.

To model this we’ll need some quick extensions to the classic shortest path formulation:

  • One variable recording the maximum value of the elevation change among selected arcs in our path, namely the maximum elevation change per mile
  • Constraints that compute the elevation change per mile for each arc, which counts toward the maximum only when that arc is selected in our path
  • Constraints that ensure the maximum elevation change per mile variable is indeed >= all elevation change per mile parameters multiplied by binary arc variables
  • Objective function: minimize the maximum elevation change per mile variable

Here’s the model in full:

Hiking path optimization model: Find the least steep path. Note, this formulation does not account for downhills (i.e., negative elevation gains).
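In standard notation, the model reads roughly as follows (this is my own transcription of the formulation described above and may differ cosmetically from the image: xᵢⱼ are the binary arc variables, eᵢⱼ is the elevation gain per mile on arc (i,j), N and A are the node and arc sets, s and t are the origin and destination, and z is the maximum elevation gain per mile):

```latex
\begin{align*}
\min \quad & z \\
\text{s.t.} \quad
  & \sum_{j:(i,j) \in A} x_{ij} \;-\; \sum_{j:(j,i) \in A} x_{ji} =
    \begin{cases}
      1 & \text{if } i = s \\
      -1 & \text{if } i = t \\
      0 & \text{otherwise}
    \end{cases} && \forall\, i \in N \\
  & z \;\ge\; e_{ij}\, x_{ij} && \forall\, (i,j) \in A \\
  & x_{ij} \in \{0, 1\} && \forall\, (i,j) \in A
\end{align*}
```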

Design Approach: Modularizing our Models

When I build optimization models, I see any and every model (and its code) as a combination of the following components, some of which are model entities, others of which are steps toward obtaining a solution:

  1. Setup and importing packages
  2. Index Sets
  3. Parameters
  4. Decision Variables
  5. Objective function
  6. Constraints
  7. Populate with data
  8. Create specific model instance
  9. Solve model instance
  10. Write solution output

Let’s refer to the above as our ten model components.

If the above model entities and steps are divided into files, we can stitch those components together to obtain one complete model run. So why not apply the same concept to manage multiple models?

The overarching philosophy is to segment our optimization problems into atomic components that can be stitched together at runtime, empowering us to create any specific model in our universe. In our hiking route example, our universe consists of the shortest path model and the minimum elevation change model.

As we stitch our ten components together, let’s think about one component in particular as an example: our decision variables. There is a decision variable in the minimum elevation change model that does not exist in the shortest path model (i.e., the maximum elevation change per mile). Consequently, we need to segment our variables by model type to ensure the model we run has the correct decision variables. To segment the variables, we can have a script declaring the variables that exist in each model:

  • Shortest Path Model Variables
  • Minimum Elevation Change Model Variables

To obtain the decision variables for a particular model, we would run exactly one of the two scripts above.
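As a rough sketch of what this segmentation looks like in pyomo (the model object, set names, and variable names here are illustrative assumptions, not necessarily the repo’s exact code):

```python
from pyomo.environ import AbstractModel, Set, Var, Binary, NonNegativeReals

model = AbstractModel()
model.nodes = Set()
model.arcs = Set(within=model.nodes * model.nodes)

# Both variable scripts would declare the binary arc-selection variables
# x[i, j], indicating whether arc (i, j) is included in our path...
model.x = Var(model.arcs, domain=Binary)

# ...but only the minimum elevation change script would also declare the
# maximum elevation change per mile variable.
model.max_elevation_change_per_mile = Var(domain=NonNegativeReals)
```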

But what if our universe consisted of 20 models, with objectives such as scenic score or probability of bear encounter? To determine which of 20 scripts to run, for each model component, we might need cumbersome conditional logic.

Can we abstract our way out of this? Yes we can!

Separation of Concerns and the Core Model!

If we have one script for each component and model, it is possible some code is duplicated across models. In the hiking routing problem this is certainly the case. For example, binary variables xᵢⱼ, indicating whether an arc (i,j) is included in our path, occur in both hiking models.

We can instead organize our models into core and specific models, for each component:

  • Core Model: Common code across ALL models we could run. In our hiking example, variables xᵢⱼ would be part of the core model.
  • Specific Model: Code that is unique to a specific model ONLY. Let’s refer to each model in our universe as a “specific model.”

This is a powerful way to organize our models for two reasons:

  1. Simplicity: As far as a model runner script is concerned, there are only two models in the world: core and specific!
  2. Separation of concerns design pattern: Any specific model we could possibly build needs zero knowledge of the core model.
    — The specific model can simply rely on the existing functionality of the core model.
    — For example, any hiking routing model requires constraints to ensure that only one route is selected. The core hiking model has these constraints, meaning specific hiking models do not (and should not) have them.
    — Specific models can (and should) focus on what is unique about them, such as specific constraints for the use-case in question.

As a bonus, the core and specific model organization means there is zero duplicated code.

Now, for each model component, we can deterministically run two commands:

  1. Execute the core model script
  2. Execute the specific model script

Quick Implementation Note

It is not guaranteed that there exists a script for both the core and specific models for every single component.

For example, the core hiking model has no objective function because there is definitionally no common objective among models. Similarly, the shortest path model has no additional decision variables beyond those defined in the core model.

To address this, we can write a simple function to account for the absence of a script:
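Here is a minimal sketch of such a function (the name, signature, and exec-based approach are my illustration; the actual implementation in the repo may differ):

```python
def run_script_if_exists(script_path, namespace):
    """Execute a component script if a path was provided; otherwise do nothing.

    namespace is the shared dict of globals (e.g., holding the pyomo model
    object) that each component script reads from and adds to.
    """
    # Empty lookup-table cells may arrive as None or NaN, so only accept
    # non-empty strings as real script paths.
    if not isinstance(script_path, str) or script_path.strip() == "":
        return
    with open(script_path) as f:
        exec(f.read(), namespace)
```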

This function executes a script if a file path is passed, and does nothing if not. Now we can safely run core and specific model scripts for every single component.

Running the Specific Scripts for the Model We Want

Now we can write a model runner script, but how will that script know which specific model to solve? So far, the script just knows to run a “specific model script” for each component.

Let’s address this in two steps:

  1. Tell the model runner script which model we would like to solve
  2. Determine the scripts that correspond to the model indicated in (1)

Tell the Model Runner Script our Model of Choice

The script model_runner.py stitches various component scripts together, ultimately running one of our hiking routing models.

To tell this script the model we would like to generate and solve, we can use a command line argument. I’m going to call mine --model, which will be followed by a string that indicates the model name.

Let’s make the model_runner.py script aware of the argument --model using the argparse package. Specifically, the following code snippet allows the script to parse --model into a variable:
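Here is a minimal sketch of that parsing (variable and argument names follow the convention above; the linked script has the authoritative version):

```python
import argparse

# Parse the --model command line argument into a string variable
parser = argparse.ArgumentParser(description="Run one of the hiking routing models")
parser.add_argument("--model", type=str, required=True,
                    help="Name of the model to build and solve, e.g. "
                         "shortest_path or minimum_elevation_change")
args = parser.parse_args()
model_name = args.model
```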

We can see this code at the top of model_runner.py here.

Determine the Scripts to Run

Now let’s determine the scripts to run to ensure we have the model components that correspond to --model.

One approach is to use if-statements for each component script based on the model name. However, this would become unreadable and unwieldy rather quickly (even with 5 models). We can do better!

The grand plan:

  • Use a lookup table
  • Read this table into a pandas dataframe
  • Convert this dataframe into dictionaries, one for each component
  • Leverage these dictionaries throughout model_runner.py

Let’s use a lookup table that details one component script name for each model name:

Lookup table of script names by model name and component. Not all components are included in this image for space reasons, meaning the true lookup table has more columns.
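To give a flavor of its shape, here is an illustrative slice of such a table (apart from the variables script cited just below, the script names are hypothetical stand-ins, not the repo’s actual file names):

```
model_name                index_sets                 variables                                      objective
core                      hiking_core_index_sets.py  hiking_core_variables.py                       NULL
shortest_path             NULL                       NULL                                           hiking_shortest_path_objective.py
minimum_elevation_change  NULL                       hiking_min_max_elevation_change_variables.py   hiking_min_max_elevation_change_objective.py
```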

For example, the script of variables for the minimum elevation change model is hiking_min_max_elevation_change_variables.py.

Notice we have three rows, because we have two specific models as well as the core model.

Also notice that some cells are empty (NULL). Some models do not have a script for a particular component, and this structure ensures we can create dictionaries that are one-to-one mappings from model to script name, for each component. Our executor function, described above, handles these cases.

Creating Dictionaries

Now let’s read in our lookup table and create one dictionary for each model component.

Here is an example for the index sets component:
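As a sketch (the CSV file name and column names are assumptions that match the illustrative table above):

```python
import pandas as pd

# Read the lookup table of component script names, one row per model
lookup = pd.read_csv("hiking_model_lookup.csv")

# One dictionary per component, mapping model name -> script name
# (missing scripts show up as NaN, which our executor function skips)
index_sets_scripts = lookup.set_index("model_name")["index_sets"].to_dict()
```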

The same approach is used for each component.

Quick Implementation Note

Why do I create dictionaries component by component, rather than looping through? I find named dictionaries to be more readable code.

Additionally, adding or removing components would be extremely rare (e.g., every model must have index sets). Consequently, dynamic numbers and names of dictionaries would introduce unnecessary complexity.

The implementation here means we can explicitly see and understand the dictionaries, and therefore model components, we use.

Putting it Together

The hardest part of our model runner script is done! For every dictionary we created we will pull the appropriate value using the model name as our key. As a quick reminder, our model name came from a command line argument.

The rest of model_runner.py is simply two commands for each model component, one for the core model and one for the specific model! For example, here is the code for the index set component:
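Building on the hypothetical helper and dictionaries sketched above (the repo’s exact code may differ), those two commands could look like this:

```python
# Shared namespace that accumulates the pyomo model as component scripts run
model_namespace = {}

# 1. Execute the core model script for this component (if one exists)
run_script_if_exists(index_sets_scripts.get("core"), model_namespace)

# 2. Execute the specific model script for the chosen model (if one exists)
run_script_if_exists(index_sets_scripts.get(model_name), model_namespace)
```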

The complete model_runner.py can be found here.

Run each Model, Locally

Now we have a model runner script. We can run each model by changing the value we pass to --model. Open a terminal and navigate to the directory containing the model files. To run each of the two models, type the following:

python hiking_model_runner.py --model shortest_path
python hiking_model_runner.py --model minimum_elevation_change

(Note that for Windows users the command starts with “py” rather than “python.”)

You can run these commands using my input data.

Let’s Make it an API

The API implementation is very similar to that of the previous blog post, with a few exceptions. I’ll call out these exceptions here. For additional details on APIs, implementation, and deployment please consult the previous blog post.

Our Hiking Path Optimization API running locally

Running Python from Plumber

Our hiking models require a command line argument, whereas our puppy petting optimization problem from last time did not. Therefore, our API must take a query parameter. Let’s call this parameter “model_type.”

Our R plumber API will run hiking_model_runner.py, using the query parameter. Specifically, let’s concatenate the parameter with the python script, like so:
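A rough sketch of that endpoint (parameter handling and paths are illustrative; the linked plumber.R has the real code):

```r
#* @param model_type Which hiking model to solve, e.g. shortest_path
#* @get /mathematical_hiking
function(model_type) {
  # Concatenate the query parameter onto the command that runs the model
  command <- paste("python hiking_model_runner.py --model", model_type)
  system(command)
  # ...then read and return the solution output, as in the previous post
}
```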

This code is part of plumber.R, in the mathematical_hiking API endpoint

A More Generic Dockerfile

In my previous post, my Dockerfile copied each file explicitly. I have since realized I can improve my Dockerfile by copying all files with a specific file extension. For example:
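For instance, something along these lines (a sketch; the actual Dockerfile is linked below):

```dockerfile
# Copy every python script (and every R script) in one shot, rather than
# listing each file explicitly
COPY *.py ./
COPY *.R ./
```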

The Dockerfile can be found here.

Dynamic Threads for Solving an Optimization Model

In my previous post I specified 2 threads in pyomo so the cbc solver runs on 2 threads. If we decide to deploy on a service with a different number of threads per instance, then we would have to change that 1 line of code, rebuild the Docker image, and redeploy.

This is not ideal. I realized that we can improve upon this.

We can detect the number of threads on the server where our API resides and pass that number of threads to cbc:
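Here is a sketch of that logic (assuming the cbc solver from the previous post; the linked script has the authoritative version):

```python
import multiprocessing

from pyomo.environ import SolverFactory

# Detect how many threads the host machine provides, rather than hard-coding
num_threads = multiprocessing.cpu_count()

# Pass the detected thread count through to the cbc solver
solver = SolverFactory("cbc")
solver.options["threads"] = num_threads
# results = solver.solve(model_instance)  # instance built by earlier components
```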

This is hiking_core_solve_model.py.

Now changing the specs of where we deploy will not require code changes!

Last Thoughts

We took our optimization APIs to the next level. Now we can create and deploy an API that accommodates an evolving set of objectives. Extending this API to a new objective is simple, requiring minimal new code and an update to the lookup table.

On top of that, this all happens in one maintainable codebase.

I crafted this approach when thinking about how to make my models more timeless. I would greatly appreciate feedback on how this approach could be improved or extended, as well as on how it has benefited readers. If you’ve built a nifty optimization API using this article, I want to hear about it!
