Create and Deploy your own Optimization API

(with Plumber, Pyomo, Docker, and Google Cloud Run)

Louis Boguchwal
42 min read · Mar 9, 2021

You’ve built a mathematical optimization model; now what? What good is it if others can’t use it? How would you make it accessible to users, regardless of their technical acumen? How would you incorporate it into various web and mobile applications across different devices and operating systems?

You’ve got some optimization models under your belt, but you’re new to the world of deployment. You can code the model, but don’t know how to get that model into a website or application.

Not long ago this was me. My online research presented me with only two options, each with limitations:

  1. Commercial: A commercial optimization solver with a cloud offering (e.g., Gurobi)
  2. Academic: NEOS Server, a high performance computing server that supports multiple modeling languages and solvers, hosted by the University of Wisconsin Madison

The commercial option is quite costly (thousands of dollars).

NEOS is for academic use and industry experimentation. By submitting a model you give up your intellectual property to that model. Additionally, commercial use of the service is prohibited because of the license terms of the commercial solvers NEOS offers.

Over the 2020 holidays I crafted a third option that combines the best of both worlds: create and deploy YOUR OWN optimization API!

This DIY, open-source approach is free until you have thousands of users consuming your model, which is a good problem to have!

This article will show you how to build an optimization API from scratch using the following technologies:

  1. Pyomo: A Python package for mathematical optimization, built by Sandia National Labs
  2. Plumber: An R package for creating APIs out of R functions, built by Barret Schloerke
  3. Docker: A way to deploy code so things work agnostic of operating system
  4. Google Cloud Run: A fully managed autoscaling hosting service

Table of Contents

What’s an API?

What’s Pyomo?

What’s Plumber?

What’s Docker?

What’s Google Cloud Run?

Step 1: Example Optimization Model

Step 2: Create a Plumber API

Step 3: Test the API Locally

Step 4: Dockerize the API

Step 5: Deploy to Google Cloud Run

Step 6: Invoke Your API

Last Thoughts

Before we get going, let’s make sure we’re on the same page. What are these technologies, and how do they help us make our optimization dream a reality?

And yes, you read that right: R AND Python. We’re going to use each language where it’s strongest.

What’s an API?

Our ultimate goal is to productize an optimization model so others can use it. Ideally, the consumers of our model shouldn’t have to know anything about optimization, nor the programming languages we used. To them, we’ve created a mysterious, magical box over the web that just works.

This is where APIs come in. API stands for Application Programming Interface. Put another way, this is how programs and applications talk to each other over the Internet. Quite simply, you can leverage one piece of software from another by calling a function that’s provided online. For example, let’s say we wanted to know the population of a US state according to the last US Census. We can imagine a function census_state_population() provided by the US Census Bureau that takes a 2-letter state abbreviation as an argument. Note the following:

  1. We don’t know what language this function was written in, and we don’t have to!
  2. This function doesn’t know what language we’re coding in, and it doesn’t have to!

In other words, APIs are “language agnostic in both directions.” If we’d like to obtain the population of New York state for some analysis coded in any language, all we have to do is invoke the function and pass NY. I’ll go through the specifics of how to invoke APIs in practice later in this post.

Beyond widespread accessibility, APIs offer data scientists, data engineers, and software engineers alike the opportunity to focus on what they do best. Instead of creating massive, monolithic services that do everything, we can break up components into microservices that focus on very specific tasks. That way one service can be used all over the place, even in ways that its creators never imagined!

For example, if you’ve ever used an app with a map (e.g., a ridesharing app) chances are that app uses a map API. What’s cool about this is:

  1. The ridesharing app doesn’t have to reinvent the mapping wheel
  2. The mapping component can be leveraged when its development team never thought of ridesharing as a use-case for maps

To put this example in perspective: Google Maps launched in 2005, whereas Uber launched four years later, in 2009.

All this is to make the case for APIs as a mechanism to deploy our optimization model to others.

What’s Pyomo?

Pyomo is a Python package for mathematical optimization, including mixed integer programming. Specifically, Pyomo provides a high-level modeling language with which to express optimization models. Additionally, Pyomo allows the modeler to replace the optimization solver in 1 line of code, thereby reducing the complexity of testing, benchmarking, and model maintenance.

In my personal experience, the Pyomo language is similar to AMPL.

What’s Plumber?

Plumber is an R package that allows you to turn your R functions into web APIs with some simple code decoration. The upshot is that any R function you come up with can become an API!

What’s Docker?

I don’t know about you, but Docker is one of those things I’ve heard so much about, but until recently knew absolutely nothing about. Let’s start with the idea of a container, and take a look at Docker’s definition from its own website:

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.

I have no idea what that means… So let’s turn to cake metaphors, which make most things abundantly clear, relatable, and delicious.

Let’s say there’s this incredible cake we’d like to taste EXACTLY the same anytime we bake it, anywhere in the world. If we have a friend who would like to make this cake at home we must equip them with the following:

  • Ingredients: The exact list of ingredients, down to the brand and where to shop
  • Equipment: The specific oven and utensils needed
  • Recipe: Precisely what to do with the ingredients, and in what order. Are we to stir? Mix? What oven settings should we use?

In our case, we want to make a software cake. The ingredients are applications and constituent packages (e.g., R, Python, Pyomo, etc.). Beyond that, we need to specify versions. As we all know, Python 3.8 might taste a bit different from Python 3.7, and we can't have that.

Our equipment is our operating system. After all, a Debian vs. Ubuntu oven heats to different temperatures using different heating methods (standard vs. convection). For the same cake we’ll need to tell our friend which oven to use.

The recipe for our cake will be found in the Dockerfile.

If you’re asking how many people that cake can feed that’s a good question. It depends on how hungry our users are, or in other words how many slices (i.e., computational resources) they need. Optimization is CPU-intensive, so in our case each user will likely need a whole cake to themselves.

In a nutshell, Docker is really a way to make that identical software cake anytime, anywhere — guaranteed.

What’s Google Cloud Run?

Ok remember that software cake? What if we could make hundreds, or even thousands, of them on demand? Even better, what if those cakes could bake themselves… in seconds?!

This comes in handy when we’ve got a popular cake and everyone wants a piece. Unlike Marie-Antoinette we mean it when we say “let them eat cake.” If someone shows up they’re getting cake. For software if someone wants to invoke our optimization API we’re ready to optimize for them immediately.

For our purposes, this futuristic self-baking cake technology is Google Cloud Run. Rather than making our users buy their own self-baking machine (or in other words, updating their software and configuring their machine to our liking), we can use Google Cloud Run to fire up a virtual computer, hosted on a remote server, that has everything needed to prepare our cakes. As demand for our service increases we’ll need more cakes ready to go (i.e., spin up more virtual computers with our software dependencies and configurations to manage the increased demand/complexity of the problem). Cloud Run orchestrates all of this for us, as it is “fully managed.”

Just as we can turn on another computer as demand increases, we can also turn off a computer as demand decreases. We can even go to the extreme; Cloud Run autoscales to zero. If no one wants cake, no cake will be served. That means we pay nothing if there’s no demand for our software cake. Though truth be told, the free allotment from Google is generous.

Now that we’re on the same page about the technology let’s get going! Let’s build a free, scalable, open-source optimization API. All code and files shown throughout this article are available on Github.

Step 1: Example Optimization Model

Let’s begin with an optimization model we’d like to turn into an API. Note that this article is NOT intended to serve as a tutorial in mathematical optimization nor pyomo, but the model presented will be explained.

I don’t know about you, but when the pandemic is over I’m looking forward to petting puppies again. When it’s safe to do so, I need to make up for lost time. It’s been far too long since I’ve had the chance to pet a dog, so when the time comes I intend to maximize the puppiness. A good way to pup it up is on a walk. So here’s the question:

“What route allows me to pet the most dogs along the way?”

Let’s call this the “Puppy Petting Optimization Problem” (the PPOP), which frankly deserves more attention in the combinatorial optimization literature.

The PPOP is a great candidate for an optimization API because of the following:

  1. Frequency: People walk frequently, with different origins and destinations. Therefore, finding a path once isn’t helpful because the inputs are frequently changing. We’ll want to solve this problem numerous times.
  2. Dynamic Inputs: When visiting a different place, we still might want to pet as many dogs as possible when walking. Again, the inputs have changed.
  3. Small to Moderate Scale: A single instance of this problem isn’t massive scale. We’re not thinking about paths spanning thousands of miles, but a small area. Consequently, commercial solvers are likely unnecessary.

The Puppy Petting Optimization Problem: in Words

Let’s represent our potential path options as a directed network with nodes (locations) and arcs (paths from one location to another). We start our jaunt from the source node s, and finish at the sink node t. We’ll refer to all nodes in between as intermediaries; we’re just passing through them.

Let’s assume we know how many dogs there are to pet at each location (e.g., a park), and that we have limited time. For example, we have 20 minutes to go from the apartment to the grocery store. Additionally, let’s assume that the time to pet each dog is negligible. Therefore, the time to pet does not contribute to the time spent on our jaunt. I’ll leave this extension as an exercise for the reader :)

Our goal is to find a single path from s to t that maximizes the total number of dogs we meet!

The Puppy Petting Optimization Problem: in Math

Our decision variables xᵢⱼ indicate whether to include an arc (i,j) in our path

Objective Function: Maximize the dogs we encounter throughout our path
Select a path: Constraints to ensure 1 contiguous path is chosen
Time Limit: The total time we have for our jaunt
Variables are binary
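
Putting those pieces together, one standard way to write the model down in math is the following (A denotes the set of arcs; the article's exact formulation may differ slightly, for example in how repeat visits are ruled out):

\begin{aligned}
\max \quad & \sum_{(i,j) \in A} d_j \, x_{ij} \\
\text{s.t.} \quad & \sum_{j:(s,j) \in A} x_{sj} = 1, \qquad \sum_{i:(i,t) \in A} x_{it} = 1 \\
& \sum_{i:(i,k) \in A} x_{ik} = \sum_{j:(k,j) \in A} x_{kj} \qquad \forall\, k \notin \{s,t\} \\
& \sum_{(i,j) \in A} t_{ij} \, x_{ij} \le T \\
& x_{ij} \in \{0,1\} \qquad \forall\, (i,j) \in A
\end{aligned}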

where:

  • xᵢⱼ = Whether to include arc (i,j) in our path, our decision-variables
  • tᵢⱼ = The time to traverse arc (i,j)
  • dⱼ = the number of dogs to pet at node j
  • T = the total time we have to complete our jaunt, our time limit

The Puppy Petting Optimization Problem: in Code

Here’s the model coded in Pyomo. For reference, this code is ultimately what you’ll see in the complete python script dog_max_path.py on Github.

First let’s import packages we’ll need:

Now let’s declare a model object and create our index sets:

Now our parameters:
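
Continuing the sketch, the parameters mirror the symbols defined above:

# d_j: dogs available to pet at each node
model.dogs = Param(model.nodes, default=0)

# t_ij: time to traverse each arc, and T: the total time budget for the jaunt
model.travel_time = Param(model.arcs)
model.time_limit = Param()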

Now our decision variables:
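
One line does it:

# x_ij = 1 if arc (i, j) is part of our walk, 0 otherwise
model.x = Var(model.arcs, domain=Binary)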

Now our objective function and constraints:
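
Here's a basic version of the objective and the three constraint groups described in the math section (the repository model may add refinements, such as forbidding repeat visits to a node):

# Objective: maximize the dogs met at the nodes our chosen arcs lead to
def max_dogs_rule(m):
    return sum(m.dogs[j] * m.x[i, j] for (i, j) in m.arcs)
model.max_dogs = Objective(rule=max_dogs_rule, sense=maximize)

# Leave the source exactly once
def leave_source_rule(m):
    return sum(m.x[i, j] for (i, j) in m.arcs if i in m.source_nodes) == 1
model.leave_source = Constraint(rule=leave_source_rule)

# Arrive at the sink exactly once
def reach_sink_rule(m):
    return sum(m.x[i, j] for (i, j) in m.arcs if j in m.sink_nodes) == 1
model.reach_sink = Constraint(rule=reach_sink_rule)

# Flow balance at intermediaries: if we walk into a node, we walk out of it
def flow_balance_rule(m, k):
    inbound = sum(m.x[i, j] for (i, j) in m.arcs if j == k)
    outbound = sum(m.x[i, j] for (i, j) in m.arcs if i == k)
    return inbound == outbound
model.flow_balance = Constraint(model.intermediate_nodes, rule=flow_balance_rule)

# Time limit: the total traversal time of the chosen arcs cannot exceed T
def time_limit_rule(m):
    return sum(m.travel_time[i, j] * m.x[i, j] for (i, j) in m.arcs) <= m.time_limit
model.within_time = Constraint(rule=time_limit_rule)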

Now let’s populate with data and create a model instance:

Now let’s solve the problem. With Pyomo we can easily change the solver, but I’ve chosen CBC because it’s fast and supports multithreaded solves, while also being open source. Multithreading means CBC can solve sub-problems in parallel on each thread, greatly improving performance. For this model let’s use 2 threads.

Additionally, let’s set a timeout of 60 seconds to ensure our API returns a solution even if it isn’t optimal. This means our user gets a solution, and also prevents us from burning through Google Cloud Run’s free allotment.

Note: While you could swap out the solver, to follow along with this article it’s important to use CBC. The API we’re building is expecting CBC as a dependency (ingredient in our optimization software cake).
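
The solve step then looks roughly like this ('threads' and 'seconds' are CBC option names):

solver = SolverFactory('cbc')
solver.options['threads'] = 2    # parallel sub-problem solves
solver.options['seconds'] = 60   # time limit, so the API always returns something
results = solver.solve(instance, tee=True)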

Finally, let’s write out the solution:

Putting it All Together

We can test our optimization model locally to ensure it runs as expected.

Do we HAVE to test? No. Is it a good idea? Yes.

If you’d like to test locally you’ll need to ensure your local machine has multiple software dependencies, outlined below. However, if you’re not familiar with installing packages on your local machine, or are running into issues doing so with any dependencies mentioned in this article, don’t worry. We can have Docker install these automatically later on! That’s the beauty of Docker.

If you don’t feel a need to test then you can skip the rest of this section and move onto the next section.

If you’d like to test locally then ensure your local machine has the following:

  1. Python 3
  2. Python packages you need: pyomo and pandas (os and pathlib are part of the Python standard library)
  3. The CBC solver. A compiled binary can be downloaded from here. If running the Pyomo model fails because the CBC executable cannot be located, set the executable argument in the SolverFactory() function to the path where the cbc binary resides:
    solver = SolverFactory('cbc', executable = 'C:\\Program Files\\cbc\\bin\\cbc.exe')
    Note: Use double backslashes (\\) in the path on Windows and forward slashes (/) on Linux and Mac.
  4. CSV files the model is expecting, which are available on Github. Make sure the CSV files are in the same directory as dog_max_path.py, the optimization model.

Now test:

  1. Open the terminal (cmd on Windows) and navigate to the directory containing the optimization model and CSV input files, using the cd command
  2. From the terminal run py dog_max_path.py if using Windows, python dog_max_path.py if using Mac or Linux
  3. Investigate terminal output. You should see the following: The model generates, CBC initializes, CBC finds a solution
  4. Confirm the solution CSV file pet_lots_of_dogs.csv was generated. The file should be written to the same directory as the dog_max_path.py python script. Inspect it to confirm arcs were selected (i.e., the selected_in_path column = 1) that form a contiguous path from origin to destination.

Step 2: Create a Plumber API

For this section, please make sure that you have the following installed on your machine:

  • R
  • RStudio, an IDE for R
  • R packages you need: plumber, purrr, dplyr, zip

Note: If you come across the following error when installing plumber, you may need to install a system package called libsodium:

ERROR: dependency ‘sodium’ is not available for package ‘plumber’

At this point we have a working optimization model. We can run it locally, but that’s it.

Now let’s turn this model into an API using the R plumber package. Beyond running the model, our API will do some post-processing of the solution as well.

Here’s the grand plan:

  1. Post-Processor: Write an R script to handle post-processing of the optimization solution. This makes the solution clearer for the consumer of our API.
  2. Create the API itself: Write a plumber.R file that converts our optimization model and post-processor into an API.
  3. Finishing Touches: Write an api.R file that tells the server what to do with your API.
  4. Test Locally: Test our API locally using RStudio to confirm it works.

Post-Processing Script

Before making our API, let’s get all its individual components working first. We just have one left.

Currently, our optimization model outputs a CSV file containing the solution. This file does not detail a path, nor its traversal time. Instead, the file merely details individual arcs and whether they’re included in our path or not.

This format isn’t abundantly helpful to the consumer of our API. It would be better if they received the literal path itself and the total time it will take to traverse. The post-processing script dog_path_analysis.R serves this purpose. The script’s inputs and outputs are as follows:

Inputs:

  • Node Information (location_info.csv), an input to the optimization model
    → Name of the location (park, garden, hot dog stand, etc.)
    → Node Type: Is the location an origin, destination, or somewhere in between (i.e., intermediary)?
    → The number of dogs we expect to meet and pet at that location
  • Arc Information (route_info.csv), an input to the optimization model
    → The arc: origin and destination locations (e.g., park to garden)
    → Travel Time: How long it takes to travel along the arc, in whatever temporal units you choose, provided you're consistent throughout
  • The solution file output by the optimization model (pet_lots_of_dogs.csv)

Outputs:

  • The recommended path, displayed as a 1 column dataframe in a CSV file. Nodes are ordered top to bottom. For example, the first non-header row details the source node, the second non-header row details the second node in the path, etc.
  • The total traversal time of the recommended path, in a CSV file. This is a 1 column dataframe with 1 row.

Let’s build our post-processor, starting with reading in the inputs described above. For reference, this code is ultimately what you’ll see in the complete R script dog_path_analysis.R on Github.

First, set your working directory for R to the directory where the CSV files are located using setwd(). Now let's get going with the code below.
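
Reading in the three CSVs might look like this (note that dplyr is deliberately not loaded here; as explained at the end of this section, it's loaded by plumber.R, which sources this script):

# Inputs to the optimization model, plus its solution file
location_info <- read.csv("location_info.csv", stringsAsFactors = FALSE)
route_info    <- read.csv("route_info.csv", stringsAsFactors = FALSE)
solution      <- read.csv("pet_lots_of_dogs.csv", stringsAsFactors = FALSE)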

Now we construct the path using the arcs selected by the optimization model, where the arc decision-variable == 1. Specifically, we're interested in rows where the column selected_in_path == 1.

We also know that by definition, our path must start at the origin (aka the source node). Consequently, the first arc in our path must start with the source node. Let’s filter to that arc.

Next, let’s put the remaining arcs in another dataframe for safekeeping.

Now we’re prepared to derive our path. The strategy is to traverse selected arcs in order, starting with the first arc. As we encounter a node, we’ll look for the arc that starts there.

At this point we have a list of nodes in order that defines our dog maximizing path. Let’s finish the script with the following steps:

  1. Map the number of dogs we expect to see at each node in our path, using the location input.
  2. Map the traversal time for each arc in our path, using the route input.
  3. Output 2 CSVs:
    → The path, with the expected dogs we’ll pet at each location
    → The total travel time associated with our path
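
In code, those finishing steps could look like this (the output file names are my own placeholders):

# Attach the expected dog count at each node on the path
path_with_dogs <- data.frame(location = path_nodes) %>%
  left_join(location_info %>% select(location, dogs), by = "location")

# Total traversal time: sum travel times over the arcs we actually walk
path_arcs <- data.frame(origin = head(path_nodes, -1),
                        destination = tail(path_nodes, -1)) %>%
  left_join(route_info, by = c("origin", "destination"))
total_time <- data.frame(total_travel_time = sum(path_arcs$travel_time))

# Write the two outputs our API will return
write.csv(path_with_dogs, "dog_maximizing_path.csv", row.names = FALSE)
write.csv(total_time, "path_travel_time.csv", row.names = FALSE)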

Finally, we wrap all the code in dog_path_analysis.R in a function that takes no arguments: dog_maximizer_post_processor(). By doing so we can manage this code in a separate R script from the API itself (next step). This makes code maintenance easier. Imagine if we had numerous functions. It would be complicated to have them all in one file!

Putting it all together, here's the post-processing script dog_path_analysis.R. Note that library(dplyr) does not appear here because it will be run in a different file, shown in the next section.

In the next step we’ll simply run this script using the R source() function so our API has access to it.

Create the API Itself

At this point we have our constituent components, but they do not yet come together to form an API. This is where plumber comes in. Let's create a function that does the following in an R script plumber.R:

  1. Reads input files
  2. Runs the optimization model
  3. Runs the post-processing steps
  4. Returns files that express the dog-maximizing path and its total traversal time

First, we’ll write an ordinary R function, nothing fancy.

Here. We. Go.

Setup

For reference, this code is ultimately what you’ll see in the complete R script plumber.R on Github. Let’s load the R packages we’ll need, and run our post-processor function so we can use it in our API:
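
That setup is just a few lines:

library(plumber)
library(purrr)
library(dplyr)
library(zip)

# Make the post-processing function available to the API
source("./dog_path_analysis.R")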

Start with an Ordinary Function

To create a plumber API we need to write a function. But unlike other functions that are named upon declaration (e.g., f <- function() {}), here we'll just specify the function without a name, to start: function() {}

Handling File Inputs

You may have noticed that our optimization model has more than one parameter indexed on more than one index set, and therefore requires more than one input CSV file to create a model instance. Rather than worry about sending multiple files to the server, let’s simplify things by zipping the CSVs up into one zip file, dog_path_inputs.zip.

By the time we’ve invoked our API we can assume that the input files have already been compressed into a zip file. Consequently, our function is expecting a zip file z . But our optimization model is expecting CSVs, so our API function must first unzip, using the zip R package. Then we’ll delete the zip file as it’s no longer needed.

Run the Model and Return a Solution

Now let’s run the optimization model, create a directory woof_output for the solution files, and run the post-processor we defined earlier.

This is where the R and Python worlds collide! To leverage our Pyomo model we need to call Python from R. We can do this with a 1-liner; use the system() R function to run a shell command. In our case, let's start Python and run our Pyomo script dog_max_path.py.
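
Sketched out, this part of the function body is:

# Run the Pyomo model, make a home for the solution files, then post-process
system(command = "python3 dog_max_path.py", wait = TRUE)
dir.create("woof_output")
dog_maximizer_post_processor()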

Note that R and Python are not sharing objects. Instead, the CSV output from one process is the CSV input for another, in sequence:

raw CSV inputs → model (Python) → model CSV outputs → post-processor (R) → ultimate CSV outputs

Finally, let’s copy the solution files to the woof_output directory, zip the files up, and return the resulting zip file.

Converting your Model into an API

Let’s turn this ordinary R function into an API. This takes just a few modifications and API concepts:

  1. Slight modifications to the function itself
  2. Decorators: Special code comment symbols
  3. HTTP Requests: How the function is called over the Internet
  4. API Endpoints

Adding API Arguments to our Function

Currently, our function takes 1 argument (a zip file z). Since our API is a function invoked over the Internet, the zip file must be uploaded. To accommodate this, let's replace z with the plumber convention req. The argument req stands for request, the jargon for making a function call to an API over the Internet. We'll go into greater detail surrounding requests shortly.

On upload, the zip file must be parsed. Then, and only then, can the file be decompressed using zip::unzip() as before. The following code accomplishes this:
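
Here is one way to do it, as a sketch only; the exact structure returned by the parser depends on your mime and plumber versions, so inspect it with str() if the field names below don't match:

parts <- mime::parse_multipart(req)   # parse the multipart upload carried by req
zip_path <- parts[[1]]$datapath       # assumed field name for the temp file path
zip::unzip(zip_path)
file.remove(zip_path)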

Before moving on, let’s add one more argument to our API function: res. This argument is not directly used in the function itself, but in the decorators that turn our function into an API described in the next section. Now we should have a function of two arguments: function(req, res).

Decorators: Special Code Comments to Configure the API

Regular functions can be invoked in a regular program, but not over the Internet. Decorators are special comments that allow functions to be invoked as an API. We’ll use decorators (#*) to configure our API functions.

Let’s start by giving our API an overall title and description, to appear in automatically generated docs. Put these lines of code after the call to source("./dog_path_analysis.R").

Now let’s add three decorators immediately preceding our function definition:

  1. Tell a consumer of our API what this specific API function does, in a free-form text description
  2. Equip this API function to return files
  3. Define the type of API function and where it is located on a website. For example, www.website.com/where

Let’s begin by giving a brief description of what our API does. This free form text description will show up in the API docs we’ll create later on.

#* Tell about what the API does. Solves the optimization problem

The following one line of code equips our API to return files of arbitrary type (CSV, zip, etc.) using what's called a serializer. For certain use cases there are specific serializers, but for this API let's go general.

#* @serializer contentType list(type="application/octet-stream")

The final decorator is pretty action packed, so it gets its own section.

How Will Users Interact with Your API?

Right now our API has the functionality we need, but our users won’t know where to find it. Specifically, our API needs its own address on the Internet (aka a url). From that address, our users can invoke our functions. You may have noticed that we didn’t give our function a name. We wrote function() rather than woof <- function() . This final decorator, defining what’s called an API endpoint, is where we assign a name to our function.

In our case our API has only one function. But what if it had another function that returned a random dog breed, for example? Our user would have no way to differentiate between the two. This is where API endpoints come into play.

If our API’s url is https://myfirstdogapi.com then we would could specify each function using / : https://myfirstdogapi.com/function1 and https://myfirstdogapi.com/function2. Note that function1 and /function2 are API endpoints.

Let’s call our PPOP solving function mathadelic_woof . That is to say, our decorator comment will use the name mathadelic_woof and this function’s web address will be: https://something/mathadelic_woof, where https://something is where our API is hosted.

With this in mind, let’s jot down:

#* /mathadelic_woof

This decorator needs just one more thing! Our API is a software cake that comes in different flavors called http methods. These methods determine how users interact with the API function. Let’s talk about two specific methods:

  • GET: Use this method when requesting data from the API endpoint. For example, invoking a function that returns a value or a file.
  • POST: Use this method when sending data to an API endpoint. For example, uploading a file when invoking a function.

If an API endpoint both sends and returns data, go with a POST request.

Our API indeed requires the user to send data and also returns data, so we’ll configure our endpoint mathadelic_woof to be a POST request. Our third decorator is therefore:

#* @post /mathadelic_woof

Taken together, the three decorators immediately preceding our API function are as follows:

#* Tell about what the API does
#* @serializer contentType list(type="application/octet-stream")
#* @post /mathadelic_woof

Our entire plumber.R file is here.

Tell the Server what to do with the API

Our API is defined in plumber.R. Now we need a few more lines of code, in a separate file called api.R, to tell the server what to do with it. Here we'll define a host and port.

The host for our API is the domain name of the url before the / for our endpoints (also called the “base url”), such as https://something. The port is the network port on which the server listens for requests to our API. When making an API request, a user must send it to the same port number we have instructed the server to listen on; otherwise the request won't reach our API.

Finally, api.R will use an argument swagger = TRUE to create a nifty doc (called swagger) that provides instructions to our API. The entirety of the file is below; it’s just a few lines:
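
It looks like this:

library(plumber)

r <- plumb("plumber.R")
r$run(host = "0.0.0.0", port = 8080, swagger = TRUE)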

The plumb() function turns plumber.R into an API, and r$run() sets the attributes described above. Port 8080 is the standard convention, and the default on Google Cloud Run. Host 0.0.0.0 allows our API to adopt whatever host url Google Cloud Run assigns upon deployment, rather than setting a host explicitly.

Step 3: Test the API Locally

Run the API

We have all the files we need to test our API locally, outside of Docker. As before, this step is not strictly necessary, but it’s a good idea. Let’s confirm everything works before adding the complexity of Docker into the mix.

Files: You should have the following files together in one directory. These files comprise our optimization API

  • api.R
  • plumber.R
  • dog_path_analysis.R
  • dog_max_path.py

Software: You should have the following software and packages installed:

  • R
  • Python
  • CBC solver. A compiled binary must reside on your computer. If running the Pyomo model fails because the CBC executable cannot be located, set the executable argument in the SolverFactory() function to the path where the cbc binary resides:
    solver = SolverFactory('cbc', executable = 'C:\\Program Files\\cbc\\bin\\cbc.exe')
    Note: Use double backslashes (\\) in the path on Windows and forward slashes (/) on Linux and Mac.
  • R packages: plumber, purrr, dplyr, zip
  • Python packages: pyomo and pandas (os and pathlib are part of the Python standard library)

OS specific python command: In your plumber.R file, for a Mac or Linux machine the line using the system() command should read:

system(command = "python3 dog_max_path.py", wait = TRUE)

or

system(command = "py dog_max_path.py", wait = TRUE)

on a Windows machine. Note that this operating system dependency disappears when we get to Docker; get excited!

To test this API locally we’re going to both host and invoke it on the same machine. We’ll do each in a separate RStudio window.

First let’s open an RStudio window to host the API. In this window open api.R . Remember to set your working directory to the location where the API files are located using setwd("your/directory/here") .

Select all the code in api.R and run it:

The RStudio console shows our API is running and that the swagger docs have been created:

We also see another window open automatically with our swagger docs, telling us about our API! Let’s take a look.

This document serves as a tutorial to a consumer of our API. These docs tell us the following:

  • A title and description, matching what we typed in earlier
  • The url where the API can be accessed: http://127.0.0.1:8080/
  • Our API endpoint /mathadelic_woof
  • Detailing /mathadelic_woof as a POST API request method

Note that 127.0.0.1 designates that the API is running on localhost, your machine.

More information on swagger docs can be found here. Our API is running and looking good.

Before we test our API, let’s make sure we have an input zip file at the ready as we’re going to upload data (albeit locally). I have called my input zip file dog_path_inputs.zip. This file can be downloaded from Github.

Invoke the API

Now that our API is running and we have the input zip file ready let’s open another RStudio window to invoke our API. Remember to change your working directory to where the input zip file is located using setwd().

We’re going to use the httr R package to test our API, but any programming language that makes API requests would work. With a few lines of code we can invoke our API in an additional RStudio session:

What’s this code snippet doing?

  1. Loads the httr R package, which has functions to invoke API requests
  2. Uses the POST() function from httr, since our API endpoint /mathadelic_woof is of type POST
  3. Provides a url for the API:
    Main url: http://127.0.0.1
    Port Number: 8080
    API endpoint: /mathadelic_woof
  4. Provides a body of the request (i.e., any input parameters). In our case we need to upload a file
  5. Writes the file returned by the API to the local machine’s filesystem, gives that file a name (woof_output.zip), and overwrites a local file in the same directory with the same name if one exists

What happened when we ran the code snippet? Nothing in the RStudio window where we ran the code, but a lot in the RStudio window hosting our API:

There’s even more console output above, but the takeaway is that we see the optimization model ran, and found a solution in 0.02 seconds! We should also see the output file woof_output.zip in the same directory as dog_path_inputs.zip.

Before going on to the next section, ensure the following:

  • The executable argument of SolverFactory() is removed from the optimization model file dog_max_path.py. A hardcoded path to the CBC solver would be problematic and unnecessary when running in Docker.
  • The system() function in plumber.R should read:
    system(command = "python3 dog_max_path.py", wait = TRUE)

These notes are important if you’re running on a Windows machine, as Docker is Linux-based.

Step 4: Dockerize the API

Now that we’ve set up our Plumber API, let’s bake some software cakes using Docker. First, start by installing Docker. You will be prompted to make a Dockerhub account. This is both necessary and free. If you’re using windows, first install the Windows Subsystem for Linux, as Docker is Linux based.

Once you have Docker installed on your machine, every step will be the same regardless of your operating system. That’s the beauty of Docker! In effect, by running Docker we’re all running Linux.

Warning: If you have a firewall enabled, such as the McAfee antivirus firewall, it may prevent Docker commands from working.

With Docker running we can get to our software cake recipe, a Dockerfile. Note that this file MUST be called “Dockerfile,” with no extension. This is where we list our software dependencies and what to do with them. These dependencies include programs (e.g., R, Python, CBC), packages (e.g., Pyomo, Plumber), and files (e.g., our Pyomo model file).

For reference, this code is ultimately what you’ll see in Dockerfile on Github.

Our Dockerfile will start as all Dockerfiles do, with a FROM instruction. The idea behind FROM is that we “stand on the shoulders of giants who came before us.” In a one liner, we can run all the commands from another Dockerfile that’s already hosted on Dockerhub (or other Docker repositories). In our case we’ll start with:

FROM openanalytics/r-base

openanalytics/r-base is called our parent image, because rather than starting from scratch, we can pick up where that image left off. A parent image is our starting point.

It’s as if we typed everything here in-line, installing all dependencies in that Dockerfile. In other words, we can think of openanalytics/r-base as the first layer of our cake. Every cake needs at least one layer, and this one line ensures we have the base layer of our cake ready to pop in the Ubuntu convection oven with R as its main ingredient.

For those curious, the reason I did not use the official plumber Dockerfile is because at the time of this writing that Dockerfile uses Debian, which was harder for me to work with compared to Ubuntu.

FROM here, we’ll now add specific instructions to bake more layers for our cake. In a nutshell, here’s what our Dockerfile recipe will do:

  1. Install software:
    → Python
    → git: We’ll need git to install CBC by cloning a Github repository
    → libssl: Necessary for a plumber API
    → libcurl: Necessary for a plumber API
    → libsodium: Necessary for a plumber API
    → CBC
    Note: R is already installed with the parent image openanalytics/r-base, so there is no need to install it
  2. Install R and Python packages
    Python Packages: pyomo, pandas, pathlib
    R Packages: purrr, dplyr, zip, plumber
  3. Copy files: dog_path_analysis.R, api.R, plumber.R, dog_max_path.py
  4. Run a command to get it all working

Let’s get going on our Dockerfile.

The Dockerfile

This section adds to the Dockerfile we started in the previous section with the FROM instruction. Docker is Linux-based and by default runs as the sudo user, so we needn’t worry about defining users. To install software we need to apt-get things. Let the apt-getting begin:
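
A sketch of the installation layer (package names are the usual Ubuntu ones; the repository Dockerfile may differ slightly):

RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    git \
    libssl-dev \
    libcurl4-openssl-dev \
    libsodium-dev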

The Docker instruction RUN says "run these Linux commands." apt-get update and apt-get install get us the latest and greatest versions of what we're installing, along with the latest and greatest dependencies, unless otherwise stated. The -y is answering "yes" when in a Linux terminal situation we would be prompted to confirm the command.

ADD our list of Python packages (with specified versions) requirements.txt into a folder Docker knows about. We specify versions of Python packages to ensure the versions we’ve tested are the versions Docker will use. For example, we have specified Pyomo==5.7.1 because Pyomo version 5.7.2 might make our cake taste slightly different. Note that Pyomo must be spelled with a capital P.

Our next RUN instruction installs the packages listed in requirements.txt
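
Together, those two steps look roughly like this (the destination folder is an arbitrary choice, and requirements.txt pins versions such as Pyomo==5.7.1 and pandas):

ADD requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt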

Now let’s install and compile the CBC solver. This code is based on this documentation. I looked at various Docker and Linux installation commands and no one code snippet worked for me in full. But by tinkering with various commands as my starting point I came up with the following, which works at the time of this writing:

This code snippet does the following:

  1. Retrieves CBC from a Github repository
  2. Sets the working directory to /var/cbc
  3. Uses an installer provided by COIN-OR (the creators of CBC) called coinbrew to install CBC version 2.10.5
  4. Builds CBC, enabling parallel optimization solves (multiple CPUs) via --enable-cbc-parallel
  5. Sets the environment variables COIN_INSTALL_DIR and PATH. These environment variables are necessary so Pyomo can locate CBC without an executable argument to the SolverFactory() function.

Now let’s copy files from our local machine to Docker:

Let’s install our R packages:

Finally, let’s open up Port 8080 (where we told our plumber API to listen) and tell any cake to come out of the oven running api.R

The ENTRYPOINT instruction tells a machine running our Docker instructions to run a command upon startup. In our case we’d like the machine to run an R script, specifically api.R.

The entire Dockerfile is here.

Building the Docker Image

The Dockerfile is the hard part. Now we can bake some software cakes with a Docker build command.

To run the Docker build command, make sure you have the following files in the same directory, where docker build will be run from your terminal:

  • Dockerfile
  • dog_max_path.py
  • plumber.R
  • api.R
  • dog_path_analysis.R
  • requirements.txt

Let’s open Docker. You’ll know it’s running when you see the whale icon.

Now open powershell if using Windows, and terminal if on Mac or Linux. This is where we’ll run our Docker commands.

Now navigate to the directory where our API files reside using cd.

Now run the following Docker command to build a Docker image, aka a software cake:

docker build -t dog-max-path .

Notice the dot at the end of the command. It’s important!

This command will take a while to run, around 30–45 minutes. Patience is a virtue.

What’s happening is we’re running ALL the commands in our Dockerfile. Note that the -t dog-max-path tags the image (names the cake) so we can use it later. Installing software takes time. We can see we’re off to the races.

Once the command is finished installing all software, you should see a line that says FINISHED.

If anything went wrong during the docker build process, you can run docker rmi <image_name> -f to remove the image with name <image_name> and rerun the build. If all goes well DO NOT run that command, as that would remove the image you just built.

Running the Docker Image

Building the image means we can now run it. In other words, our API can run. We can accomplish this with a 1 liner at the command line:

docker run dog-max-path

Note that this command DOES NOT have a dot at the end. docker run runs our API in Docker locally. We can see it’s running here:

We see that the API is running. If on Mac or Linux, you could navigate to the url provided to see the Swagger docs we saw earlier in RStudio. On a Windows machine it is currently not possible to access something running in Docker locally.

Now that we know our API runs in Docker let’s get it deployed! In this article I deploy to Google Cloud Run, but with Docker you could deploy anywhere (e.g., AWS, Azure, DigitalOcean, etc.) with zero code changes. In other words, if you wanted to deploy your API to something like AWS instead, the steps described in this article would have been identical right up to and including this point. You’re not locked in to anything. This is yet another reason Docker is an attractive deployment strategy.

Without further ado, let’s deploy.

Step 5: Deploy to Google Cloud Run

Set up a Google Cloud Platform Account

To deploy our optimization API to Google Cloud Run we first need a GCP (Google Cloud Platform) account and Google project. To create an account, log into the Gmail account of your choice, then click here.

Next, enable billing. For the small-scale API we’re building you shouldn’t incur any charges, but this is required to use GCP.

Next, create a GCP project. Name it whatever you like, but remember the name (aka project id). I’ll call mine woof-optimization.

Finally, let’s enable the Google Cloud Run API, which is necessary to use Google Cloud Run. An excellent guide is provided here, but I also provide one step-by-step below for completeness. For the linked guide only the Cloud Run API is necessary.

To do this, go to the Google Cloud Console (search “Google Cloud Console” in Google and click the first hit). From here go to APIs and Services → Dashboard:

Now click “Enable APIs and Services” as shown below:

Type “cloud run” into the search bar; the Cloud Run API will show:

Click on “Cloud Run API” as pictured above, and enable.

Push your Docker Image to the Google Container Registry

Configure Docker

We have a GCP project set up and a docker image (dog-max-path) built locally. Now we need to get that image onto GCP so we can use it. Specifically, we need to get that image onto the Google Container Registry, where Docker images for your GCP project are stored.

When I was personally figuring out how to do this I found the official documentation incomplete; I will close that gap in this article. The critical step not mentioned in the documentation at the time of this writing is configuring Docker on your local machine to communicate with and authenticate into your GCP project. Without this step any attempt to push our image onto the Google Container Registry will fail.

First, let’s download the gcloud sdkhere. Note that the sdk requires Python to be installed on your machine, if you don’t already have it installed.

Now open your terminal, or cmd on windows, to authenticate into your GCP account using this command:

gcloud auth login

Now in your same command line session install the Docker credential helper using this command:

gcloud components install docker-credential-gcr

Expect the output from the above command to look as follows:

Now, configure Docker:

docker-credential-gcr configure-docker

Push to the Google Container Registry

Now, and only now, can we push an image to the Google Container Registry. Note that the Google Container Registry is specific to your project; it is NOT public. Open your terminal, or powershell on Windows, and ensure Docker is running locally.

Now let’s tag our Docker image with our GCP project id (woof-optimization in my case):

docker tag dog-max-path us.gcr.io/woof-optimization/dog-max-path

Note that you can change the us.gcr.io part of the above command, depending on where you would like your image to be hosted. I will use us.gcr.io to host my image in the United States, but replace that with the appropriate value in the next command if you have changed this option.

Let’s push our image to the Google Container Registry:

docker push us.gcr.io/woof-optimization/dog-max-path

Note that the latest version of your image is pushed by default. If you push the image tag multiple times you will see each version appear on the Google Container Registry.

A technical note on costs: Keeping images on the Google Container Registry costs literal pennies per month. To avoid recurring costs, you can remove images from your container registry at any time by going here and clicking the Go to Console button. Then you would click the containing folder, select the image, and click the Delete button.

Create a Service on Cloud Run

We’re done writing commands; we’re a few clicks away from an optimization API!

Our Docker image is available on the Google Container Registry, which means Cloud Run has access to it. Go to the Cloud Run console (I can never find it, but just type “Google Cloud Run” into Google and click the first hit).

Now we’ll use the Graphical User Interface (GUI) shown to deploy our optimization API. Under “Service Settings” select “Cloud Run (Fully Managed),” which is selected by default. Next, select a region hosting the servers running the API. I selected us-east1 because that region is closest to me.

Give your service a name. I’m calling mine “woofington.” Click “Next.”

Next, select a Docker image to deploy. All that work configuring Docker and pushing an image was for this moment! By default, “deploy one revision from an existing container image” is selected; leave it selected. Click “Select.” This will show any Docker images you’ve pushed to the Google Container Registry.

Click on us.gcr.io/woof-optimization/dog-max-path and click latest. This selects the latest revision of the Docker image we pushed. Now click the “Select” button.

Now click “Advanced Settings.” The container port should be set to 8080 to match the port we set when creating our plumber API in our api.R file. Leave all other Settings in the General section as they are (including blank if they default to blank). Scroll down.

Let’s set the memory and CPU specs for our optimization API under “Capacity.” These settings set the memory and CPU allocation for each instance (remote server) running our API. Our API isn’t that memory intensive, so I set memory to 1 GiB (why is memory expressed in GiB rather than GB? No idea).

In contrast to memory, optimization is CPU-intensive. Additionally, we set the threads option for CBC to 2 in our Pyomo model. That means when the model runs CBC will use 2 threads. Consequently, let’s set CPU Allocated to 2 for consistency with our Pyomo model. At the time of this writing we can set CPUs up to 4.

Set the request timeout as you like. I left the default of 300 seconds, meaning if the API call does not complete in 300 seconds then the instance will shut down. This prevents long API calls from hogging resources and costing money.

Set the Maximum Requests per Container to 1. This is important, to ensure that each API request, aka each person who needs to solve the Puppy Petting Optimization Problem, gets the full 2 CPUs we set above. Otherwise, it would be possible for multiple people to be using the same machine and not getting the full 2 CPUs for parallel mixed-integer programming solving.

It’s imperative to dedicate each request to its own container because unlike prediction APIs I’ve come across, an optimization API has to handle computational intensity at run-time. Specifically, our API has to both construct and solve a model each time the API is invoked. This takes considerably more CPU and memory resources than a prediction using a model already fit prior to the API’s existence.

Under Autoscaling, set Maximum number of instances to be <= 1000. It’s important to leave the minimum at 0 to ensure you aren’t billed (nor burning through free Cloud Run allowance) when no API requests are coming in. In other words, we only want to self-bake and serve cakes when people order a cake! The max defaults to 100, which is fine for me. If you’re expecting a lot of traffic then you can increase that number to 1,000.

It’s worth discussing autoscaling for a minute. It’s pretty amazing. Cloud Run turns machines on and off automatically; we don’t need to worry about it at all! This incredible feature means we only incur costs or free credits when our optimization API is being used. Otherwise, servers turn off. Since we set the maximum request per container to 1, each API request implies the following:

  1. A new machine will turn on to fulfill that API request. In other words, each person showing up wanting some Puppy Petting Optimization cake gets their own full cake (i.e., remote server).
  2. That server will be on for a maximum of 5 minutes (or whatever timeout you set).
  3. We could serve up to 1,000 cakes at the same time! 😱

Click the “Next” button.

The ingress setting controls what traffic is permitted to connect to your API. Specifically, what IP addresses will we allow to connect? Let’s keep it simple for now and allow all traffic. (We instead could whitelist certain IP addresses)

The Authentication setting determines whether our optimization API is private or public. Put another way, can anyone use our API or does a user have to authenticate with credentials of some kind? “Allow unauthenticated invocations” means anyone can access our API if they know its url, whereas “require authentication” means credentials are required to access our API.

Let’s make our API private by selecting the “require authentication” option. This requires a bit more setup, but the private internal use is worth it. Requiring authentication means our optimization API truly remains ours for personal or even commercial use.

Now click the “Create” button. Wait for it…

Our API is online!!!

That green checkbox next to “Woofington” means our API is up and running. We can look at logs by clicking “Logs.”

We can also see the url for our optimization API. We’ll need this in order to connect to it.

You’ll notice that the url begins with https. This is awesome. It means that out of the box Cloud Run made our API secure using https protocol. Had we not deployed on Cloud Run, but instead using some other platform, we would have to set this up ourselves since plumber defaults to http.

Connect to your Private API

The steps described in this section are unnecessary in the case of a public API (i.e., if you had selected “Allow unauthenticated invocations” in the previous section). To connect to a public API all we need to run is code like the following:
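
For example (with a made-up service url; use the one shown on your Cloud Run service page):

library(httr)

public_url <- "https://woofington-abc123-ue.a.run.app"

woof_test <- POST(
  url = paste0(public_url, "/mathadelic_woof"),
  body = list(zipfile = upload_file("dog_path_inputs.zip")),
  write_disk("woof_output.zip", overwrite = TRUE)
)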

If you created a private service, as I have in the section above, read on. Setup requires multiple steps that must be completed just once.

Warning: The setup steps here work at the time of this writing, but it’s possible these steps could change in the future. This setup depends on GCP as well as the googleCloudRunner R package.

If our API is private, we need a way to authenticate. We'll use the googleCloudRunner R package to create JWTs (JSON web tokens), allowing us to connect. The package was created by Mark Edmondson, and his blogging about R on GCP is how I discovered Google Cloud Run in the first place!

To authenticate, we’ll need to create an authentication json file. The steps outlined below are based on the EXCELLENT guide provided by Mark, which includes screenshots as well as a tutorial video. I recommend following Mark’s guide (linked just above), but will provide steps here so this article is self-contained. Thanks to Mark for putting this together. There is a complete guide to set up the package provided on the googleCloudRunner website, with a video to follow along. These steps are outlined in screenshots here and from 2:52–12:21 in Mark’s video guide.

Note that all this happens within our Google Cloud Project, and does not require R to be open at all.

  1. Go to the project you created for the optimization API (e.g., mine was called woof-optimization). This is located in your Google Cloud console.
  2. Go to APIs and Services from the sidebar (3 horizontal lines)
  3. Click Credentials. Our goal is to create an OAuth 2.0 Client ID.
  4. Click the Create Credentials ⇒ OAuth Client ID
  5. Click Configure Consent Screen
  6. Under User Type I would recommend clicking External, because then users outside your G-Suite organization can authenticate into and invoke your API
  7. Click the Create button
  8. Fill out the form shown, entering a name and contact email (an email connected to the GCP project). Though as Mark says, this information will not be directly used. This is a step on the way to generating an auth.json file.
  9. Click Save and Continue
  10. Click Add or Remove Scopes
  11. Check the box for /auth/cloud-platform. At the time of this writing the options all mention BigQuery; this is a bug in the menu shown. Don't worry.
  12. Click the Update button.
  13. Add users. I recommend you add your own Google account that’s connected to the GCP project.
  14. Since we’ve set up the OAuth in previous steps, we can now create credentials. On the sidebar click Credentials.
  15. Click the Create Credentials ⇒ OAuth Client ID
  16. Under Application Type select Desktop App
  17. Type in a name
  18. Click the Create button. DO NOT SHARE OR PUBLISH THIS INFORMATION!
  19. Now we have an OAuth Client ID, and want to download a json file with this information. Click on the download button with the downward arrow icon.

This action downloads a json file. The file will have a long character string before the .json extension. Rename this to something like googlecloudrunner.json.

20. We need to enable a few more APIs on our Google Console. Make sure you’re logged into the Gmail account linked to your GCP project. Now go to the following urls and click Enable API:
https://console.developers.google.com/apis/api/iam.googleapis.com/overview
https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/overview
https://console.cloud.google.com/apis/api/cloudbuild.googleapis.com/overview
https://console.developers.google.com/apis/api/run.googleapis.com/overview (this is the Cloud Run API, which should already be enabled. But if it isn’t, enable it now)
An example screenshot is provided below to show enabling one of these APIs:

With googlecloudrunner.json generated and downloaded locally, return to your local machine and open RStudio. Install the googleCloudRunner R package by running the following command in the R console:

install.packages("googleCloudRunner")

Note: if you have issues installing googleCloudRunner, you may need to install a system dependency called libgit2. You can install this via homebrew on a Mac using the following command:

brew install libgit2

Now that googleCloudRunner is installed, let’s use it to set the following environment variables in our .Renviron file using the cr_setup() function:

  • GCE_DEFAULT_PROJECT_ID
  • CR_REGION
  • GAR_CLIENT_JSON
  • GCE_AUTH_FILE

Before moving on, have your GCP project id (the name in your GCP console) and region handy; you will need these during setup.

Load googleCloudRunner using library(googleCloudRunner) in your R session. Now run the function cr_setup(), which triggers a setup wizard. Options throughout the wizard are numbered, but the numbers sometimes change. Consequently, I will refer to options by name rather than number.

The setup wizard triggered by the cr_setup() function should look like the following:

Note that running cr_setup() will configure environment variables stored in a .Renviron file. Anytime .Renviron is updated, we’ll need to restart R. This is why you will have to restart R, run library(googleCloudRunner), and run cr_setup() more than once throughout this setup process.

First, we’ll configure our GCP Project Id. This is the name of your project in your GCP Console at the top of the blue bar of the webpage. In my case this is woof-optimization. The setup wizard will now update your .Renviron file automatically with the environment variable GCE_DEFAULT_PROJECT_ID. With the .Renviron file updated we need to restart R, load the package using library(googleCloudRunner), and run the cr_setup() function again.

Next, let’s configure our Cloud Run Region (i.e., the environment variable CR_REGION). Select the corresponding option from the setup wizard. Your region is whatever you set for your Cloud Run API. If you forgot, go to your Cloud Run console and look at your service. Your region is in the Region column. My region is us-east1 ; yours could be different. As before, restart R, reload the package, and run the cr_setup() function.

Finally, let’s generate our authentication json file. For this to work, you must have a client ID json file downloaded, which we completed in the 20 step GCP process above. In my case, my client ID file is googlecloudrunner.json, generated in step (19) above.

In the setup wizard, select the option to Configure Authentication JSON file. First, you will be prompted to browse and select your client ID JSON file.

Next, you will be prompted to allow access to your Google account associated with your Google Cloud Run project in your web browser. Allow access. This will generate an authentication JSON file. I named mine googlecloudrunner-auth-key.json. Restarting R, loading googleCloudRunner, running the setup wizard again, and selecting the Configure Authentication JSON file option again should prompt you to navigate to your authentication json file, thereby setting the path to that file in your .Renviron automatically.

To test that the setup worked, restart R so you’re in a new session and load the package using library(googleCloudRunner). Upon loading the package the console should show a message confirming successful authentication.

Hurray; we’re auto-authenticated upon loading googleCloudRunner!

This means you have a .Renviron file ready to go. If you don't auto-authenticate, you can manually update your .Renviron to look something like the following:

GCE_AUTH_FILE=/Users/me/auth/googlecloudrunner-auth-key.json
GCE_DEFAULT_PROJECT_ID=woof-optimization
CR_REGION=us-east1

Note: If your .Renviron file looks correct, but you’re still not automatically authenticated upon loading the package, then it’s possible R isn’t reading the .Renviron file you updated. If this is the case, tell R to read the .Renviron file explicitly using the readRenviron() function with the path to your .Renviron file before running library(googleCloudRunner).

Whew, that took quite a bit of setup! What’s the payoff?

  • Our optimization API is private: Knowing the url isn’t enough to invoke it. This means we can use this API in both personal and commercial contexts.
  • One Time Setup: We can build as many private Cloud Run APIs as we like for the same GCP project, with no further setup or configuration.

Step 6: Invoke Your API

Now let’s run a test to invoke our deployed API. We’ll use the httr and googleCloudRunner packages. If httr is not installed, run install.packages("httr") before moving on.

Open RStudio in a fresh R session and run the following:

library(httr)
library(googleCloudRunner)

To connect to our optimization API we’ll need the following:

  1. The url and API endpoint
  2. A jwt (JSON web token)

To find your API’s url look at the service on Cloud Run. The port must match the one we set when creating our API, 8080.

By authenticating upon loading the googleCloudRunner package, we can use the cr_jwt_create() and cr_jwt_token() functions to create a jwt. We’ll use these functions in tandem with the POST() function from httr we saw before.

Before invoking our API we need a zip file to upload with our POST request. I’ll use the same one as before, when I tested locally outside of Docker. You can find this zip file on Github. In R set your working directory to where the zip file is located using setwd().

Now run the following code snippet:
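
Here's a sketch; the url is made up, and cr_jwt_with_httr() below is the googleCloudRunner helper that attaches the token to the httr call (check the package docs for the exact helpers in your version):

# The base url of the Cloud Run service, copied from the console
the_url <- "https://woofington-abc123-ue.a.run.app"

# Create a jwt for that audience, then exchange it for a token
jwt   <- cr_jwt_create(the_url)
token <- cr_jwt_token(jwt, the_url)

# Make the authenticated POST request to our endpoint
woof_test <- cr_jwt_with_httr(
  POST(
    url = paste0(the_url, "/mathadelic_woof"),
    body = list(zipfile = upload_file("dog_path_inputs.zip")),
    write_disk("woof_output.zip", overwrite = TRUE)
  ),
  token
)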

Note: If you get errors running the above code it’s possible that quotation marks copied over in an unworkable format to RStudio. If this is the case, replace them with quotation marks typed in RStudio itself.

This code snippet does the following:

  1. Makes a POST request
  2. Specifies the url to our API, including the endpoint /mathadelic_woof
  3. Provides the path to the zip file we’re uploading using the body argument
  4. Names the solution zip file returned by the API woof_output.zip, and writes that file to local disk. A file with the same name will be overwritten using the overwrite = TRUE argument

When we run it what happens? We can check if the API request succeeded by running woof_test$status_code. A status code of 200 means it worked, whereas 500 indicates failure, and 404 indicates that the url could not be found.

We see a status code of 200 💥

You can also check the directory where the uploaded zip file is located and you’ll discover woof_output.zip. It worked.

Now that’s mathadelic!

Last Thoughts

If you’ve made it this far, let’s reflect on what we’ve accomplished. We’ve built an optimization API from scratch, picking up Plumber, Docker, and Google Cloud Run along the way. That means our optimization model can be made accessible to both users and applications. Now what began as an optimization model on one computer can be available to thousands all at once!

This means users or applications could leverage the model for its results directly, or work the model into subsequent workflows. For example, maybe we’d like to visualize our maximum puppy petting walking path in another application that has a maps feature.

I crafted this approach when I wanted to productize my mixed integer programming models. Any feedback on both how this approach could be improved or extended, as well as how readers benefited would be greatly appreciated. If you’ve built a nifty optimization API using this article I want to hear about it!

Major thanks to Ryan Park for all his help both editing this article and testing the code!

Dockerize. Optimize. Bake software cakes!
