
Deploying an R REST API in production

Back in 2018, I had to deploy an R API on Kubernetes. It led to a quick presentation at R à Québec 2019 which is still available here. Since then, I have deployed half a dozen more solutions while refining the deployment pipeline. In my opinion, R is a very viable solution as the stack stays the same through the whole data science cycle. This is more or less the workflow I use.

Organize code

The first thing I do is encapsulate the feature engineering, the prediction code and the artifacts in an R package. Compared to regular scripts, packages offer better dependency management and better unit test integration with testthat. Plus, they can be reused by other components more easily.
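As a rough illustration, a unit test for the prediction code could look like this (mypackage and the shape of its predict() output are placeholders, not the actual package):

# tests/testthat/test-predict.R
library(testthat)

test_that("predict returns one score per input row", {
  input <- data.frame(x1 = c(0.2, 0.7), x2 = c(1L, 3L))
  scores <- mypackage::predict(input)
  expect_length(scores, nrow(input))
  expect_true(all(is.finite(scores)))
})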

Exposing R

The target is to interact with R via a REST API. Several packages provide this functionality, notably plumber, OpenCPU and RestRserve.

RestRserve has a disclaimer about being mostly tested on UNIX, and OpenCPU does not offer the same flexibility in defining API endpoints. plumber is integrated in RStudio, has a nice decoration syntax à la roxygen2 and works on all platforms. I think any of them could work, but plumber's integration is the easiest1.

All you need in your plumber.R file is something like this.

#* @post /whatever/you/want
function(input) {
  mypackage::predict(input)
}

or, using the programmatic interface,

library(plumber)

f <- function(input) { mypackage::predict(input) }
pr <- pr() %>% pr_post("/whatever/you/want", f)
pr$run(host = "0.0.0.0", port = 8004)
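Once the API is up, consumers just send JSON over HTTP. A quick smoke test from R might look like this (the payload shape is hypothetical and depends on what the handler expects):

library(httr)

resp <- POST(
  "http://localhost:8004/whatever/you/want",
  body   = list(input = list(x1 = 0.2, x2 = 3)),
  encode = "json"
)
content(resp)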

Container setup

Containers are built from a shared base image2 to minimize compile time. I initially size the deployment pods at 1.5x the memory consumption of a local R process running the same API. For CPU, since it works a bit differently, I set a limit equivalent to a maximum of 1 second of execution; API consumers time out after that anyway. Any kind of multithreading is disabled as it does not offer much benefit in this context3.
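Concretely (following footnote 3), the threading knobs end up pinned to a single thread. A sketch of what that looks like at process startup; in practice OMP_NUM_THREADS is usually set as an environment variable in the image itself:

# keep OpenMP-based libraries such as xgboost on a single thread
Sys.setenv(OMP_NUM_THREADS = "1")
# data.table manages its own thread pool
data.table::setDTthreads(1)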

Documentation

Every API is deployed with its openapi.json file generated by plumber. I further customize the specification to fully describe input parameters. I also add custom hooks for debug mode and logging.
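A minimal sketch of that spec customization, assuming plumber's pr_set_api_spec() helper (the endpoint path and descriptions are placeholders):

library(plumber)

pr <- plumb() %>%
  pr_set_api_spec(function(spec) {
    # enrich the spec plumber generates before it is served at /openapi.json
    spec$info$title <- "Scoring API"
    spec$paths[["/whatever/you/want"]]$post$summary <- "Score one or more records"
    spec
  })

The hooks are registered on the router itself; in debug mode the full request and response payloads are logged: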

library(plumber)

pr <- plumb()  # picks up plumber.R (or entrypoint.R) in the working directory

postroute <- function(req) {
  cat("[", req$REQUEST_METHOD, req$PATH_INFO, "] - REQUEST - ", req$postBody, "\n", sep = "")
}

postserializewithoutpayload <- function(req, res) {
  cat("[", req$REQUEST_METHOD, req$PATH_INFO, "] - RESPONSE - ", res$status, "\n", sep = "")
}

postserializewithpayload <- function(req, res) {
  cat("[", req$REQUEST_METHOD, req$PATH_INFO, "] - RESPONSE - ", res$status, " - BODY - ", res$body, "\n", sep = "")
}

hooklist <- list(postserialize = postserializewithoutpayload)
debughooklist <- list(postserialize = postserializewithpayload, postroute = postroute)

# DBG_ENABLE=TRUE turns on plumber debug mode and full payload logging
if (isTRUE(as.logical(Sys.getenv("DBG_ENABLE", "FALSE")))) {
  pr$setDebug(TRUE)
  pr$registerHooks(debughooklist)
} else {
  pr$registerHooks(hooklist)
}

pr$run(host = "0.0.0.0", port = 8004)

Review

I have been using this setup for three years now. Periodically, I run load tests4 (sketched below) and review usage metrics to adjust the pod configuration. One thing I don’t particularly like is the container size and memory usage. There are strategies to reduce them, like stripping compiled code. Maybe something to explore.
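A load test run with the loadtest package from footnote 4 looks roughly like this (URL, payload and volume are made up):

library(loadtest)

results <- loadtest(
  url     = "http://localhost:8004/whatever/you/want",
  method  = "POST",
  body    = list(input = list(x1 = 0.2, x2 = 3)),
  encode  = "json",
  threads = 4,
  loops   = 50
)
loadtest_report(results, "load_test_report.html")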

Comparable solutions used in the insurance industry are pricier, bulkier and often come as a black box. With this setup, everything is fully exposed: I know what the code does and it is a lot cheaper to run. When something has to be fixed, you can open a pull request or fork the repo. No more waiting for features the vendor promised you three years ago.

Sounds like a win to me.


  1. I became a regular contributor after using it.↩︎

  2. The shared image is built from rstudio/r-base (rather than rocker/r-base:latest) to leverage precompiled Linux binaries from the RStudio public package manager.↩︎

  3. For our particular workloads, mostly data.table manipulation and xgboost models with some geo transformations from sf, we set OMP_NUM_THREADS to 1. Forking roughly quadrupled the average response time.↩︎

  4. loadtest↩︎