drake

An R-focused pipeline toolkit for reproducibility and high-performance computing


The drake R package

Data analysis can be slow. A round of scientific computation can take
several minutes, hours, or even days to complete. After it finishes, if
you update your code or data, your hard-earned results may no longer be
valid. How much of that valuable output can you keep, and how much do
you need to update? How much runtime must you endure all over again?

For projects in R, the drake package can help. It analyzes your
workflow, skips steps with up-to-date results, and orchestrates the
rest with optional distributed computing. At the end, drake provides
evidence that your results match the underlying code and data, which
increases your ability to trust your research.

Videos

Visit the first page of the manual to watch a 6-minute introduction.

The rOpenSci Community Call from 2019-09-24 is a much longer
presentation on drake (20 min talk, 35 min Q&A). Visit the call’s page
for links to additional resources, and chime in to propose and vote
for ideas for new Community Call topics and speakers.

What gets done stays done.

Too many data science projects follow a Sisyphean loop:

  1. Launch the code.
  2. Wait while it runs.
  3. Discover an issue.
  4. Rerun from scratch.

For projects with long runtimes, people tend to get stuck.

But with drake, you can automatically:

  1. Launch the parts that changed since last time.
  2. Skip the rest.

How it works

To set up a project, load your packages,

library(drake)
library(dplyr)
library(ggplot2)

load your custom functions,

create_plot <- function(data) {
  ggplot(data, aes(x = Petal.Width, fill = Species)) +
    geom_histogram()
}

check any supporting files (optional),

# Get the files with drake_example("main").
file.exists("raw_data.xlsx")
#> [1] TRUE
file.exists("report.Rmd")
#> [1] TRUE

and plan what you are going to do.

plan <- drake_plan(
  raw_data = readxl::read_excel(file_in("raw_data.xlsx")),
  data = raw_data %>%
    mutate(Species = forcats::fct_inorder(Species)),
  hist = create_plot(data),
  fit = lm(Sepal.Width ~ Petal.Width + Species, data),
  report = rmarkdown::render(
    knitr_in("report.Rmd"),
    output_file = file_out("report.html"),
    quiet = TRUE
  )
)
plan
#> # A tibble: 5 x 2
#>   target   command                                                              
#>   <chr>    <expr>                                                               
#> 1 raw_data readxl::read_excel(file_in("raw_data.xlsx"))                        …
#> 2 data     raw_data %>% mutate(Species = forcats::fct_inorder(Species))        …
#> 3 hist     create_plot(data)                                                   …
#> 4 fit      lm(Sepal.Width ~ Petal.Width + Species, data)                       …
#> 5 report   rmarkdown::render(knitr_in("report.Rmd"), output_file = file_out("re…

So far, we have just been setting the stage. Use make() to do the real
work. Targets are built in the correct order regardless of the row order
of plan.

make(plan)
#> target raw_data
#> target data
#> target fit
#> target hist
#> target report

Except for files like report.html, your output is stored in a hidden
.drake/ folder. Reading it back is easy.

readd(data) # See also loadd().
#> # A tibble: 150 x 5
#>    Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#>           <dbl>       <dbl>        <dbl>       <dbl> <fct>  
#>  1          5.1         3.5          1.4         0.2 setosa 
#>  2          4.9         3            1.4         0.2 setosa 
#>  3          4.7         3.2          1.3         0.2 setosa 
#>  4          4.6         3.1          1.5         0.2 setosa 
#>  5          5           3.6          1.4         0.2 setosa 
#>  6          5.4         3.9          1.7         0.4 setosa 
#>  7          4.6         3.4          1.4         0.3 setosa 
#>  8          5           3.4          1.5         0.2 setosa 
#>  9          4.4         2.9          1.4         0.2 setosa 
#> 10          4.9         3.1          1.5         0.1 setosa 
#> # … with 140 more rows

You may look back on your work and see room for improvement, but it’s
all good! The whole point of drake is to help you go back and change
things quickly and painlessly. For example, we forgot to give our
histogram a bin width.

readd(hist)
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

So let’s fix the plotting function.

create_plot <- function(data) {
  ggplot(data, aes(x = Petal.Width, fill = Species)) +
    geom_histogram(binwidth = 0.25) +
    theme_gray(20)
}

drake knows which results are affected.

vis_drake_graph(plan) # Interactive graph: zoom, drag, etc.

The next make() just builds hist and report.html. No point in
wasting time on the data or model.

make(plan)
#> target hist
#> target report
loadd(hist)
hist

Reproducibility with confidence

The R community emphasizes reproducibility. Traditional themes include
scientific replicability, literate programming with knitr, and version
control with git. But internal consistency is important too.
Reproducibility carries the promise that your output matches the code
and data you say you used. With the exception of non-default triggers
and hasty mode, drake strives to keep this promise.
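
For illustration, here is a minimal sketch of a non-default trigger,
which deliberately opts a target out of the usual invalidation rules
(download_data() is a hypothetical import function):

plan <- drake_plan(
  remote_data = target(
    download_data(), # hypothetical function that pulls external data
    # condition = TRUE forces a rerun on every make(), even when the
    # command and dependencies have not changed.
    trigger = trigger(condition = TRUE)
  )
)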

Evidence

Suppose you are reviewing someone else’s data analysis project for
reproducibility. You scrutinize it carefully, checking that the datasets
are available and the documentation is thorough. But could you re-create
the results without the help of the original author? With drake, it is
quick and easy to find out.

make(plan)
#> unload targets from environment:
#>    hist
#> All targets are already up to date.

outdated(plan)
#> character(0)

With everything already up to date, you have tangible evidence of
reproducibility. Even though you did not re-create the results, you
know the results are re-creatable. They faithfully show what the code
is producing. Given the right package environment and system
configuration, you have everything you need to reproduce all the
output by yourself.

Ease

When it comes time to actually rerun the entire project, you have much
more confidence. Starting over from scratch is trivially easy.

clean()    # Remove the original author's results.
make(plan) # Independently re-create the results from the code and input data.
#> target raw_data
#> target data
#> target fit
#> target hist
#> target report

Big data efficiency

Select specialized data formats to increase speed and reduce memory
consumption. In version 7.5.2.9000 and above, the available formats
are “fst” for data frames (example below) and “keras” for Keras models
(see the manual for a Keras example).

library(drake)
n <- 1e8 # Each target is 1.6 GB in memory.
plan <- drake_plan(
  data_fst = target(
    data.frame(x = runif(n), y = runif(n)),
    format = "fst"
  ),
  data_old = data.frame(x = runif(n), y = runif(n))
)
make(plan)
#> target data_fst
#> target data_old
build_times(type = "build")
#> # A tibble: 2 x 4
#>   target   elapsed              user                 system    
#>   <chr>    <Duration>           <Duration>           <Duration>
#> 1 data_fst 13.93s               37.562s              7.954s    
#> 2 data_old 184s (~3.07 minutes) 177s (~2.95 minutes) 4.157s

History and provenance

As of version 7.5.2, drake tracks the history and provenance of your
targets: what you built, when you built it, how you built it, the
arguments you used in your function calls, and how to get the data back.
(Disable with make(history = FALSE).)

history <- drake_history(analyze = TRUE)
history
#> # A tibble: 12 x 10
#>    target  current built  exists hash  command    seed runtime quiet output_file
#>    <chr>   <lgl>   <chr>  <lgl>  <chr> <chr>     <int>   <dbl> <lgl> <chr>      
#>  1 data    TRUE    2019-… TRUE   e580… "raw_da… 1.29e9 0.002   NA    <NA>       
#>  2 data    TRUE    2019-… TRUE   e580… "raw_da… 1.29e9 0.001   NA    <NA>       
#>  3 fit     TRUE    2019-… TRUE   486f… "lm(Sep… 1.11e9 0.00500 NA    <NA>       
#>  4 fit     TRUE    2019-… TRUE   486f… "lm(Sep… 1.11e9 0.00200 NA    <NA>       
#>  5 hist    FALSE   2019-… TRUE   22a2… "create… 2.10e8 0.011   NA    <NA>       
#>  6 hist    TRUE    2019-… TRUE   6909… "create… 2.10e8 0.00700 NA    <NA>       
#>  7 hist    TRUE    2019-… TRUE   6909… "create… 2.10e8 0.00900 NA    <NA>       
#>  8 raw_da… TRUE    2019-… TRUE   6317… "readxl… 1.20e9 0.0110  NA    <NA>       
#>  9 raw_da… TRUE    2019-… TRUE   6317… "readxl… 1.20e9 0.008   NA    <NA>       
#> 10 report  TRUE    2019-… TRUE   5251… "rmarkd… 1.30e9 0.969   TRUE  report.html
#> 11 report  TRUE    2019-… TRUE   5251… "rmarkd… 1.30e9 0.592   TRUE  report.html
#> 12 report  TRUE    2019-… TRUE   5251… "rmarkd… 1.30e9 0.619   TRUE  report.html

Remarks:

  • The quiet column appears above because one of the drake_plan()
    commands sets quiet = TRUE in rmarkdown::render().
  • The hash column identifies all the previous versions of your
    targets. As long as exists is TRUE, you can recover old data.
  • Advanced: if you use make(cache_log_file = TRUE) and put the cache
    log file under version control, you can match the hashes from
    drake_history() with the git commit history of your code. A sketch
    follows this list.
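
As a sketch of that advanced workflow, assuming the project is already
a git repository (the log file name is arbitrary):

make(plan, cache_log_file = "cache_log.txt") # plain-text log of target hashes
# Committing the log means each git commit records which data went
# with which code.
system("git add cache_log.txt")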

Let’s use the history to recover the oldest histogram.

hash <- history %>%
  filter(target == "hist") %>%
  pull(hash) %>%
  head(n = 1)
cache <- drake_cache()
cache$get_value(hash)
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

Reproducible data recovery and renaming

Remember how we made that change to our histogram? What if we want to
change it back? If we revert create_plot(), make(plan, recover = TRUE) restores the original plot.

create_plot <- function(data) {
  ggplot(data, aes(x = Petal.Width, fill = Species)) +
    geom_histogram()
}

# The report still needs to run in order to restore report.html.
make(plan, recover = TRUE)
#> recover hist
#> target report

readd(hist) # old histogram
#> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

drake’s data recovery feature is another way to avoid rerunning
commands. It is useful if:

  • You want to revert to your old code.
  • You accidentally clean() a target and want to get it back.
  • You want to rename an expensive target.

In version 7.5.2 and above, make(recover = TRUE) can salvage the
values of old targets. Before building a target, drake checks if you
have ever built something else with the same command, dependencies,
seed, etc. that you have right now. If appropriate, drake assigns the
old value to the new target instead of rerunning the command.
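
To check in advance which targets are salvageable, use recoverable()
(output omitted here because it depends on the current state of your
cache):

recoverable(plan) # outdated targets that make(recover = TRUE) can restore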

Caveats:

  1. This feature is still experimental.
  2. Recovery may not be a good idea if your external dependencies have
    changed a lot over time (R version, package environment, etc.).

Undoing clean()

# Is the data really gone?
clean()

# Nope! You need clean(garbage_collection = TRUE) to delete stuff.
make(plan, recover = TRUE)
#> recover raw_data
#> recover data
#> recover fit
#> recover hist
#> recover report

# When was the raw data *really* first built?
diagnose(raw_data)$date
#> [1] "2019-12-25 00:15:19.668259 -0500 GMT"

Renaming

You can use recovery to rename a target. The trick is to supply the
random number generator seed that drake used with the old target name.
Also, renaming a target unavoidably invalidates downstream targets.

# Get the old seed.
old_seed <- diagnose(data)$seed

# Now rename the data and supply the old seed.
plan <- drake_plan(
  raw_data = readxl::read_excel(file_in("raw_data.xlsx")),
  
  # Previously just named "data".
  iris_data = target(
    raw_data %>%
      mutate(Species = forcats::fct_inorder(Species)),
    seed = !!old_seed
  ),

  # `iris_data` will be recovered from `data`,
  # but `hist` and `fit` have changed commands,
  # so they will build from scratch.
  hist = create_plot(iris_data),
  fit = lm(Sepal.Width ~ Petal.Width + Species, iris_data),
  report = rmarkdown::render(
    knitr_in("report.Rmd"),
    output_file = file_out("report.html"),
    quiet = TRUE
  )
)

make(plan, recover = TRUE)
#> recover iris_data
#> target fit
#> target hist
#> target report

Independent replication

With even more evidence and confidence, you can invest the time to
independently replicate the original code base if necessary. Up until
this point, you relied on basic drake functions such as make(), so
you may not have needed to peek at any substantive author-defined code
in advance. In that case, you can stay usefully ignorant as you
reimplement the original author’s methodology. In other words, drake
could potentially improve the integrity of independent replication.

Readability and transparency

Ideally, independent observers should be able to read your code and
understand it. drake helps in several ways.

  • The drake plan explicitly outlines the steps of the analysis, and
    vis_drake_graph() visualizes how those steps depend on each other.
  • drake takes care of the parallel scheduling and high-performance
    computing (HPC) for you. That means the HPC code is no longer
    tangled up with the code that actually expresses your ideas.
  • You can generate large collections of targets without necessarily
    changing your code base of imported functions, another nice
    separation between the concepts and the execution of your
    workflow. A sketch follows this list.
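
For instance, static branching declares a whole family of targets in
one stroke. A minimal sketch, where get_data() and fit_model() are
hypothetical user-defined functions:

plan <- drake_plan(
  data = get_data(), # hypothetical import function
  # transform = map() expands this single declaration into
  # three targets: fit_1, fit_2, and fit_3.
  fit = target(
    fit_model(data, degree), # hypothetical modeling function
    transform = map(degree = c(1, 2, 3))
  )
)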

Scale up and out.

Not every project can complete in a single R session on your laptop.
Some projects need more speed or computing power. Some require a few
local processor cores, and some need large high-performance computing
systems. But parallel computing is hard. Your tables and figures depend
on your analysis results, and your analyses depend on your datasets, so
some tasks must finish before others even begin. drake knows what to
do. Parallelism is implicit and automatic. See the high-performance
computing guide for all the details.

# Use the spare cores on your local machine.
make(plan, jobs = 4)

# Or scale up to a supercomputer.
drake_hpc_template_file("slurm_clustermq.tmpl") # https://slurm.schedmd.com/
options(
  clustermq.scheduler = "slurm", # tell clustermq to use the SLURM scheduler
  clustermq.template = "slurm_clustermq.tmpl"
)
make(plan, parallelism = "clustermq", jobs = 4)

With Docker

drake and Docker are compatible and complementary. Here are some
examples that run drake inside a Docker image.

Alternatively, it is possible to run drake outside Docker and use the
future package to send targets to a Docker image. drake’s Docker-psock
example demonstrates how. Download the code with
drake_example("Docker-psock").

Installation

You can choose among different versions of drake. The CRAN release
often lags behind the online manual
but may have fewer bugs.

# Install the latest stable release from CRAN.
install.packages("drake")

# Alternatively, install the development version from GitHub.
install.packages("devtools")
library(devtools)
install_github("ropensci/drake")

Function reference

The reference section lists all the available functions. Here are the
most important ones.

  • drake_plan(): create a workflow data frame (like my_plan).
  • make(): build your project.
  • drake_history(): show what you built, when you built it, and the
    function arguments you used.
  • r_make(): launch a fresh callr::r() process to build your
    project. Called from an interactive R session, r_make() is more
    reproducible than make(). A sketch of the _drake.R file it expects
    follows this list.
  • loadd(): load one or more built targets into your R session.
  • readd(): read and return a built target.
  • vis_drake_graph(): show an interactive visual network
    representation of your workflow.
  • recoverable(): see which targets you can salvage with
    make(recover = TRUE) (experimental).
  • outdated(): see which targets will be built in the next make().
  • deps(): check the dependencies of a command or function.
  • failed(): list the targets that failed to build in the last
    make().
  • diagnose(): return the full context of a build, including errors,
    warnings, and messages.
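
As noted in the r_make() entry above, r_make() expects a configuration
file called _drake.R at the project root and runs the drake_config()
object it returns. A minimal sketch, where the sourced functions are
hypothetical:

# _drake.R
library(drake)
source("R/functions.R") # hypothetical file defining get_data() and analyze()
plan <- drake_plan(
  data = get_data(),
  analysis = analyze(data)
)
drake_config(plan) # r_make() runs the workflow described by this config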

Documentation

Use cases

The official rOpenSci use cases and
associated discussion threads
describe applications of drake in action. Here are some more
applications of drake in real-world
projects.

Help and troubleshooting

The following resources document many known issues and challenges.

If you are still having trouble, please submit a new issue with a bug
report or feature request, along with a minimal reproducible example
where appropriate.

The GitHub issue tracker is mainly intended for bug reports and
feature requests. While questions about usage etc. are also highly
encouraged, you may alternatively wish to post to Stack Overflow and
use the drake-r-package tag.

Contributing

Development is a community effort, and we encourage participation.
Please read
CONTRIBUTING.md
for details.

Similar work

drake enhances reproducibility and high-performance computing, but not
in all respects. Literate programming, local library managers,
containerization, and strict session managers offer more robust
solutions in their respective domains. And for the problems drake
does solve, it stands on the shoulders of the giants that came before.

Pipeline tools

GNU Make

The original idea of a time-saving reproducible build system extends
back at least as far as GNU Make, which still aids the work of data
scientists as well as the original user base of compiled-language
programmers. In fact, the name “drake” stands for “Data Frames in R
for Make”. Make is used widely in reproducible research; Karl Broman’s
website collects several examples.

Whereas GNU Make is
language-agnostic, drake is fundamentally designed for R.

  • Instead of a Makefile, drake supports an R-friendly
    domain-specific language for declaring targets.
  • Targets in GNU Make are files, whereas targets in drake are
    arbitrary variables in memory. (drake does have opt-in support for
    files via file_out(), file_in(), and knitr_in().) drake caches
    these objects in its own storage system so R users rarely have to
    think about output files.

Remake

remake itself is no longer
maintained, but its founding design goals and principles live on through
drake. In fact,
drake is a direct reimagining of
remake with enhanced scalability,
reproducibility, high-performance computing, visualization, and
documentation.

Factual’s Drake

Factual’s Drake is similar in
concept, but the development effort is completely unrelated to the
drake R package.

Other pipeline tools

There are countless other successful pipeline toolkits. The drake
package distinguishes itself with its R-focused approach,
Tidyverse-friendly interface, and a thorough selection of parallel
computing technologies and scheduling algorithms.

Memoization

Memoization is the strategic caching of the return values of functions.
It is a lightweight approach to the core problem that drake and other
pipeline tools are trying to solve. Every time a memoized function is
called with a new set of arguments, the return value is saved for future
use. Later, whenever the same function is called with the same
arguments, the previous return value is salvaged, and the function call
is skipped to save time. The
memoise package is the primary
implementation of memoization in R.

Memoization saves time for small projects, but it arguably does not go
far enough for large reproducible pipelines. In reality, the return
value of a function depends not only on the function body and the
arguments, but also on any nested functions and global variables, the
dependencies of those dependencies, and so on upstream. drake tracks
this deeper context, while memoise
does not.
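
As a minimal sketch of that limitation, consider a memoised function
that silently depends on a global variable:

library(memoise)
multiplier <- 2
scale_up <- memoise(function(x) x * multiplier)
scale_up(10) # 20: computed and cached
multiplier <- 3
scale_up(10) # still 20: the cache keys on the arguments, not the global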

Literate programming

Literate programming is the practice
of narrating code in plain vernacular. The goal is to communicate the
research process clearly, transparently, and reproducibly. Whereas
commented code is still mostly code, literate
knitr / R
Markdown
reports can become websites,
presentation slides, lecture notes, serious scientific manuscripts, and
even books.

knitr and R Markdown

drake and knitr are symbiotic. drake’s
job is to manage large computation and orchestrate the demanding tasks
of a complex data analysis pipeline.
knitr’s job is to communicate those
expensive results after drake computes them.
knitr / R
Markdown
reports are small pieces of an
overarching drake pipeline. They should focus on communication, and
they should do as little computation as possible.

To insert a knitr report in a drake pipeline, use the knitr_in()
function inside your drake plan, and use loadd() and readd() to refer
to targets in the report itself, as in the sketch below.
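
For example, a chunk inside report.Rmd might look like this. drake
scans the report for loadd() and readd() calls, so fit becomes a
dependency of the report target:

```{r}
library(drake)
fit <- readd(fit) # drake rebuilds the report whenever `fit` changes
summary(fit)
```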

Version control

drake is not a version control tool. However, it is fully compatible
with git,
svn, and similar
software. In fact, it is good practice to use
git alongside drake for reproducible
workflows.

However, data poses a challenge. The datasets created by make() can
get large and numerous, and it is not recommended to put the .drake/
cache or the .drake_history/ logs under version control. Instead, it
is recommended to use a data storage solution such as Dropbox or OSF.

Containerization and R package environments

drake does not track R packages or system dependencies for changes.
Instead, it defers to tools like Docker,
Singularity,
renv, and
packrat, which create
self-contained portable environments to reproducibly isolate and ship
data analysis projects. drake is fully compatible with these tools.
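
For instance, a minimal sketch of pairing drake with renv (the calls
below are standard renv usage, not drake functions):

renv::init()     # create a project-local package library
# ... develop the project and run make(plan) as usual ...
renv::snapshot() # record exact package versions in renv.lock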

workflowr

The workflowr package is a
project manager that focuses on literate programming, sharing over the
web, file organization, and version control. Its brand of
reproducibility is all about transparency, communication, and
discoverability. For an example of workflowr and drake working
together, see this machine learning project by Patrick Schratz.

Acknowledgements

Special thanks to Jarad Niemi, my advisor from
graduate school, for first introducing me
to the idea of Makefiles for
research. He originally set me down the path that led to drake.

Many thanks to Julia Lowndes, Ben Marwick, and Peter Slaughter for
reviewing drake for rOpenSci, and to Maëlle Salmon for such active
involvement as the editor. Thanks also to the following people for
contributing early in development.

Credit for images is attributed here.

