One fun application of tree-based methods is abstracting over art and other images. For some really striking examples, take a look at the portfolio of designer Dimitris Ladopoulos. This vignette will show you how to implement the same basic ideas using parttree and a few friends. Here are the packages that we’ll be using.
library(parttree) # This package
library(rpart) # For decision trees
library(magick) # For reading and manipulating images
library(imager) # Another image library, with some additional features
library(ggplot2) # For plotting (used in the later examples)
While the exact details will vary depending on the image at hand, the essential recipe for this type of art abstraction is as follows:
1. Convert the image into a data frame.
2. Split the data frame by RGB colour channel.
3. Fit a decision tree (or similar model) on each colour channel.
4. Use the model (colour) predictions to construct the abstracted art piece.
Our first example will mimic one of Dimitris Ladopoulos’s aforementioned portrait pieces. Specifically, we will abstract over a close-up (eyes only) of Rembrandt Peale’s portrait of his daughter, Rosalba.
We can download a digital image of the original artwork from Wikimedia. I’ll download a high-res version to show off, but feel free to go for a lower resolution if you want to reduce modeling and plotting times.1 However, I will crop the image around Rosalba’s eyes to get a close-up and reduce the overall complexity of the exercise.2
# Download the image
rosalba = image_read("https://upload.wikimedia.org/wikipedia/commons/a/aa/Rembrandt_Peale_-_Portrait_of_Rosalba_Peale_-_Google_Art_Project.jpg")
# Crop around the eyes
rosalba = image_crop(rosalba, "855x450+890+1350")
# Convert to cimg (better for in-memory manipulation)
rosalba = magick2cimg(rosalba)
# Display
rosalba
#> Image. Width: 855 pix Height: 450 pix Depth: 1 Colour channels: 3
plot(rosalba, axes = FALSE)
With our cropped image in hand, let’s walk through the 4-step recipe from above.
Step 1 is converting the image into a data frame.
# Coerce to data frame
rosalba_df = as.data.frame(rosalba)
# Round color values to ease up work for decision tree
rosalba_df$value = round(rosalba_df$value, 4)
head(rosalba_df)
#> x y cc value
#> 1 1 1 1 0.3059
#> 2 2 1 1 0.3059
#> 3 3 1 1 0.2627
#> 4 4 1 1 0.2471
#> 5 5 1 1 0.3137
#> 6 6 1 1 0.2941
Step 2 is splitting the image by RGB colour channel. This is the cc column above, where 1 = Red, 2 = Green, and 3 = Blue.
rosalba_ccs = split(rosalba_df, rosalba_df$cc)
# We have a list of three DFs by colour channel. Uncomment if you want to see:
# str(rosalba_ccs)
Step 3 is fitting a decision tree (or similar model) on each of our colour channel data frames. The tuning parameters that you give your model are a matter of experimentation. Here I’m giving it a low complexity parameter (so we see more variation in the final predictions) and capping each tree at a maximum depth of 20. The next code chunk takes about 15 seconds to run on my laptop, but should be much quicker if you downloaded a lower-res image.
## Start creating a regression tree for each colour channel. We'll adjust some
## control parameters to give us the "right" amount of resolution in the final
## plots.
trees = lapply(
  rosalba_ccs,
  function(d) rpart(value ~ x + y, data=d, control=list(cp=0.00001, maxdepth=20))
)
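As an aside on the experimentation point: if you want a chunkier, more abstract result, simply refit with coarser control parameters. For example, something along these lines (the trees_coarse object and the particular cp and maxdepth values are just for illustration):
## Example of a much coarser fit => fewer, larger partitions in the final image
trees_coarse = lapply(
  rosalba_ccs,
  function(d) rpart(value ~ x + y, data=d, control=list(cp=0.001, maxdepth=5))
)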
Step 4 is using our model (colour) predictions to construct our abstracted art piece. I was a bit glib about it earlier, since it really involves a few sub-steps. First, let’s grab the predictions for each of our trees.
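Something like the following will do the trick. (It’s the same lapply + predict pattern that we’ll reuse for the bonzai example later on.)
# Grab a list of predictions (fitted colour values), one vector per channel tree
pred = lapply(trees, predict)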
We’re going to use these predictions to draw the abstracted (downscaled) version of our image.3 Probably the easiest way to do this is by taking the predictions and overwriting the “value” column of our original (pre-split) rosalba_df data frame. We can then coerce the data frame back into a cimg object, which comes with a bunch of nice plotting methods.
# The pred object is a list, so we convert it to a vector before overwriting the
# value column of the original data frame
rosalba_df$value = do.call("c", pred)
# Convert back into a cimg object, with the predictions providing pixel values
pred_img = as.cimg(rosalba_df)
#> Warning in as.cimg.data.frame(rosalba_df): Guessing image dimensions from
#> maximum coordinate values
Now we’re ready to draw our abstracted art piece. This is also where parttree enters the fray, since it’s what we’ll use to highlight the partitioned areas of the downscaled pixels. Here’s how we can do it using base R plotting functions.
## Maximum/minimum for plotting range as base rect() does not handle Inf well
m = 1000
# get a list of parttree data frames (one for each tree)
pts = lapply(trees, parttree)
#> Warning in node.match(nodes, node): supplied nodes 100000 are not in this tree
## first plot the downscaled image...
plot(pred_img, axes = FALSE)
## ... then layer the partitions as a series of rectangles
lapply(
  pts,
  function(pt) rect(
    pmax(-m, pt$xmin), pmax(-m, pt$ymin), pmin(m, pt$xmax), pmin(m, pt$ymax),
    lwd = 0.06, border = "grey15"
  )
)
#> $`1`
#> NULL
#>
#> $`2`
#> NULL
#>
#> $`3`
#> NULL
We can achieve the same effect with ggplot2, if you prefer.
ggplot() +
  annotation_raster(pred_img, ymin=-Inf, ymax=Inf, xmin=-Inf, xmax=Inf) +
  lapply(trees, function(d) geom_parttree(data = d, lwd = 0.05, col = "grey15")) +
  scale_x_continuous(limits=c(0, max(rosalba_df$x)), expand=c(0,0)) +
  scale_y_reverse(limits=c(max(rosalba_df$y), 0), expand=c(0,0)) +
  coord_fixed(ratio = 0.9) +
  theme_void()
#> Warning in node.match(nodes, node): supplied nodes 100000 are not in this tree
Now that we’ve walked through a detailed example, let’s quickly go through one more. We’re about to get very meta: Visualizing the decision tree partitions of an actual tree image. This also happens to be the basis for the parttree logo, which seems a nice way to bookend the vignette.
Here are the main image conversion, modeling, and prediction steps. These follow the same recipe that we saw in the previous portrait example, so I won’t repeat my explanations.
bonzai = load.image("https://upload.wikimedia.org/wikipedia/commons/thumb/0/0c/Japanese_Zelkova_bonsai_16%2C_30_April_2012.JPG/480px-Japanese_Zelkova_bonsai_16%2C_30_April_2012.JPG")
plot(bonzai, axes = FALSE)
# Coerce to data frame
bonzai_df = as.data.frame(bonzai)
# Round color values to ease up work for decision trees
bonzai_df$value = round(bonzai_df$value, 2)
# Split into colour channels
bonzai_ccs = split(bonzai_df, bonzai_df$cc)
# Fit decision trees on each colour channel DF. Note that I'm deliberately
# truncating the max tree depth to reduce the number of partitions.
bonzai_trees = lapply(
  bonzai_ccs,
  function(d) rpart(value ~ x + y, data=d, control=list(cp=0.00001, maxdepth=10))
)
# Overwrite value column with predictions vector
bonzai_df$value = do.call("c", lapply(bonzai_trees, predict))
# Convert back into a cimg object, with the predictions providing pixel values
bonzai_pred_img = as.cimg(bonzai_df)
#> Warning in as.cimg.data.frame(bonzai_df): Guessing image dimensions from
#> maximum coordinate values
At this point you may be tempted to plot the figure with plot(bonzai_pred_img), which would definitely work. However, I’m going to show you one additional tweak to make the partition rectangles look a bit nicer: use mean colour channel values instead of black. Here’s a function for grabbing this information from a cimg object that has been converted to a data frame.4
mean_cols = function(dat) {
  # Mean value for each colour channel (1 = R, 2 = G, 3 = B)
  mcols = tapply(dat$value, dat$cc, FUN = "mean")
  col1 = rgb(mcols[1], mcols[2], mcols[3])
  # Lighter and darker variants: shift each channel mean by 0.4 SD of the red
  # channel, so the offset is the same across channels
  hicol = function(x) mean(x) + 0.4*sd(subset(dat, cc==1)$value)
  locol = function(x) mean(x) - 0.4*sd(subset(dat, cc==1)$value)
  col2 = rgb(hicol(subset(dat, cc==1)$value), hicol(subset(dat, cc==2)$value), hicol(subset(dat, cc==3)$value))
  col3 = rgb(locol(subset(dat, cc==1)$value), locol(subset(dat, cc==2)$value), locol(subset(dat, cc==3)$value))
  return(c(col1, col2, col3))
}
bonzai_mean_cols = mean_cols(bonzai_df)
And now we can plot everything together.
ggplot() +
  annotation_raster(bonzai_pred_img, ymin=-Inf, ymax=Inf, xmin=-Inf, xmax=Inf) +
  Map(
    function(d, cols) geom_parttree(data = d, lwd = 0.1, col = cols),
    d = bonzai_trees, cols = bonzai_mean_cols
  ) +
  scale_x_continuous(limits=c(0, max(bonzai_df$x)), expand=c(0,0)) +
  scale_y_reverse(limits=c(max(bonzai_df$y), 0), expand=c(0,0)) +
  coord_fixed() +
  theme_void()
The individual trees for each colour channel make for nice stained glass prints…
library(patchwork)
## Aside: We're reversing the y-scale since higher values actually correspond
## to points lower on the image, visually.
g = Map(
  function(tree, pal) {
    ggplot() +
      geom_parttree(data = tree, aes(fill=-value), alpha = 0.5, lwd = 0.1) +
      scale_fill_distiller(palette = pal) +
      scale_y_reverse() +
      coord_fixed() +
      theme_void() +
      theme(legend.position = 'none') +
      labs(title = substr(pal, 1, 1))
  },
  tree = bonzai_trees,
  pal = c("Reds", "Greens", "Blues")
)
g[[1]] + g[[2]] + g[[3]]
For example, you can download this version instead and adapt your cropping command to rosalba = image_crop(rosalba, "170x90+170+260").↩︎
One minor faff is that I’m using two separate packages (magick and imager) to prep the image. Without going into the details, just know that the former is better for cropping the image the way I want, while the latter is better for in-memory manipulation in R.↩︎
Remember: the tree model predictions yield “average” colours for small sub-regions of the original image. This is how we get the downscaling effect.↩︎
I honestly can’t remember where I first saw this trick or adapted this function from. But it works particularly well in cases like this where we want the partition lines to blend in with the main image.↩︎