THIS WEEK: Natalie Kelly on speeding up R for lazy people
Most of us can tolerate slow(ish) R code we’ve written ourselves if: a) we’re busy and don’t have the time or energy to improve it (i.e., it runs, eventually, so what’s the problem?); or b) we don’t need to share it around much, let alone expect others to pay for it. That doesn’t necessarily mean you’re terrible at writing code; let’s face it, R doesn’t have a reputation for scorching computational speed (though, of course, that’s not why we love it). Recently, however, I had to write code to process large raster files, and since it was for an external contract, the code needed to be faster and sparklier than anything I’d ever produced. What ensued was a series of painful but ultimately valuable lessons in speeding up R, including forays into parallel processing, Rcpp and tools for dealing with stupid-big matrices. Big improvements are possible with tools that other (infinitely more clever!) people have written, and this discussion will cover some of them.
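To give a flavour of the parallel-processing side of the story, here is a minimal sketch (not Natalie’s actual code; `slow_fun` is a made-up stand-in) using base R’s `parallel` package to spread an expensive per-element computation across cores:

```r
# A minimal sketch: parallelising an expensive per-element computation
# with the base 'parallel' package.
library(parallel)

# Hypothetical stand-in for an expensive operation
# (e.g. per-cell raster processing)
slow_fun <- function(x) {
  Sys.sleep(0.001)
  x^2
}

xs <- 1:200

# Serial baseline
res_serial <- lapply(xs, slow_fun)

# Parallel version; mc.cores > 1 requires a Unix-alike
# (on Windows, use makeCluster() + parLapply() instead)
res_par <- mclapply(xs, slow_fun, mc.cores = 4)

identical(res_serial, res_par)  # same answer, less wall-clock time
```

For the other tools mentioned above, the usual entry points are the Rcpp package (rewriting hot loops in C++) and file-backed matrix packages such as bigmemory or ff for matrices too large for RAM.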
When: Friday 8 April 2016, from 0915 to 1015
Where: the ground floor Flex Room at IMAS / ACE CRC Salamanca ‘the Waterfront’.
… a talk on form-mastery
… a session on Git
… building R packages in the hadleyverse
… lots of data sciencey stuff
If you have a topic you’d like to present, get in touch via @DataScienceHbt on Twitter or join the mailing list.
Data Science Hobart