I have a moderately sized database with many joins and lookup tables.
I am more familiar with R than with SQL, and I am using MySQL.
At what point is it beneficial to stop increasing the complexity of an SQL statement in favor of R's data-subsetting functionality (e.g., plyr's dlply)?
On one hand, SQL's join is easier than selecting the full contents of each table and joining them with R's merge function. Also, doing the conditional selects in SQL reduces the amount of data that has to be imported into R, although the speed difference is not significant. On the other hand, a big join with a complex where clause is harder to read than the equivalent R syntax.
Below I have some untested code for illustrative purposes. I am asking this question before having working code, and the answer doesn't require working code (although working code is always appreciated). The "most elegant approach", "fewest lines", or "amazing implementation of X" are all welcome, but what I am particularly interested in is the most sensible / practical / canonical rationale, based on first principles.
I am interested in the general answer: which steps should use an SQL where clause, and which steps would be easier to accomplish in R?
There are three tables: a and b each have a primary key, id. They have a many-many relationship that is represented by a lookup table, ab, which contains fields ab.a_id and ab.b_id that join to a.id and b.id, respectively. Both tables have a time field, and a has a group field.
Here is a minimal example of the join and subsetting that I want to do (MySQL naming of elements, e.g. a.id is equivalent to a$id in R):
- join a to b via ab, appending the multiple values of b.time associated with each a.id as a new column:

select a.time, b.time, a.id, b.id from a join ab on a.id = ab.a_id join b on b.id = ab.b_id

and then append b.time for distinct values of b.id;
- compute b.max: I don't need repeated values of b.time, I only need a single value for each a.id. For the repeated values of b.time joined to each a.id, b.max is the value of b.time closest to but not greater than a.time, for example, in R:

b.max <- max(b.time[b.time < a.time])
- append the value dt <- a.time - b.max to the table;
- for each distinct value in a.group, select the row with the minimum dt, for example, in R:

x.dt <- a.time - b.max
which.min(x.dt)
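To make the four steps above concrete, here is a minimal runnable sketch using Python's stdlib sqlite3 as a stand-in for MySQL, with invented toy data (the column grp stands in for a.group, since group is a reserved word; everything here is illustrative, not the questioner's actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Toy schema mirroring the question: a and b each have a primary key id
# and a time field; a also has a group field (named grp here because
# GROUP is a reserved word); ab is the many-to-many lookup table.
cur.executescript("""
CREATE TABLE a  (id INTEGER PRIMARY KEY, time REAL, grp TEXT);
CREATE TABLE b  (id INTEGER PRIMARY KEY, time REAL);
CREATE TABLE ab (a_id INTEGER, b_id INTEGER);

INSERT INTO a  VALUES (1, 10.0, 'x'), (2, 20.0, 'x');
INSERT INTO b  VALUES (1, 7.0), (2, 9.5), (3, 19.0);
INSERT INTO ab VALUES (1, 1), (1, 2), (2, 2), (2, 3);
""")

# Steps 1-3 in SQL: join via ab, keep only b.time < a.time, take the
# largest such b.time per a.id (b.max), and compute dt = a.time - b.max.
rows = cur.execute("""
    SELECT a.id, a.grp, a.time - MAX(b.time) AS dt
    FROM a
    JOIN ab ON a.id = ab.a_id
    JOIN b  ON b.id = ab.b_id
    WHERE b.time < a.time
    GROUP BY a.id, a.grp, a.time
""").fetchall()

# Step 4 on the client side (the analogue of R's which.min): for each
# distinct group, keep the row with the smallest dt.
best = {}
for a_id, grp, dt in rows:
    if grp not in best or dt < best[grp][1]:
        best[grp] = (a_id, dt)

print(best)  # {'x': (1, 0.5)}: a.id 1 has dt = 10 - 9.5 = 0.5, beating a.id 2's dt of 1.0
```

The split shown here (steps 1-3 in SQL, step 4 in the client) is exactly the kind of boundary the question is asking about; step 4 could equally be pushed into SQL, as the answer below does.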
I usually do the data manipulations in SQL until the data I want is in a single table, and then I do the rest in R. I only start moving computations into the database when there is a performance issue. This is already what you are doing.
Computations involving timestamps often become unreadable in SQL (the "analytic functions", i.e., window functions, are supposed to simplify this, but I think they are not available in MySQL).
However, your example can probably be written entirely in SQL as follows (not tested; note that group is a reserved word in MySQL, so the a.group column must be quoted with backticks or renamed):

-- Join the tables and compute the maximum
CREATE VIEW t1 AS
SELECT a.id AS a_id, a.`group` AS a_group, b.id AS b_id,
       a.time AS a_time, a.time - MAX(b.time) AS dt
FROM a
JOIN ab ON a.id = ab.a_id
JOIN b  ON b.id = ab.b_id
WHERE b.time < a.time
GROUP BY a.id, a.`group`, b.id, a.time;

-- Extract the desired rows
CREATE VIEW t2 AS
SELECT t1.*
FROM t1
JOIN (SELECT a_group, MIN(dt) AS min_dt
      FROM t1
      GROUP BY a_group) X
  ON t1.a_group = X.a_group
 AND t1.dt = X.min_dt;
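As a quick sanity check of this two-view approach, here is a hypothetical toy-data run using Python's stdlib sqlite3 in place of MySQL (grp stands in for the group column, since GROUP is a reserved word; the table contents are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE a  (id INTEGER PRIMARY KEY, time REAL, grp TEXT);
CREATE TABLE b  (id INTEGER PRIMARY KEY, time REAL);
CREATE TABLE ab (a_id INTEGER, b_id INTEGER);

INSERT INTO a  VALUES (1, 10.0, 'x'), (2, 20.0, 'x');
INSERT INTO b  VALUES (1, 7.0), (2, 9.5), (3, 19.0);
INSERT INTO ab VALUES (1, 1), (1, 2), (2, 2), (2, 3);

-- t1: for each (a, b) pair with b.time < a.time, the gap dt
CREATE VIEW t1 AS
SELECT a.id AS a_id, a.grp AS a_group, b.id AS b_id,
       a.time AS a_time, a.time - MAX(b.time) AS dt
FROM a
JOIN ab ON a.id = ab.a_id
JOIN b  ON b.id = ab.b_id
WHERE b.time < a.time
GROUP BY a.id, a.grp, b.id, a.time;

-- t2: within each group, the row whose dt is smallest
CREATE VIEW t2 AS
SELECT t1.*
FROM t1
JOIN (SELECT a_group, MIN(dt) AS min_dt
      FROM t1
      GROUP BY a_group) X
  ON t1.a_group = X.a_group AND t1.dt = X.min_dt;
""")

result = cur.execute("SELECT a_id, a_group, b_id, dt FROM t2").fetchall()
print(result)  # [(1, 'x', 2, 0.5)]: b.id 2 (time 9.5) is closest below a.id 1's time 10.0
```

One caveat on the design: t2 joins on a floating-point equality (t1.dt = X.min_dt), which is safe here only because dt is computed identically on both sides of the join; with real timestamp arithmetic it would be more robust to carry the key columns through the subquery and join on those instead.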