Finding relations between variables in time series

From Personal Science Wiki

Most personal science projects require finding relationships between different variables of the type 'time series'[1]. An example could be the question "does my daily chocolate consumption correlate with my daily focus score?".

You can run experiments if you control everything rigidly, or if the effects are strong and quick (appearing within less than a week). Old data may be usable as a baseline, and a baseline may rule out some issues. If both a block design (e.g. two-week intervention blocks) and a daily mixed design (a random intervention assignment every day) produce the same results, then time-series-specific issues are probably not affecting your experiment.

Finding more complicated relationships requires better statistical tests, algorithms, and data science skills. Apps that do this automatically, or at least easily, are not yet available (see below). Most internet resources treat time series as regular cyclical series, which is not useful here: most tracked variables have irregular patterns and often lack a regularly cyclical component at all.

To do anything mentioned above, you need your data parsed, cleaned, all in one place, and ideally even visualized.

List of less technical tools[edit | edit source]

There are also a number of tools or apps that can semi-automatically compute such correlations and help with the analysis.

Open Humans and their Personal Analysis notebooks[edit | edit source]

Open Humans provides a library of notebooks that can be used to visualize data across data sources and find relations between different variables. It also supports the upload of generic data files through the File Uploader.

Zenobase[edit | edit source]

Zenobase can test correlations based on user-specified questions. The user must configure the lag, regression method, and aggregation method through a UI. It also offers powerful filtering tools.

Curedao[edit | edit source]

Curedao computes correlations over multiple bins and lags, selecting the largest effect.

Data Flexor[edit | edit source]

DataFlexor offers many attractive visualizations, though its statistics are not yet very advanced.

HALE.life[edit | edit source]

HALE.life advertises "Bayesian nodes, 'do' semantics, AI and experts"[2]. Currently it is only available for sports teams.

Exist.io [3][edit | edit source]

From the Exist.io main site: "Which habits go together? Correlations are the most powerful part of Exist. By combining your data, we can answer questions like: “What makes me happiest?”, “What can I do to be more active?”, “When am I most productive?”"

Habitdash[edit | edit source]

Habitdash's automatic data analysis searches for hidden patterns to find relationships between activity, sleep, weight, and other habits.

Optimized app[edit | edit source]

Optimized claims to do "automatic correlation mining".

Lytiko[edit | edit source]

Lytiko (lytiko.com) promises correlations, connections, deep insights, and visualizations.

Vital[edit | edit source]

Vital (https://tryvital.io) is a free-to-use API for health and fitness data: it collects wearable and health data and standardises them into a single API. It can also be used for delivering at-home test kits.

young.ai and aging.ai[edit | edit source]

A deep-learning predictor of age based on human blood tests; young.ai also makes recommendations.

Sonar (sonarhealth.co)[edit | edit source]

Customizable aggregation and syncing, e.g. weighting Fitbit data twice as much as Apple Watch data, or averaging steps instead of summing them.

Gyroscope

Welltory

Bearable App

Realize Me

Heads Up

Inside Tracker

Wellness FX

Export from Apple Health[4] (no analysis)

List of very technical tools[edit | edit source]

Some people do all the data science themselves, using programming languages such as R and Python in notebooks or apps. Coding platforms such as the notebooks on Open Humans, Kaggle, RPubs, or GitHub can help, as can GUIs like Orange for Python.

Programming languages for statistics: MATLAB, R, Python, Julia.

Searching for a Python GUI for time series analysis may also turn up useful tools.

DIY Individuals[edit | edit source]

Reasons time series analysis especially as applied to QS is hard[edit | edit source]

Wavelet coherence is one potential solution to several of the problems below.

Really strong relationships will be detected despite most of these problems.

Spurious Correlations mostly shows that if two variables both trend in one direction, a correlation test between them will come out highly significant even when they are unrelated. The practice effect is a subset of this. A related issue is autocorrelation: one instance of an event type increases the chances of the same event type happening again soon after. Economists address this with unit-root tests and differencing.
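A minimal sketch of the trend problem, using made-up data and numpy: two independent upward-trending series correlate strongly in the raw data, while their first differences (the economists' differencing remedy) do not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Two unrelated series that both happen to trend upward over time.
a = 0.10 * np.arange(n) + rng.normal(0, 1, n)
b = 0.05 * np.arange(n) + rng.normal(0, 1, n)

raw_r = np.corrcoef(a, b)[0, 1]                      # inflated by the shared trend
diff_r = np.corrcoef(np.diff(a), np.diff(b))[0, 1]   # trend removed by differencing

print(f"raw r = {raw_r:.2f}, differenced r = {diff_r:.2f}")
```

The raw correlation is large purely because both series share a time trend; after differencing, the correlation collapses toward zero.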

Effects on the target variable from outside the known variables. In non-time-series settings this is compensated for with an RCT, but in a time series such an effect may last a while and coincide with an intervention, producing very misleading results. This problem makes baseline data gathering both more difficult and more necessary: sometimes a baseline will show that the issue does not occur for a particular target variable. Alternatively, the experimenter can compensate by strictly controlling all possible sources of variance.

Lag. What if eating pizza on one day causes heartburn the next?

Build up. What if it takes two days of eating pizza to cause heartburn?
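Both lag and build-up can be probed with simple shifted and rolling comparisons. A sketch using a hypothetical daily pizza/heartburn log (pandas assumed), where heartburn is constructed to follow pizza by exactly one day:

```python
import pandas as pd

# Hypothetical daily log: 1 = ate pizza / had heartburn that day.
pizza     = pd.Series([1, 0, 0, 1, 1, 0, 0, 1, 0, 0])
heartburn = pd.Series([0, 1, 0, 0, 1, 1, 0, 0, 1, 0])

same_day = pizza.corr(heartburn)        # weak: comparing the wrong day
lag1 = heartburn.corr(pizza.shift(1))   # strong: the effect appears the next day

# Build-up: flag runs of two consecutive pizza days.
two_day_buildup = pizza.rolling(2).sum() >= 2

print(f"same-day r = {same_day:.2f}, lag-1 r = {lag1:.2f}")
```

Comparing only same-day values would miss the relationship entirely; shifting the predictor by one day reveals it.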

Bin. Window. Smooth. Some variables only make domain sense as an aggregate over some time window, or have such a high sampling rate that they need to be binned or smoothed first.
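For instance, minute-level heart-rate samples may only be meaningful as a daily aggregate. A sketch with synthetic data, assuming pandas:

```python
import numpy as np
import pandas as pd

# Hypothetical heart rate sampled every minute for three days.
idx = pd.date_range("2024-01-01", periods=3 * 24 * 60, freq="min")
hr = pd.Series(70 + np.random.default_rng(1).normal(0, 5, len(idx)), index=idx)

daily = hr.resample("D").mean()     # bin: one value per day
smooth = hr.rolling(15).mean()      # or smooth with a 15-minute window

print(daily.round(1))
```

Which aggregator (mean, sum, max, ...) and which window make sense depends entirely on the domain meaning of the variable.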

Interpolate. Variables sampled at different rates need to be interpolated onto a common grid before they can be compared.
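A sketch of aligning two differently-sampled series onto one grid with time-based interpolation (pandas assumed; the weight and mood values are made up):

```python
import pandas as pd

# Hypothetical: weight logged every morning, mood logged at irregular times.
weight = pd.Series([80.0, 79.8, 79.5],
                   index=pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03"]))
mood = pd.Series([3.0, 5.0],
                 index=pd.to_datetime(["2024-01-01 14:00", "2024-01-03 09:00"]))

# Put both on the daily weight grid, interpolating the sparser series by time.
grid = weight.index
mood_daily = (mood.reindex(grid.union(mood.index))
                  .interpolate(method="time")
                  .reindex(grid))
# The first day stays NaN: interpolation fills gaps, it does not extrapolate.
print(mood_daily)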

Types of data. [Exercised] is an event with a specific occurrence time and duration, while [tired] is a vaguer value a user might use to describe their feelings over the past 4 hours.

All the issues with self-report data also apply.

Few positive instances, but they are important. Example: you went to a specific restaurant twice and got sick soon after both times, while only ever having gotten sick with similar symptoms five times in total. Or: two large, rare humps in two series happen almost one after the other; treated as events this resembles the previous example, with the added evidence that many samples show the humps are similar in shape too.

Since removing the real effects of other variables on the target variable makes the effect of the variable of interest stand out, some form of 'machine learning' needs to be used. A basic approach would be to bin each predictor variable in multiple ways, varying the time offset from the effect being checked, the aggregation method (mean or another aggregator), and the aggregation window.
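A minimal sketch of that brute-force binning approach, with synthetic data in which the true effect is a two-day lag (pandas/numpy assumed; all variable names are hypothetical):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
x = pd.Series(rng.integers(0, 2, 60).astype(float))     # predictor, e.g. chocolate (0/1)
y = 2 * x.shift(2).fillna(0) + rng.normal(0, 0.1, 60)   # effect appears two days later

# Candidate features: the predictor binned several ways (lags and rolling means).
features = pd.DataFrame(
    {f"lag{k}": x.shift(k) for k in range(4)}
    | {f"mean{w}": x.rolling(w).mean() for w in (2, 3)}
).fillna(0)

# Rank the candidate features by absolute correlation with the target.
ranking = features.corrwith(y).abs().sort_values(ascending=False)
print(ranking.head(3))
```

With this construction, the two-day-lagged version of the predictor ranks highest, recovering the built-in delay; real data would of course be far noisier.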

Machine learning also has limits on the kind of patterns it can detect.

What to expect from the complete analysis tool[edit | edit source]

A user without experience in statistical analysis will not be able to tell the difference between correctly computed correlations and poorly computed ones. However, a genuinely complete analysis produces plots that should include at least some of the following:

Interpolation for irregular time series.

Change point or breakpoint detection.

Outlier detection. Smoothing.

Removal of the effects of other variables found to correlate with the target, in order to show residuals.

Cycle decomposition using a model like ARIMA, e.g. kayak season is in the summer, or lunch is at exactly 1pm.

Detection of repeated shapes implying similar events that are not strictly cyclical, e.g. dinner happens anywhere between 4pm and 10pm and causes a characteristic two-hour spike in glucose.
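One of these steps, cycle decomposition, can be sketched without a full ARIMA model by averaging per weekday and inspecting the residuals (pandas assumed; the weekend effect is synthetic):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
idx = pd.date_range("2024-01-01", periods=28, freq="D")   # four full weeks
weekend_bump = np.tile([0, 0, 0, 0, 0, 2, 2], 4)          # synthetic Sat/Sun effect
series = pd.Series(10 + weekend_bump + rng.normal(0, 0.3, 28), index=idx)

# Estimate the weekly cycle as each weekday's mean, then remove it.
cycle = series.groupby(series.index.dayofweek).transform("mean")
residual = series - cycle

print(f"cycle amplitude = {cycle.max() - cycle.min():.2f}")
```

The residuals are what should be fed into correlation tests; correlating the raw series would mostly measure the shared weekly cycle.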

References[edit | edit source]

  1. Core-Guide_Longitudinal-Data-Analysis_10-05-17.pdf (duke.edu)
  2. https://www.hale.life/
  3. https://github.com/ejain/n-of-1-ml
  4. https://github.com/Lybron/health-auto-export