Finding relations between variables in time series
Most personal science projects require finding relationships between different variables of the type 'time series'. An example could be the question "does my daily chocolate consumption correlate with my daily focus score?".
You could run experiments if you control everything rigidly, or if the effects are strong and quick (appearing in less than a week). Old data may be usable as a baseline.
Finding more complicated relationships requires better statistical tests, algorithms, and data science skills. Apps that would do this automatically, or at least easily, are not yet available; see below. Most internet resources treat time series as regular cyclical series, which is not useful here: most tracked variables have irregular patterns and often no regularly cyclical component at all.
To do anything mentioned above you need to have your data parsed, cleaned, all in one place, and ideally even visualized.
List of less technical tools
There are also a number of tools and apps that can semi-automatically compute these correlations and help with the analysis.
Open Humans and their Personal Analysis notebooks
Open Humans provides a library of notebooks that can be used to visualize data across data sources and find relations between different variables. It also supports the upload of generic data files through the File Uploader.
Zenobase
Zenobase can test correlations based on user-specified questions. The user must configure the lag, regression method, and aggregation method through a UI. It also offers powerful filtering tools.
Curedao
github.com/curedao/decentralized-fda computes correlations over multiple bins and lags, selecting the combination with the biggest effect.
Data Flexor
DataFlexor offers lots of pretty visualizations, but not very advanced statistics yet. It advertises "Bayesian nodes, 'do' semantics, AI and experts", but is currently only available for sports teams.
From the Exist.io main site: "Which habits go together? Correlations are the most powerful part of Exist. By combining your data, we can answer questions like: “What makes me happiest?”, “What can I do to be more active?”, “When am I most productive?”"
Habitdash's automatic data analysis searches for hidden patterns to find relationships between activity, sleep, weight, and other habits.
Optimized claims to do "automatic correlation mining".
lytiko.com promises correlations, connections, deep insights, and visualizations.
Vital
Vital (https://tryvital.io) is a free-to-use API for collecting wearables and health data and standardising them into one API. You can also use Vital's API for delivering at-home test kits.
young.ai and aging.ai
aging.ai is a deep learning predictor of age based on human blood tests, and young.ai makes recommendations based on it.
List of very technical tools
Some people do all the data science themselves, using programming languages such as R and Python in notebooks or apps. Coding platforms such as the notebooks on Open Humans, Kaggle, or GitHub can help.
Programming languages for statistics: MATLAB, R, Python, Julia.
Reasons time series analysis, especially as applied to QS, is hard
Wavelet coherence is one potential solution to some of the problems below.
Spurious Correlations mostly shows that if two variables are both trending in the same direction, a correlation test will report a very significant correlation even when they are unrelated. The practice effect is one subset of this. Another is autocorrelation: one instance of an event increases the chance of the same event happening again soon after. Economists suggest unit root tests, and differencing the series, to deal with this.
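A minimal sketch of the differencing fix, using two invented, completely unrelated daily series (the variable names and numbers are illustrative only): both trend upward over a year, so their raw correlation is high, but first-differencing removes the shared trend and the correlation collapses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 365
t = np.arange(n)

# two unrelated series that both drift upward over the year
weight = 70 + 0.01 * t + rng.normal(0, 0.3, n)        # kg, slow trend
screen_time = 120 + 0.2 * t + rng.normal(0, 5, n)     # minutes/day, faster trend

# raw correlation is dominated by the shared trend
raw_r = np.corrcoef(weight, screen_time)[0, 1]

# first-difference both series (the "unit root" fix)
diff_r = np.corrcoef(np.diff(weight), np.diff(screen_time))[0, 1]

print(f"raw correlation: {raw_r:.2f}")          # large, spurious
print(f"differenced correlation: {diff_r:.2f}")  # near zero
```

The same trap applies to any pair of slowly drifting personal metrics; checking correlations on day-to-day changes instead of levels is a cheap first defence.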
Lag. What if eating pizza on one day causes heartburn the next?
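One common way to handle lag is to scan a small range of candidate lags and keep the strongest correlation. A sketch with synthetic data (the pizza/heartburn series and the lag range are made up for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2023-01-01", periods=200, freq="D")

# did I eat pizza today? (0/1, random)
pizza = pd.Series(rng.binomial(1, 0.3, 200), index=days)
# heartburn depends on *yesterday's* pizza, plus noise
heartburn = 0.8 * pizza.shift(1).fillna(0) + rng.normal(0, 0.2, 200)

# scan candidate lags (in days) and keep the strongest correlation
corrs = {lag: heartburn.corr(pizza.shift(lag)) for lag in range(0, 4)}
best_lag = max(corrs, key=lambda k: abs(corrs[k]))
print(best_lag, round(corrs[best_lag], 2))
```

Note that scanning many lags is itself a multiple-comparisons problem, which feeds back into the spurious-correlation issue above.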
Few positive instances, but they are important. For example: you went to a specific restaurant twice and got sick soon after both times, and you have only ever been sick with similar symptoms five times. Or: two large, rare humps in two series happen almost one after the other; this is like the previous example if the humps are treated as events, with the added evidence that many samples show their similarity in shape.
Different sampling rates: series need to be interpolated onto a common grid before they can be compared. Windowing: since removing the effects of other variables makes the effect of the variable of interest stand out, regression or machine learning methods are typically needed. A common approach is to bin predictor variables in multiple ways, varying the time offset from the effect being checked, the aggregation method (mean or another aggregator), and the aggregation window.
Machine learning also has limits on the kinds of patterns it can detect.
Types of data. [Exercised] is an event with a specific moment of occurrence and a duration, while [tired] is a vaguer value a user might use to describe their feelings over the past 4 hours.
What to expect from the complete analysis tool
A user without experience in statistical analysis will not be able to tell the difference between correctly computed correlations and poorly computed ones. However, a genuinely complete analysis produces graphs that should include most of the following:
Interpolation for irregular time series.
Change point or breakpoint detection.
Outlier detection and smoothing.
Removal of the effects of variables found to correlate with the one of interest, to show residuals.
Decomposition of cycles using a model like seasonal ARIMA or STL. Ex.: kayak season is in the summer, or lunch is at exactly 1pm.
Detection of repeated shapes implying similar events that are not cyclical; e.g. dinner happens anywhere between 4pm and 10pm and causes a particular two-hour spike in glucose.
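The cycle-decomposition idea can be sketched without a full ARIMA or STL model: estimate a periodic component and subtract it, leaving residuals for further analysis. Here is a toy version with a hypothetical weekly step-count cycle (the data and numbers are invented):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
days = pd.date_range("2023-01-01", periods=180, freq="D")

# step counts with a weekly cycle (more steps on weekends) plus noise
weekend_bonus = np.where(days.dayofweek >= 5, 4000, 0)
steps = pd.Series(8000 + weekend_bonus + rng.normal(0, 500, 180), index=days)

# naive cycle decomposition: estimate and subtract the day-of-week mean
dow_mean = steps.groupby(steps.index.dayofweek).transform("mean")
residual = steps - dow_mean

print(steps.std(), residual.std())  # residual variance is much smaller
```

Real tools would use STL or a seasonal model instead of a group mean, but the principle is the same: correlations should be computed on the residuals, not on series still carrying a shared weekly or seasonal rhythm.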