Self-Experiments With Sleep, Cognition, and Fasting

From Personal Science Wiki
Project Infobox
Self researcher(s) Ariel Berwaldt
Related tools Zeo, blood glucose monitor
Related topics Sleep, Cognition

Builds on project(s)
Has inspired Projects (0)
Show and Tell Talk Infobox
Featured image
Date 2013/01/29
Event name Bay Area Meetup
Slides
This content was automatically imported. See here how to improve it if any information is missing or outdated.

Self-Experiments With Sleep, Cognition, and Fasting is a Show & Tell talk by Ariel Berwaldt that has been imported from the Quantified Self Show & Tell library. The talk was given on 2013/01/29 and is about Sleep and Cognition.

Description[edit | edit source]

A description of this project as introduced by Quantified Self follows:

Ari Berwaldt had been suffering from consistent fatigue and mental fog, and was diagnosed with sleep apnea. Then a few months ago he found the Quantified Mind website. In this video, Ari talks about using the Zeo and Quantified Mind to measure the effects of sleep on cognition, and shows fascinating data about a lack of expected correlation. He also talks briefly about a fasting and blood glucose experiment that shows the poor accuracy of blood glucose monitors.

Video and transcript[edit | edit source]

A transcript of this talk is below:

Ari Berwaldt

Self-Experiments With Sleep, Cognition and Fasting

For quite a while I’ve been suffering from consistent fatigue and mental fog, and I wasn’t quite sure what it was, and then about two years ago I was diagnosed with sleep apnea. So I’ve been treating that with a CPAP, and it’s not been too helpful as (I’ve mental signs again?); it hasn’t produced, you know, I don’t feel like I did when I was 20. So we have the CPAP, and I’ve always been interested in cognitive science and data science, so I decided to track myself, both my sleep using a Zeo and my mental performance using a program called Brain Workshop. Then about three months ago I learned of a website called Quantified Mind, which has a very comprehensive way to measure cognitive performance. It has 25 different tests and you can track multiple aspects of your mental performance, so I’ve been using that for about three months now.

Here’s an example of some of the tests available on Quantified Mind. You have, for instance, choice reaction time, which measures how long it takes you to respond to one of three circles turning green, and you have the N-back, in which you have to remember various stimuli and say whether they match what happened n trials previously. So it tests things like working memory, reaction time and that kind of stuff.

Over the three months some of my scores have improved and some have stayed the same, so each one is kind of unique in that way. For instance, in green we have the one-back score, which has shown some improvement; in red we have attentional focus, which for me has remained really steady, so I haven’t really improved on that at all; and then in design copy I’ve made tremendous gains.

Looking a little more closely at the Quantified Mind test scores and how they relate to each other, this just shows the correlation between the different tests.
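The N-back task described above has a simple core rule: a trial is a "match" when the current stimulus equals the one shown n trials earlier. A minimal sketch of that rule (the function name and example sequence are illustrative, not from the talk):

```python
def n_back_targets(stimuli, n):
    """Indices where the stimulus matches the one shown n trials earlier."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

# In a 1-back task over this sequence, positions 2 and 5 repeat
# their immediate predecessor, so they are the target trials.
seq = ["A", "B", "B", "C", "A", "A"]
print(n_back_targets(seq, 1))  # → [2, 5]
```

Raising n (two-back, three-back) increases the working-memory load without changing the rule, which is why the different N-back levels tend to correlate with each other.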
Red is very highly positively correlated, green is negatively correlated and black is uncorrelated. So for instance, choice reaction time and mental rotation are very strongly positively correlated, cued attention and mental rotation are negatively correlated, and design copy and mental rotation are pretty much uncorrelated. These are some additional tests of the 25, and there are two striking features (for me?) apparent. One is that the N-backs (the one-back, two-back and three-back) are all strongly correlated with each other, as you would expect. But something very interesting, which I don’t have an explanation for, is that mental rotation seems to be negatively correlated with all the other tests, which is surprising and I’m not sure why.

Now, I wanted to combine my Zeo data and see if I could predict my Quantified Mind data. To do that I created this thing called Aggregated Mental Performance, which is a simple average of all the different Quantified Mind tests, and as you can see there’s a lot of black, which means not a lot of correlation. So right down here, Aggregated Mental Performance and the various Zeo components like time in REM and time in deep, and there’s really not that much going on.

This is another perspective on relating the Zeo data to mental performance: on the y-axis we have time in REM in minutes and on the x-axis we have time in deep in minutes. And the interesting thing is that my lowest REM and deep score was actually one of my highest mental performance scores, so again somewhat counterintuitive. So using all the Zeo data in this monster equation, which is the best one I could find for predicting the mental performance score, I got an R-squared of 0.3, which kind of confirms what we had seen earlier: the Zeo and the mental performance are not that closely related, especially when you consider the practice effects.
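The two computations in this part of the talk, the pairwise correlation matrix between tests and the Aggregated Mental Performance average, can both be sketched in a few lines of pandas. The column names and the z-scoring step are assumptions for illustration (the talk only says "a simple average of all the different Quantified Mind tests"); the data here is random, not the presenter's:

```python
import numpy as np
import pandas as pd

# Hypothetical per-session Quantified Mind scores; names are illustrative.
rng = np.random.default_rng(0)
scores = pd.DataFrame({
    "one_back": rng.normal(100, 10, 90),
    "choice_reaction_time": rng.normal(100, 10, 90),
    "mental_rotation": rng.normal(100, 10, 90),
})

# Pairwise Pearson correlations between tests: the red/green/black matrix.
corr = scores.corr()

# "Aggregated Mental Performance" as a simple average across tests;
# z-scoring each test first (an assumption) keeps differently scaled
# tests from dominating the average.
zscored = (scores - scores.mean()) / scores.std()
scores["aggregated_mental_performance"] = zscored.mean(axis=1)

print(corr.round(2))
```

Correlating that aggregate column against Zeo fields like time in REM or time in deep is then the same `corr()` call on the merged table.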
This is how my score has changed over time, and you can see the R-squared there is 0.43, so basically it’s saying that practice is more important than sleep, at least as measured by the Zeo. However, the practice effect was again not the same for every one of the tests. For the digit span it was very strong, so I made a lot of improvement, a lot of practice effect there, versus (unclear 02:42), almost no practice effect.

So the big picture of what I’ve learned doing this is that working with data can be hard. Trying to handle the different formats from the different devices, like the Zeo versus Quantified Mind, and getting the different programs working together can be a hassle. I lost some data because it had different formats, and when trying to merge them a lot of data didn’t quite line up perfectly. It’s also important to do a sanity check: I got lucky and only lost two hours instead of two or three days, because I did a simple graph and realized I had aligned my data incorrectly.

As for what I’m currently working on, some experiments with fasting: I measure my glucose and ketones while fasting. On the left we have the glucose scale and on the right we have the ketone scale, and as you can see the ketones, shown as blue pluses, show a very linear increase as the duration of the fast goes on. I did two 48-hour fasts, which are represented up here, and that one errant glucose measurement was an hour and a half after I had a very huge meal prior to beginning my fast; otherwise my glucose is, as it shows here, pretty much 90, and even after 24 hours it had only gone down to 85, so it’s pretty steady. However, while I was doing that, I discovered in the process that my glucose meters aren’t that good: comparing the measurements from these two different meters only came out with an R-squared of 0.46, and I would have hoped for a lot better agreement between the two meters.
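The alignment problem described above, two devices exporting the same days in different timestamp formats, is the classic failure mode when merging self-tracking data. A minimal sketch of the merge-plus-sanity-check workflow, with made-up exports and formats (the talk does not specify the actual file layouts):

```python
import pandas as pd

# Hypothetical exports: the two sources write dates in different formats,
# so parse each explicitly before merging.
zeo = pd.DataFrame({
    "date": ["01/05/2013", "01/06/2013"],   # US-style dates
    "rem_minutes": [95, 110],
    "deep_minutes": [40, 55],
})
qm = pd.DataFrame({
    "date": ["2013-01-05", "2013-01-06"],   # ISO dates
    "score": [101.5, 98.2],
})
zeo["date"] = pd.to_datetime(zeo["date"], format="%m/%d/%Y")
qm["date"] = pd.to_datetime(qm["date"], format="%Y-%m-%d")

# Merge on the normalized date; indicator=True flags rows that
# appear in only one source, i.e. days that failed to line up.
merged = zeo.merge(qm, on="date", how="outer", indicator=True)

# Sanity check before any analysis: misaligned rows mean lost data.
misaligned = merged[merged["_merge"] != "both"]
assert misaligned.empty, "some days failed to line up across devices"
```

Plotting the merged frame, as the presenter did, is another cheap sanity check that catches off-by-one alignment errors before days of data are lost.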
Also, taking all the measurements within a 10-minute timespan, on this meter I got a range of 70 to 107, which, I mean, it’s going to tell you if you just ate a huge bag of candy, I suppose, but for any sort of precision analysis this is simply not accurate enough. Okay, so what did I learn? The accuracy of the glucose meters is definitely in question, and the ketone strips I’m using, which are the cheapest ones I could find, are still very expensive and have a high failure rate: I had 10 ketone strips and only got usable information out of six of them.
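The meter-agreement number quoted earlier (R-squared of 0.46) is just the squared Pearson correlation between paired readings from the two meters. A minimal sketch with hypothetical readings (these values are invented, not the presenter's data):

```python
import numpy as np

# Hypothetical paired readings (mg/dL) taken back-to-back on two meters.
meter_a = np.array([88, 92, 101, 79, 95, 107, 84, 90])
meter_b = np.array([95, 85, 90, 88, 99, 92, 96, 81])

# R-squared of a simple linear fit of one meter on the other is the
# squared Pearson correlation coefficient between the two series.
r = np.corrcoef(meter_a, meter_b)[0, 1]
r_squared = r ** 2
print(round(r_squared, 2))
```

An R-squared well below 1 on back-to-back readings means much of the spread between meters is measurement noise rather than real glucose change, which is exactly the precision problem the talk describes.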

And in the future I have a lot of stuff going on: more Quantified Mind, where I want to see if my scores will plateau, and maybe comparing that with fasting to see if performance changes at 12, 24 or 48 hours of fasting. Thanks for your attention.

About the presenter[edit | edit source]

Ariel Berwaldt gave this talk.