Data Exploration With Fluxtream/BodyTrack
|Self researcher(s)||Anne Wright|
|Related topics||Social life and social media, Social interactions, Mood and emotion, Food tracking, Location tracking|
Builds on project(s)
|Show and Tell Talk Infobox|
|Event name||2014 QS Europe Conference|
|This content was automatically imported. See here how to improve it if any information is missing or outdated.|
Data Exploration With Fluxtream/BodyTrack is a Show & Tell talk by Anne Wright that has been imported from the Quantified Self Show & Tell library. The talk was given on 2014/05/11 and is about Social life and social media, Social interactions, Mood and emotion, Food tracking, and Location tracking.
Description
A description of this project as introduced by Quantified Self follows:
Anne Wright talks about the value of aggregating data and how Fluxtream has been designed to allow people to explore combined data streams to support reflecting on their situation over time. She talks about the importance of supporting people in learning and adapting the culture and practices of self-tracking to investigate their own situation, including examples of people she's worked with who have come to useful insights through such a process. She also talks about what data providers need to supply in order to support this sort of ongoing incremental reflection.
Video and transcript
Anne Wright — Data Exploration With Fluxtream/BodyTrack
Hello, my name is Anne Wright. I’d like to talk to you today about data aggregation and exploration with Fluxtream/BodyTrack. I lead the BodyTrack project at Carnegie Mellon’s CREATE Lab, and Fluxtream is an open-source project; it’s a collaboration between BodyTrack at CMU and Candide Kemmler.

The default tracking outline for QS seems to be: gather all the data; do something with it — I don’t know what it is, but I’ll figure it out later; and then the third step is insight and change for the better. This sort of reminds me of the underpants gnomes from South Park. Step one, gather underpants. Step three, profit. What’s step two? I don’t know, but step three’s profit. And in the meantime you just accumulate a large pile of underpants. In the QS realm this is data that keeps accumulating and becoming more and more daunting.

What you really want to do, I think, is to work in a more incremental way. So instead of a giant batch mode you have a cycle: you gather some of the data, you reflect on it, you adapt your ideas and strategies, and you iterate, and you just keep iterating at a fairly fast clip. What I think we need in order to support that kind of use case is better data aggregation tools, explorable visualizations, and a culture of exploring data together.

The piece that Candide and I have been working on is called Fluxtream, and you can go and use it now on fluxtream.org. The idea is that we connect all the data sources we can that are compatible with this sort of use case, and try to get a good mix of physiological sensors, self-report, and imports, so you can express what it is that you’re experiencing together with various sorts of context. That gives you a powerful picture of what was going on at various points in time, and lets you reconstruct your narrative around the data so you can recall it. The way it works is you make an account and you add a connector.
You have a bunch of connectors to choose from, and then you do an authentication dance to give Fluxtream the ability to get data on your behalf. It pulls in all of the data, and you can see it in various ways. In the calendar app you can see it either as a clock, a list, a map, timelines, or a grid of photos, and you can see the photos compactly. Then there’s the BodyTrack app, which is based around different channels of data, so each data source potentially has many different channels. In this case the top is self-report data on the main thing I struggle with, my digestive system, recorded as happy and unhappy. The sort of mid layer there is happy, and then you see these blips of unhappy. The starting point for me was: what was going on at the times when things got unhappy, what am I doing differently at those times, and what can I learn about what to expect? So as I look at the really big blip on the right, I see that I had just got back from a trip. I arrived at the airport at midnight, and I had a dinner with some friends that evening. When I look at what I ate that day, it looks really weird. This is not typical; there are all sorts of things here that I never eat. So we have possibly travel, or possibly these items, being an issue. It gives you ideas that you can then feed back and iterate into the next cycle.

We’ve learned a lot of lessons going through this process, trying to interface with data sources compatible with this use case, and things to do and not do when creating an API. The first is that the data source has to support account binding. It’s not good enough to just be able to export your data to CSV and get it once. That’s fine for batch mode, but it’s not so good for this sort of iterative use, where you really want the data to be kept up to date. Another is unambiguous timestamps. You need to be able to take a point in time and unambiguously attach it to absolute time.
You can either do this by providing UTC or local time with a time zone. Not everyone does it right, and those sources you can’t line up properly. Another is supporting robust incremental sync, so you don’t have to get all the data every time, but can ask what has changed since the last time you asked. This is really important, especially for things that change backwards in time. There’s an API that we’ve been working on so you can also treat Fluxtream itself as a data source. That’s in beta; it’s not on the main server, it’s on a staging server, but please talk to us if you’re interested and you can engage more on this, and we’ve got a table over there for Fluxtream. Also, I’ll be leading a best-practices-in-QS-APIs breakout session on Sunday, session six, if you would like to engage more about the API issues.
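The two API requirements above — unambiguous timestamps and incremental sync — can be sketched in code. This is a minimal illustration, not Fluxtream's actual implementation; the `fetch_changed` callback and the record shape (`id` field, `since` parameter) are hypothetical stand-ins for whatever a real data-source API provides.

```python
from datetime import datetime, timezone

def to_absolute_time(stamp):
    """Parse an ISO 8601 timestamp and normalize it to UTC.

    Rejects "naive" timestamps that carry no UTC offset or time zone,
    since those cannot be unambiguously attached to absolute time.
    """
    dt = datetime.fromisoformat(stamp)
    if dt.tzinfo is None:
        raise ValueError(f"ambiguous timestamp (no UTC offset): {stamp!r}")
    return dt.astimezone(timezone.utc)

def incremental_sync(fetch_changed, store, last_sync):
    """Pull only records modified since last_sync and upsert them by id.

    Upserting means edits to past data ("changes backwards in time")
    replace the stale copy, instead of requiring a full re-download.
    fetch_changed is a hypothetical callback onto the data source's API.
    """
    for record in fetch_changed(since=last_sync):
        store[record["id"]] = record
    return store

# Two representations of the same instant line up once normalized to UTC:
a = to_absolute_time("2014-05-11T09:30:00+02:00")  # local time with offset
b = to_absolute_time("2014-05-11T07:30:00+00:00")  # UTC
assert a == b
```

The key design point is rejecting naive timestamps outright: a source that emits local times without an offset produces data that, as the talk notes, "you can't line up properly" against other streams.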
About the presenter
Anne Wright gave this talk.