Six Year Visual Lifelogging
|Self researcher(s)||Cathal Gurrin|
|Related tools||Photos, phone sensors|
|Related topics||Social life and social media, Cognition, Lifelogging|
Builds on project(s)
|Event name||2012 QS Global Conference|
|This content was automatically imported. See here how to improve it if any information is missing or outdated.|
Six Year Visual Lifelogging is a Show & Tell talk by Cathal Gurrin that has been imported from the Quantified Self Show & Tell library. The talk was given on 2013/10/10 and is about Social life and social media, and Cognition.
Description
A description of this project as introduced by Quantified Self follows:
In this talk, ultra-photologger Cathal Gurrin moderates a QS Conversation about lifelogging. A lifelog is a complete and accurate visual record of your life activities, created by using various wearable sensing devices to automatically record everything you see, hear, learn, and experience. Cathal talks about the various interfaces to digital memories that were used during the six years of visual lifelogging.
Video and transcript
QS Conference 2012
Cathal Gurrin Six Year Visual Lifelogging
Here’s a visual on lifelogging. Talking about sleep, I didn’t have much sleep last night with time zones, so bear with me. So we’re interested in lifelogging, and just to give you a brief summary of what we think lifelogging is about: it’s about continually sensing as much of your life activity as we can using various wearable sensing technologies. And we are interested in synergy with your human memory, not substitution of it, and that is a very, very important thing. So we’ve been doing this for a number of years, and we’ve been looking at various interfaces to digital memories, so here’s one: identify the color of everything that you see during the day, and plot it over a week or a month of time, showing patterns. Also identifying the most important events you do every day and visualizing those with photographs. So this is nothing new really; it’s been spoken about since the end of the war, and there are various books about it, Gordon’s book, and various movies about it, and even Bill Gates was saying it in 1995. So we’re at the point where we can now gather information about ourselves in a continual manner. Unlike what we have seen so far, which is sensing for a particular reason, in this case we’re sensing everything we can get and seeing what we can do with that. I’ve worn a SenseCam for, as I said, about six years. We’ve recently moved to using mobile phone devices as our sensing technology, because pretty much everyone has a smartphone in their pocket. And the key point is that there is no manual input to the process, so users don’t have to do anything and we just have to analyze it automatically. And the photographs and the video look something like this. They really capture life experience and are really meaningful for you when you look back at those photographs in the future. I can tell you about each of those photographs and what they mean to me and what I was doing at the time.
We’ve also tested video, but we don’t do too much video because it requires a huge amount of disk space to store; photographs don’t require that much. And there you can see me driving to work in Ireland, on the other side of the road. But we don’t do that much video capture yet because that has some problems, and we’ll see later on about the social implications of that briefly. So we’ve been mostly focusing, as I said, on pictures and analyzing pictures. And if we gather pictures throughout the day, we can do really cool things like this: analyze your day in a really sequential video. But I quickly realized that if we do this, if you gather 5,000 pictures a day as the SenseCam gathers, you can’t use that data. It’s too much data; you have to have some kind of analysis software to understand it. We do semantic extraction: trying to understand what we’re doing, where we’re going, who we’re meeting. Even using computer technology to identify ‘I’ve seen this picture before, I’ve seen this can of Coke before; where in my past archive has this occurred?’ So moving on, we’re now trying to identify the semantics of what people do in their daily life, categorize your life into these 16 semantic concepts, and index those or chart them automatically and give you feedback so you can start using these. And of course, since this is a huge amount of data, it will have to be hosted somewhere for you, in the cloud typically, and then there are all these problems. Once you start putting your private data up in the cloud, I’m not even going to start on these problems, but there are a huge number of problems we have to solve; for example, what happens after you die, one great interesting problem. So why did I do this? Well, back on 8 June 2006, I decided I would wear a camera for two weeks. Gather two weeks of data, because our research group is really focused on making search engines, and this was, we reckoned, a really big search engine challenge.
So back in 2006 I started doing this, which was for two weeks I’d wear a camera and record about 4,500 to 5,000 photographs a day; sometimes I now record video as well. Acceleration, light levels, location, interactions, many, many different sources of data. And the really interesting thing is that it became really compelling; I didn’t want to stop. I was gathering my own archive. And we did it to understand the challenges, technical and social, because if you wear a camera you get social challenges, and then to gather a large archive for myself and for our scientists at the university, because nobody else wanted to. So right now, six years later, this is my archive, excluding video. And the key thing is I’ve just hit 10 million photographs. So imagine 10 million photographs. You can’t do anything with that unless you have software. So software to analyze, to understand, to organize, to sort is really, really important. Then the habitual aspects: it really becomes part of life, like putting on my watch, and it has micro-behavior alterations. And what I mean by that is, if you watch me going to the bathroom, you will see me changing my behavior. Instead of opening the door first, I’ll switch off the camera first. You never leave home without it, and it becomes a natural, natural part of life. Rarely, and these are two of my friends back at home, do people mind. Most people are interested in what this camera is about. Only twice has it caused trouble for me, when people really, really challenged me for wearing a camera; once was at an academic conference, but that’s accepted. The question I typically get is ‘is it on now?’ So if you’re wearing a camera with flashing lights, people say ‘is it on?’ And there’s one of my friends, where you for the first time get that kind of expression of ‘what’s that? Is it recording my voice?’ Hence video is a problem; people don’t like their voice being recorded. Photographs, okay; voice, that’s a big no.
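The scale of the archive quoted above can be sanity-checked with quick arithmetic. This is only a back-of-envelope sketch: the per-day rate and the six-year span come from the talk, while the assumption of continuous daily wear at the midpoint rate is mine.

```python
# Back-of-envelope check of the archive size quoted in the talk.
# Assumption: continuous capture every day at the midpoint of the
# speaker's 4,500-5,000 photos-per-day figure.
photos_per_day = 4_750          # midpoint of the quoted range
days = 6 * 365                  # roughly six years of daily wear
total = photos_per_day * days
print(f"{total:,} photographs")  # 10,402,500 photographs
```

The result is consistent with the "10 million photographs" figure mentioned in the talk, which is why software-based organization becomes unavoidable at this scale.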
My concerns are unauthorized access to my data: who accesses it, how to build correct search engines, the confrontation or trouble we get into when we wear cameras outside, what happens after I’m dead, how long I should keep this data for, and that we don’t have enough sensors yet to gather a really, really detailed archive. It also knows everything, so what happens if somebody breaks into my archive? It knows everything about me, essentially everything for six and a bit years, even what kind of toothpaste I use. So this system knows everything about me if this data ever gets accessed by somebody, so this is the problem, and it captures life’s important moments. The first time you meet your partner: captured in the SenseCam archive. Unfortunately, life is life, so it also captures the last time you met that person. So it captures everything about life; the important social aspects, the important parts of your life are captured, where you couldn’t otherwise take out a camera. So based on all that experience, we’re now building software called SenseSeer, and this is about analyzing you, for you, in your daily life; it tries to take away any human annotation aspect and do everything automatically. SenseSeer runs on smartphones, on Android because it’s easier to program, let’s face it, and uses all available sensors in the phone. The phone is typically strung around the neck, just like the SenseCam is, and runs all day and gathers every sense that we can get: hundreds of photographs, thousands every day. It uploads in real time if necessary and is stored on a cloud-based server. We have a number of use cases or demo systems based around health care, or locations, or social activities, or even your actual normal visual lifelog of what you’ve been doing every day. So we’re currently working on this, and people who want to test it out, please let me know and I’ll give you accounts to try it out.
We’re basically trying to build as many analysis tools as we can: small square little tools you can drag in and out of your dashboard that tell you about yourself. The one on the top right-hand side, as an example, is my energy burn throughout the day, just like a Fitbit would do that for you. We’re putting in segmentation models to take the visual stream of photographs and break it all up into the 30 events that you do every day; because typically, on average, we do 30 different things every day, choose one important picture per event and build your visual archive like that. We’ve been doing this stuff for a number of years, and we also want to build narrative generation, and we’ve done this as well, which is to take all the sensor streams and then use technology in the computer, natural language processing technology, to generate small summaries: I got up, I went to work, I was with my friends, etc., that kind of thing. And then we can do really nice visualization as well. You saw the big computer screens, the touchscreen of colors of life, very nice infographics, and the one I love is the best time to break into my home. So if you’re a thief and you look at that graph, you now know when I’m not going to be home. It’s Thursday at five o’clock, if you’re so inclined. So why, why gather detailed lifelogs and lifelong activity? I’m not going to go through all of these; there are any number of reasons. We still don’t know enough reasons, because we’re still building the use cases behind this. We started off gathering the data first, building the search engines that will allow people to go and actually gather this data.
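The segmentation idea described above, splitting a day's photo stream into discrete events and keeping one representative picture per event, can be sketched with a simple time-gap heuristic. The function name, the 10-minute threshold, and the middle-photo keyframe rule are illustrative assumptions of mine; the talk does not describe the project's actual segmentation algorithm.

```python
from datetime import datetime, timedelta

def segment_by_gap(timestamps, gap=timedelta(minutes=10)):
    """Split a sorted list of capture times into events: a new event
    starts whenever two consecutive photos are more than `gap` apart."""
    events = []
    for t in timestamps:
        if events and t - events[-1][-1] <= gap:
            events[-1].append(t)   # same event: photo close to the last one
        else:
            events.append([t])     # gap exceeded: start a new event
    return events

# Toy day: bursts of photos around three separate activities.
day = [datetime(2012, 9, 15, 8, m) for m in (0, 1, 2)] \
    + [datetime(2012, 9, 15, 12, m) for m in (30, 32)] \
    + [datetime(2012, 9, 15, 18, 0)]
events = segment_by_gap(day)
keyframes = [ev[len(ev) // 2] for ev in events]  # middle photo of each event
print(len(events))  # 3 events, each summarized by one keyframe
```

A real system would refine event boundaries with the other sensor streams mentioned in the talk (acceleration, light, location) rather than timestamps alone, but the gap heuristic shows the basic shape of reducing thousands of photos to a handful of daily events.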
So that’s an awful lot in seven and a half minutes, so thank you. If you want to contact me, firstname.lastname@example.org is the best way, or Cathal on Twitter.
About the presenter
Cathal Gurrin gave this talk.