Longitudinal and Labeled Interval Data

Longitudinal vs Labeled Interval Data Collection

Longitudinal Data Collection: continuous data collected from participants over the course of weeks. This may include sporadic, individual points of ground truth on participant activities or situations.

Abstract examples of point ground truth:
{ time: <timestamp>, label: "at home", other-metadata: <metadata> }
{ time: <timestamp>, label: "happy", other-metadata: <metadata> }

Labeled Interval Data Collection: collection of precisely labeled intervals (i.e., start/end timestamps) of data corresponding to a specific event (e.g., an activity or situation).

Abstract examples of interval ground truth:
{ starttime: <timestamp>, endtime: <timestamp>, label: "walking", other-metadata: <metadata> }
{ starttime: <timestamp>, endtime: <timestamp>, label: "laughing", other-metadata: <metadata> }
{ starttime: <timestamp>, endtime: <timestamp>, label: "eating", other-metadata: <metadata> }
{ starttime: <timestamp>, endtime: <timestamp>, label: "riding bus", other-metadata: <metadata> }

* 1. Which type of data is more important for your work?

Ground Truth Labels

We will collect ground truth labels from participants on a variety of activities, events, and situations. However, due to cost and time constraints, we can only collect a limited number of labels. Your feedback is very important in helping us determine which labels will be most useful.

Note that we will select only 20-30 labels for CrowdSignals.io and collect at least five examples of each from 1000+ participants. Each label will be captured simultaneously with 50+ types of sensor data from a smartphone and/or smartwatch. This approach will produce over 100,000 labels and 5M labeled data stream segments!

Note that we will also collect demographic and other survey responses from participants.
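
As a rough illustration of where the 5M figure comes from (100,000+ labels, each paired with 50+ concurrent sensor streams), here is a hedged Python sketch of the segmentation step; the (timestamp, value) sample representation and the function name are assumptions for illustration only:

from typing import Any, Iterable, Tuple

Sample = Tuple[float, Any]               # (timestamp, sensor reading) - assumed layout

def segment_streams(labels: Iterable[Tuple[float, float, str]],
                    streams: dict[str, list[Sample]]):
    """Yield one labeled segment per (label, sensor stream) pair.

    labels:  (starttime, endtime, label) tuples of ground truth
    streams: sensor name -> time-sorted (timestamp, value) samples
    """
    for start, end, label in labels:
        for name, samples in streams.items():
            segment = [(t, v) for (t, v) in samples if start <= t < end]
            yield label, name, segment

# With ~100,000 labels and 50+ streams, this pairing yields 5M+ segments.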

* 2. Please rate the labels below according to your interest in having participants provide them.

Rating scale: Not Important / Somewhat Important / Important / Very Important / Extremely Important

Location places (e.g., home, work, gym, restaurant, indoors, outdoors)
Activity recognition (e.g., in vehicle, cycling, walking, running, tilting, unknown)
Smartphone on-body (e.g., smartphone with user and/or being manipulated)
Smartwatch on-body (e.g., smartwatch with user and/or being manipulated)
Smartphone context (e.g., on surface, in hand, face up/down, in pocket or bag)
Smartwatch context (e.g., on surface, on wrist)
Extended user ambulation (e.g., walking upstairs/downstairs, elevator, escalator)
User posture (e.g., sitting, standing, lying)
Extended user commute (e.g., in car, on bus, on train, on subway)
User driving (e.g., user actively driving, not a passenger)
Smartwatch exercises (e.g., sit-ups, crunches, squats, jumping, weight lifting, calisthenics)
Sports (e.g., basketball, soccer, football, baseball)
Activities of daily living (e.g., sweeping, scrubbing, grooming, vacuuming)
Entertainment (e.g., watching TV, listening to music)
Anomalies (e.g., activities that deviate from the user's routine)
Smartphone usage (e.g., using the phone, on a call)
Activity transitions (e.g., walking to running, sitting to standing, walking to driving, driving to walking)
Audio social context (e.g., silence, speech, conversation, in crowd, around kids)
Audio ambient context (e.g., on street, in vehicle, in container, music, water flow)
Audio places context (e.g., at cafe, at library, at pub, at gym, at office)
Audio voice context (e.g., male, female)
Audio keyword spotting (e.g., specific combinations such as “OK Google”, “Hey Cortana”)
Audio affective context (e.g., yelling, laughing, neutral, child crying, baby crying)
Audio sleep context (e.g., snoring, sleep talking)
Audio well-being context (e.g., coughing, sneezing)
Audio home security context (e.g., door opening, speech, drawer opening)
Audio stress level context (e.g., stressed, not stressed)
Social context (e.g., user alone, in a meeting, with family, at event socializing)
Smartphone gestures (e.g., shaking, double tap, pick-up, lift and look, tilt, ear touch)
Smartwatch gestures (e.g., wrist twist, shaking, single tap, double tap, tilt, pan, single flick, digits recognition)
Quality of sleep (e.g., excellent, good, poor, bad)
Smartphone user interaction (e.g., gaming, in a call, browsing, texting)
Mood (e.g., happy, sad, angry, bored)

* 3. Please list any other labels you would like to see collected, their use cases, and the target device (e.g., phone, watch).

* 4. Please share any thoughts on how we could make CrowdSignals.io more useful for you.

* 5. Optional Contact Information
