contact Philo at philo van kemenade [AT] gmail [DOT] com

Overview

Thanks to the proliferation of smartphones with high-end video capturing capabilities, more and more people share their experiences through video online. All this uploaded content is great, but it also leads to information overload. There is a need for aggregation of content, so that people can get a quick overview of videos related to a certain topic. weMix makes it possible to display parts of different videos in a single video experience. This basic capability lets web developers build video stories from distributed sources or quickly compile an overview of current videos on a topic.

weMix in 8 questions

1. What do you propose to do?

Create a JavaScript plugin for Mozilla's popcorn.js library that makes it possible to aggregate parts of videos from different sources related to a single topic and present them in a single video experience.

2. Is anyone doing something like this now and how is your project different?

There are no existing plugins that let you define parts of videos and play them back continuously.

Flash might offer some of this capability.

There are several automatic video editing applications (mobile and web):

http://www.magisto.com/ web app, smart summarization, selecting salient shots, "saving video from trash"

http://animoto.com/create web & mobile, automatic slideshow, template based, based on music track

http://www.muvee.com/en/ desktop app, auto combining photo & video

http://videolicious.com/ template-based automatic video compilation

3. Describe the networks with which you intend to build or work.

Mozilla's open source popcorn.js library for JavaScript aims to make online video open and machine-processable. weMix loves the capabilities it brings to online video and would like to give the project an extra push by developing aggregative remixing functionality in the form of a plugin to the library.

For a use case we intend to use a defined pool of content that lends itself well to aggregative remixing into an overview. One such example is footage from public demonstrations as produced and collected by Vision On TV.

4. Why will it work?

Because there is a lot of video content out there and people are looking for quick ways to stay up to date. People will record and share more video as the tools to do so become more accessible and easier to use thanks to smartphones. Online video is bound to 'open up' in the near future, and the tools to make this happen are being developed. For the project we'll use well-grounded technology like Mozilla's open source JavaScript library popcorn.js, which makes online video interactive and computer-readable.

5. Who is working on it?

Philo researches how community-driven evolutionary algorithms could be used to improve video stories. He is also starting to learn JavaScript and Popcorn.

Yene and Ivan help with the JavaScript development.

6. What part of the project have you already built?

So far weMix is really just starting out, so there is a lot of room for your creative ideas. Actual development starts soon!

7. What do you need to move your project forward?

  • JavaScript knowledge and experience.
  • Getting in touch with people who'd like to use the functionality.

8. How will you sustain the project?

Keep testing and developing the plugin collaboratively with the people who use it.

 

Get involved

weMix is still in an early conceptual stage, which means the project can take many directions. We would love to hear your ideas about video storytelling, interactive web-based video, the development of solid JS plugins and any other topic you see as related. More specifically, we would love to meet and talk to:

  • Video reporters
  • JavaScript developers with an interest in online video
  • People who would like to get involved with Mozilla's popcorn.js or interactive video in general

If you have questions or suggestions, or want to get involved, visit the project forum thread here, or contact Philo at philo van kemenade AT gmail DOT com

Launch event video presentation of "wePorter" by Philo:

At the Spring of Code Philo presents his idea of aggregating distributed video content about the same event to compile new, meaningful video stories by combining automated video editing and crowdsourcing techniques.

weMix

Tonight we had the SoC social meeting #7 at the Centre for Creative Collaboration. Even though it may not have sounded like it, I had the feeling there was finally something to report. This was because we'd had a tech meetup that, although we didn't start any implementation yet, had some really good and important outcomes for the project (see post below). Besides the update, tonight we had a brainstorm that was interesting and useful. What follows below is the gist of tonight's minutes relating to weMix.

Update

Too much information on the net? One type of information that is not rendered well online is video. We want to come up with code allowing developers to define pieces of online videos and compile them into one video experience. Update: we had a tech meeting with great contributions. We didn't get into tech requirements, but rather into where we want this to go: we're aiming at the development of a plugin for Mozilla's popcorn.js library. We looked more closely at Popcorn, and it seems like the right tool. There's already a module within popcorn.js that does part of what weMix wants to do. In the next tech meeting we want to take a look at what is already there and what we need to do to extend the current functionality.

Q Marc: can you play YouTube videos? A: Yes, that's already built into Popcorn; the next step is to verify the ability to play parts of those videos together.

Q Adrian: minimum requirements? A: a website where you can select a pool of content, hit play and get a compiled video with, say, timecode 4 to 9 from video A, etc., with the videos coming from different sources.
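The minimum requirement above boils down to mapping a position on the compiled timeline back to a source clip and a local timecode. A minimal sketch of that mapping in plain JavaScript — the segment shape `{src, in, out}` and the file names are assumptions for illustration, not existing weMix code:

```javascript
// Hypothetical segment shape: a clip reference with in/out timecodes (seconds).
const segments = [
  { src: "videoA.webm", in: 4, out: 9 },   // plays seconds 4-9 of video A
  { src: "videoB.webm", in: 12, out: 20 }, // then seconds 12-20 of video B
];

// Total length of the compiled video experience.
function totalDuration(segments) {
  return segments.reduce((sum, s) => sum + (s.out - s.in), 0);
}

// Map a time on the compiled timeline to a source clip and a local timecode.
function locate(segments, t) {
  let offset = 0;
  for (const s of segments) {
    const len = s.out - s.in;
    if (t < offset + len) {
      return { src: s.src, time: s.in + (t - offset) };
    }
    offset += len;
  }
  return null; // past the end of the compiled timeline
}
```

A player built on this mapping would seek the current source element to `locate(...).time`; the hard part, as noted elsewhere in these posts, is swapping between sources seamlessly.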

Brainstorm

Recapping what the project is about: what do we want to do, what's possible, and how can we expand this? One important extension: make the input more dynamic so content can be loaded on the fly (loading content via, say, a YouTube search query). Plugins on top of that could dynamically load the top-ranked YouTube videos for a query, or the most recently tweeted videos with hashtag '#X'.

A proposal by Michael that might be interesting: when combining videos to have the possibility to watch them with a different audio soundtrack.

On Monday 14 May, Neil from Portfolio Fusion (PF) and I teamed up to organise our first tech meeting. The idea for the meetup developed in the brainstorm session at the last social event, where Neil, Yene, Muhammad and Ivan discussed the next steps for PF. Everyone was present this time, as well as Mihai and Marc. I'm very excited that so many people came along and showed genuine interest in helping the project move forward. Thanks everyone for joining in and helping out! Here is a quick recap of what happened on the evening.

I announced my intentions for the night as:

  1. Recap & update the project
  2. Define areas of action
  3. Define scopes of those areas
  4. Define next actions to reach the intended scope per area

Below is a brief report on each of these points, as far as they got covered.

 

Recap & update the project

From the mentor meeting I had with Mihai emerged an adaptation of the project goals. Instead of aiming at an end-user application or web service, we'd like to strive for a package of code reusable by web developers, still with the goal of displaying parts of videos from a pool of content within one video experience.

Define areas of action

  • Website
  • Application code / plugin
  • Testing / Quality Assurance
  • Design
  • Promotion

Define scopes of those areas

Website

A showcase of the developed JS code. Potentially wemix.weporter.org.

Minimal: When I navigate to the website, a video player plays an X-minute video compiled from Y-second parts of a set pool of content related to a single topic.

Ideal: When I navigate to wemix.org I see a search box and a button, plus a video player underneath. When I click play on the video player, I see a video showing me how to use weMix. When I type something in the search box and click the button, the video player starts playing an X-minute video compiled from Y-second parts of the N highest-ranked YouTube videos related to my search.

Application code / plugin

Minimal: Reusable JS code that works for the use case on the website

Ideal: A well-contained wemix.js file up to the standards of JS plugin coding (potentially integrable by Mozilla as an extension to popcorn.js).
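For a sense of what such a wemix.js file could look like: popcorn.js plugins are registered with `Popcorn.plugin(name, definition)`, where the definition's `_setup`, `start` and `end` hooks fire around each cue's time range. The sketch below follows that registration shape; the plugin name "wemix" and the option names (`source`, `in`, `out`) are assumptions, not an existing plugin:

```javascript
// Validate a clip-part definition before registering it (names are illustrative).
function isValidPart(opts) {
  return typeof opts.source === "string" && opts.in >= 0 && opts.out > opts.in;
}

// Guarded so this file can also be loaded outside a browser (e.g. for tests).
if (typeof Popcorn !== "undefined") {
  Popcorn.plugin("wemix", {
    _setup: function (options) {
      // Runs once when the track event is created.
      if (!isValidPart(options)) {
        throw new Error("wemix: need source, in and out timecodes");
      }
    },
    start: function (event, options) {
      // Fires at the cue's start time: e.g. show/seek the part's source.
    },
    end: function (event, options) {
      // Fires at the cue's end time: e.g. advance to the next part.
    },
  });
}
```

The `start`/`end` bodies are left as comments because how to swap media sources is exactly the open design question for weMix.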

 

Other areas will be defined later.

Define next actions to reach the intended scope per area

Philo: play with popcorn.js, improve JavaScript, research popcorn plugins to use, plan next steps for the project, plan next tech meeting (with Mihai).

Yene, Ivan: play with popcorn.js

Tomorrow at London Hackspace the two projects weMix and Portfolio Fusion meet for a tech meetup.

 

At this first tech event for the two projects, the focus will be mainly on planning, making sure we have a roadmap to meaningful project milestones.
 
Please see the event page for more info.

Programme:

1. Intro to projects

- setting agenda 
- goals for the night 

2. Go-around

- What's your background/skills?
- What have you done already?
- What would you like to do/learn/create?

3. Portfolio Fusion 30 min

- define areas
- define next actions

4. weMix 30 min

- define areas
- define next actions

5. Pizza & drinks

6. Work on next actions

"Story telling is a social thing - why should video editing be any different? With WeVideo, you enable your surroundings to join your story telling by inviting them into your video projects. You can co-create or make your own unique version of videos from clips shared by you and your friends."

No, this is not another name change of Spring of Code's weMix project; this is WeVideo, a cloud service that lets you collaborate on video editing using their editing tools over the network. At first sight their video editing tool looks much like the iMovie interface, with its simplified approach to structuring a video story. In my opinion they could have simplified further, but I understand that people using the app will want "Hollywood-grade" video transitions and "professional" effects, even though these toys are never really used in any Hollywood production. The most interesting feature is the option to collaborate, which turns a video editing project into something like a Google Doc. In fact, the service just teamed up with Google and is now offered as part of the freshly released cloud service Google Drive.

WeVideo recognizes the social aspect of storytelling and its app offers people the possibility to shape stories together over time. From my weMix perspective I'm wondering about several aspects:

  • WeVideo seems to be a rather personal service in the way it lets you work on your own video content or that of your friends and colleagues. How well does it integrate existing online video?
  • I see the role of software in creative technologies as organised in a three-step hierarchy, depending on the amount of knowledge a system has to offer: Simplification -> Guidance -> Automation. WeVideo currently straddles the first two approaches. They are inherently related, because simplification can always be interpreted as a form of guidance. How could WeVideo be informed by automated approaches such as Magisto and Videolicious?
  • Related to my first point, I'm wondering how video storytelling could go from collaboration to wikification, opening projects publicly to the entire internet.

I hope to play around with the video editor soon to compare editing the same video in several editors (Final Cut, iMovie, WeVideo, Magisto). For now it looks like WeVideo is a great step towards bringing video storytelling to the web. All in all greatly exciting; the only thing I have real doubts about is the name...

 

Mentor Meeting with Mihai

 

On Monday 16 April I had a first fruitful mentor meeting with Mihai on the near-future development of wePorter. We mainly discussed the procedural side of the project and established a clearer idea of what the project is aimed at and what it needs to make this happen. Figuring all this out was insightful and proved to have vast consequences for the way the project is structured.

 

For both of us the meeting was also a lesson, or rather training, in organising a successful meeting. There's never enough time and always too much to discuss. By maintaining a strong structure in the discussions, we were able to keep things focused as well as documented. A simple way to add to the structure was to file new thoughts and issues that popped up along the way but weren't yet on the agenda in a POP-UPs section, and briefly brush over them as the final point on the agenda.

 

The following things made it to the agenda:

 

Vision for wePorter

A broad project roadmap

 

1. In 4-8 weeks: build, perhaps with others, tools and technologies that enable online video remixing, combining multiple online sources into a single video experience. This part is really about the development of new open-source tools with general applicability. We coin the name 'weMix'.

 

2. After 4-8 weeks: Focus on researching video story evolution for thesis (no web tools development)

 

3. After that (~September): combine weMix with the ideas on video story evolution developed in the thesis, to apply the general tools to a specific end: event-related citizen video journalism. This is what I've been talking about in this project at SoC so far: wePorter. This development could take many forms; so far I can see wePorter being a more specific, consumer-oriented application that uses the general weMix tools. Such a project could include the main further contributions to open-source weMix.

 

Milestones:

 

  • meeting someone who is interested and wants to contribute:
    • JS developer
    • entrepreneur for advice on wePorter
  • have a player showing clips from different sources as a single video (locally)
  • have the player available on a public website
  • Minimal version of system:
    • search 'kittens', then see one player showing video V0 with randomly selected kitten clips from multiple video sources.
  • Champagne version of system:
    • ~ minimal version + fixed-length video stories + an informed video story grammar that determines what content is preferred when + dual-view tree structure interaction.

 

 

Theory of Constraints

 

Constraints I'm currently facing on the way to my next milestones:

  • knowledge
    • linking domain & server space
    • HTML5
    • JavaScript
    • popcorn.js

 

Dealing with constraints: tutorials & talking to friends

 

Getting Organised and Establishing Flow

 

Things to consider:

  • change project name wePorter => weMix
  • explore publicly accessible TODO list
  • Q: what period will I work on weMix?
    • A: mainly the next two months, after that focus on dissertation on story evolution
  • Q: When do I want to meet mentor Mihai and what do I want from these meetings?
    • A: More or less fortnightly, so about 4 times in total. The meetings mainly give me procedural advice in terms of project management, contextualisation of the project, scheduling and planning, vision for the project, etc.

 

Next Meeting

  • discuss 8 questions
  • reaching out
  • how to build community of users on website
  • mission: opening up online video
  • the hard problem of slicing up current online video content. Two options:
    • propose a new standard for an uncompiled, open video data format (the hard one, but most desirable in the end)
    • use user interaction to get time-coded semantic knowledge about video. Curation through interaction.

Spring of Code #2 Update

Just had the second meetup and I feel really excited about all the creative input! I'm also reminded of the importance of documenting: if fresh knowledge stays floating around and isn't written down, it will be lost.

After the lightning pitch a lot of useful questions came up. Richard raised the issue of authorship. Which licences allow for remixing video content and posting it anew? How can the authors of the several video clips that are compiled together be credited?

Another question concerned hosting. Double (redundant) hosting should be prevented, but hosting content ourselves would enable more direct video editing control. This brings up the idea of editing by reference. Can a part of a video hosted elsewhere be played by 'in/out timecode reference'? Next, can we construct a larger video story out of several of these parts?
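The in/out timecode question can be prototyped today with two methods a popcorn.js instance does provide: `currentTime(t)` to seek and `cue(t, fn)` to fire a callback at a timecode. A hedged sketch, assuming all parts live in one media file behind a hypothetical `#player` element (parts from different sources would additionally require swapping the media source, which is the harder open problem):

```javascript
// Play a list of in/out references back-to-back on one Popcorn instance:
// seek to part.in, and when playback reaches part.out, jump to the next part.
// The parts shape and the #player id are illustrative, not existing weMix code.
function chainParts(pop, parts, i) {
  if (i >= parts.length) {
    pop.pause(); // no parts left: stop playback
    return;
  }
  const part = parts[i];
  pop.currentTime(part.in);       // seek to the in-point
  pop.cue(part.out, function () { // fires when playback hits the out-point
    chainParts(pop, parts, i + 1);
  });
  pop.play();
}

// Browser-only usage; guarded so the file also loads outside a browser.
if (typeof Popcorn !== "undefined") {
  const pop = Popcorn("#player");
  chainParts(pop, [{ in: 4, out: 9 }, { in: 30, out: 35 }], 0);
}
```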

I also had a great brainstorm with Jakob, mainly about user interface design. We wondered how a user's viewing of a video story could reveal useful information to improve that very story. Our conclusion was that 'viewing' should become more like 'exploring'. These were ideas that have been on my mind all along, but it was good to express them, get feedback and work on some creative adaptations and additions. An important part of this was the approach of using statistical modelling to come up with sequences of likely video stories and let users navigate through this story space.

Also, a nice first try-out of current technology would be to combine multiple related videos into a single playlist. Wait a minute, that sounds exactly like the project proposal from the guys at visionOntv!