questions around deptford regeneration & the boat community

what will happen with the creek?

will the boat have to leave?

how democratically are decisions being made in the regeneration, and how are people involved in the process?

it will take more than ten years to regenerate the area.
who is planning the regeneration process?

start documenting the process (who did what?)
feeding everything into one archive.

a docfilm club on the boat?

is it possible to tell good stories through collective filmmaking?

the boat session & remix conference

The idea is to host a small event on the Minesweeper on 25th March. It should be high tide (need to double-check) around 8pm on Sat 25th March, and the basic proposal is to set up a wireless link so that we can broadcast to the internet as Deptford.TV. This could then go to a community channel for a live VJ session on the cable network.

The talks during the day would cover the regeneration of the Creek, the impact of the Laban Centre and its architecture on the area, and the impact on local communities – such as the boat community, scrap yards etc. We will invite architects, boat people, psychologists, local characters / artists – in general an interesting cross-section – to sit on the boat and watch the “official” art of regeneration engagement taking place at the Laban.

The event would be documented by Deptford.TV. How it is documented, in both sound and video, is yet to be decided – but it may be along the lines of “ambient media” or sound and video textures, rather than explicit narrative. This material will be presented live, and an edited version later used as content within what we are calling (for lack of a better word) a magazine / new form of publication (internet and print-on-demand based) film-database.

The central concept of the talk is still to be thrashed out – but essentially it will discuss the conflict between top-down and bottom-up structures in the context of what is happening in front of the people gathered. It is a commentary.

Practical Issues
1. A wireless internet connection from the Minesweeper to the Laban, or via the Laban to Deckspace in Greenwich. Could link to Gray World.
2. Inviting key speakers
3. Timing of the live Deptford.TV broadcast – the VJ session
4. Any issues regarding “criticism” of other parties involved?
5. Budget – do we engage in “trade in kind” or even explicit time trade? We can only motivate people with equipment and time if they are getting enough out of the project – mooring negotiation is key for motivation.

perspective from the boating POV

parallel worlds: the boating community enclosed in on itself,
architects and cultural institutions inward-facing

Narrative
The narrative collects a network of personal perspectives on the regeneration process taking place in the area of the Laban Centre and the neighbouring waterway and boating community on Deptford Creek. There is a fundamental structure of two parallel narratives taking place within the same physical place, but rarely touching. A few individuals (promiscuous links, in small-world terminology) move between these narratives, resulting in an oddly intertwined clash of cultures, with, classically, one ousting the other.

The Minesweeper is a 150ft teak seafaring boat, built in the 1950s to clear mines from the Suez Canal; twelve were built. It is presently moored on the river bank facing the Laban Centre, from where it will have to move in the near future, due to the development of the river bank by property developers as part of an ongoing regeneration process in the Deptford Creek area.

The London boat community is an eccentric space, populated by ex-squatters, enthusiasts and a wide variety of characters from diverse backgrounds. The narrative views the impact of the arrival of the Laban Centre, and the forthcoming redevelopment of the area, from the perspective of the boat community and the associated light businesses that will have to relocate as part of the process.

The river community in many ways represents the last bastion of free, mobile cowboy-space in London. Most of the historical dockyards in London have already been developed – squatting is more or less a thing of the past.

Architectural Presence
The building of the Laban Centre is the first step in the redevelopment along the waterfront, and as such sets a permitted standard for future development.

The facade was designed in collaboration with an artist.

The building arguably fails to interact with its environment: sky and surrounding structures reflect in its facade. A possible reaction to the existing neighbouring warehouses is its shed-like appearance and structure, whilst the moved earth that has been landscaped into auditorium hills surrounding the dance centre is reminiscent of the actual landing.

The clean detailing of the facade glass panels against the coloured plastic at the moment they touch the ground is noted. There is one-way access in/onto the site; the mooring originally planned has not yet been realised (why?), and the waterfront side of the site has instead been decorated with a fence consisting of vertical stainless steel poles/tubes of irregular heights. The building adds slight colours reflecting on the creek’s surface. That’s it, and there is extension …….

The public presentation of the building development was in 2000/01.

Failed boat funding
the group involved was inexperienced in boats as well as engineering works. having taken the boat out to a yard, the project ran into unknown depths of finance due to underestimating the time and crafts involved in repairing a wooden boat.

Deptford.TV on Channel 4’s fourdoc blog

We like to give space to interesting documentary showings and opportunities around the country and world. But at the moment, there are far too many exciting doc screenings and events coming up to give them the space they deserve, so here’s a quick whizz through them, and apologies for being too brief:

Deptford.tv are hosting three two-day, hands-on database filmmaking workshops, in which they hope to establish “a public database of documentary film and video to help annotate the regeneration processes in Deptford and across South East London.” They’ll be reviewing existing documentary material of the area, and discussing the creation of new work. We think this is really important work – historic areas often need regenerating, but we shouldn’t neglect to capture in docs how things once were. The free sessions are on the weekends of 3rd/4th, 10th/11th and 17th/18th March, with a walk around Deptford on the 24th and a conference on the 25th. For more information, email info@deptford.tv

locative media

with the deptford.tv project we establish a database of rough material and edited clips which carry embedded metadata. the fact that we use an open content license makes it possible for the material to be shared among the participants.

such a database offers searches on existing footage by theme, date, author, description etc.
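
a minimal sketch of what such a search could look like, assuming the metadata ends up in a simple sqlite database – the table and field names here are hypothetical, not the actual deptford.tv schema:

# create a clip table, add one entry, then search by theme and date
sqlite3 deptford.db "CREATE TABLE clips (file TEXT, author TEXT, date TEXT, theme TEXT, description TEXT);"
sqlite3 deptford.db "INSERT INTO clips VALUES ('creek01.dv','anon','2006-03-04','regeneration','walk along deptford creek');"
sqlite3 deptford.db "SELECT file, author FROM clips WHERE theme='regeneration' AND date >= '2006-01-01';"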

and regeneration shouldn’t really mean gentrification – deptford creek lies right on the border between greenwich and lewisham, an urban, industrial setting – now there are plans to change the face of deptford and bring these two boroughs together. many issues arise which we would like to tell & discuss at deptford.tv – in a way, locative media…

thoughts on collaborative film-making

the main area of interest is the research of new methods of filmmaking and
where they can be applied, looking at (and referring to) the old utopia of
the video movement (60s / 70s) and, further back, brecht’s many-to-many
thesis of radio distribution and vertov’s theories on film: reaching
access to culture and knowledge through media for as many people as
possible.

in the case of deptford.tv this is done with the use of digital networks
under open licences. the thesis is that the use of floss and open content
will enable these "utopian" forms of communication to a certain degree (as
in collective approaches to media production).


Urban Renaissance

from: http://www2.lewisham.gov.uk/lbl/UrbanRenaissance/

Urban Renaissance in Lewisham


Lewisham is a key location in South East London with existing public transport links to the city, Docklands and the whole Thames Gateway area. It has the potential to become one of the most exciting, dynamic and prosperous places in London to live and work.

Urban Renaissance in Lewisham (URL) is working on a comprehensive town centre scheme to raise the profile of Lewisham, create commercial confidence and enhance the potential of residents.

Almost £16 million of Single Regeneration Budget (SRB) and further private and public sector investment has been received to fund the programme. The redevelopment will:

  • create an efficient public transport interchange and new urban environment
  • provide new opportunities for existing and new residents in Lewisham Town Centre
  • promote business success
  • enhance open spaces

Database Filmmaking a la Lev Manovich

from http://www.manovich.net/softcinemadomain

Soft Cinema project mines the creative possibilities at the intersection of software culture, cinema, and architecture. Its manifestations include films, dynamic visualizations, computer-driven installations, architectural designs, print catalogs, and DVDs. In parallel, the project investigates how the new representational techniques of soft(ware) cinema can be deployed to address the new dimensions of our time, such as the rise of mega-cities, the “new” Europe, and the effects of information technologies on subjectivity.

At the heart of the project is custom software and media databases. The software edits movies in real time by choosing the elements from the database using the systems of rules defined by the authors.

SOFT CINEMA explores 4 ideas:

1. “Algorithmic Cinema.”
Using a script and a system of rules defined by the authors, the software controls the screen layout, the number of windows and their content. The authors can choose to exercise minimal control leaving most choices to the software; alternatively they can specify exactly what the viewer will see in a particular moment in time. Regardless, since the actual editing is performed in real time by the program, the movies can run infinitely without ever exactly repeating the same edits.
2. “Macro-cinema.” If a computer user employs windows of different proportions and sizes, why not adopt the similar aesthetics for cinema?
3. “Multimedia cinema.” In Soft Cinema, video is used as only one type of representation among others: 2D animation, motion graphics, 3D scenes, diagrams, maps, etc.
4. “Database Cinema.” The media elements are selected from a large database to construct a potentially unlimited number of different narrative films, or different versions of the same film. We also approach database as a new representational form in its own right. Accordingly, we investigate different ways to visualise Soft Cinema databases.

notes from video workshop at hacklab (& wikipedia), 4 february 2006

1. streaming

streaming over the internet is limited by the broadband connection of the client.

1.1. streaming protocols

example protocols for streaming:
RTSP – Real Time Streaming Protocol (for the internet – you need to send the information to every client over TCP/IP)
MMS – Microsoft Media Server protocol
RTP – Real-time Transport Protocol (for the local network – you only need to send it out once)
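
as a hedged illustration of the difference (filenames and addresses are placeholders), vlc can serve the same file either way:

# http unicast: every connecting client gets its own copy over tcp
vlc input.mpg --sout '#std{access=http,mux=ts,dst=:8080}'

# rtp multicast: the stream is sent out once to a multicast group on the lan
vlc input.mpg --sout '#rtp{mux=ts,dst=239.255.12.42,port=5004}'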

1.2. codecs

.avi
AVI, an acronym for Audio Video Interleave, is a multimedia container format introduced by Microsoft in November 1992, as part of the Video for Windows technology. AVI files contain both audio and video data in a standard container that allows simultaneous playback. Most AVI files also use the file format extensions developed by the Matrox OpenDML group in February 1996. These files are supported by Microsoft, and are known unofficially as “AVI 2.0”.

It is a special case of the Resource Interchange File Format (RIFF), which divides the file’s data up into data blocks called “chunks”. Each “chunk” is identified by a FourCC tag. An AVI file takes the form of a single chunk in an RIFF formatted file, which is then subdivided into two mandatory “chunks” and one optional “chunk”.

The first sub-chunk is identified by the “hdrl” tag. This chunk is the file header and contains metadata about the video such as the width, height and the number of frames. The second sub-chunk is identified by the “movi” tag. This chunk contains the actual audio/visual data that makes up the AVI movie. The third optional sub-chunk is identified by the “idx1” tag and indexes the location of the data chunks within the file.

By way of the RIFF format, the audio/visual data contained in the “movi” chunk can be encoded or decoded by a software module called a codec. The codec translates between raw data and the data format inside the chunk. An AVI file may therefore carry audio/visual data inside the chunks in almost any compression scheme, including: Full Frames (Uncompressed), Intel Real Time Video, Indeo, Cinepak, Motion JPEG, Editable MPEG, VDOWave, ClearVideo / RealVideo, QPEG, MPEG-4, XviD, DivX and others.
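
one quick way to see this structure, assuming xxd and some example.avi to hand: the first four bytes are the “RIFF” tag, the next four the file size (little-endian), then the “AVI ” form type followed by the first “LIST” chunk.

xxd -l 16 example.avi
# 00000000: 5249 4646 xxxx xxxx 4156 4920 4c49 5354  RIFF....AVI LIST
# (the four size bytes, shown here as xxxx xxxx, differ from file to file)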

.mov – the header is at the beginning (so you can already start playing while the rest downloads)

A QuickTime file (*.mov) functions as a multimedia container file that contains one or more tracks, each of which stores a particular type of data, such as audio, video, effects, or text (for subtitles, for example). Each track in turn contains track media, either the digitally encoded media stream (using a specific codec such as Cinepak, Sorenson codec, MP3, JPEG, DivX, or PNG) or a data reference to the media stored in another file or elsewhere on a network. It also has an “edit list” that indicates what parts of the media to use.
A QuickTime file (*.mov) functions as a multimedia container file that contains one or more tracks, each of which store a particular type of data, such as audio, video, effects, or text (for subtitles, for example). Each track in turn contains track media, either the digitally encoded media stream (using a specific codec such as Cinepak, Sorenson codec, MP3, JPEG, DivX, or PNG) or a data reference to the media stored in another file or elsewhere on a network. It also has an “edit list” that indicates what parts of the media to use.

Internally, QuickTime files maintain this format as a tree-structure of “atoms”, each of which uses a 4-byte OSType identifier to determine its structure. An atom can be a parent to other atoms or it can contain data, but it cannot do both.

Apple’s plans for HyperCard 3.0 illustrate the versatility of QuickTime’s file format. The designers of Hypercard 3.0 originally intended to store an entire HyperCard stack (similar in structure to a complete web site, with graphics, buttons and scripts) as a QuickTime file.

The ability to contain abstract data references for the media data, and the separation of the media data from the media offsets and the track edit lists means that QuickTime is particularly suited for editing, as it is capable of importing and editing in place (without data copying) other formats such as AIFF, DV, MP3, MPEG-1, and AVI. Other later-developed media container formats such as Microsoft’s Advanced Streaming Format or the open source Ogg and Matroska containers lack this abstraction, and require all media data to be rewritten after editing.
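
the atom structure can be glimpsed the same way (example.mov is a placeholder): the first 4 bytes of a quicktime file give the size of the first atom, the next 4 its OSType – often “ftyp”, “moov”, “mdat” or “wide”.

# show the size and type of the first atom in the file
xxd -l 8 example.mov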

.mp4
.h264
.x264

H.264, or MPEG-4 Part 10, is a digital video codec standard which is noted for achieving very high data compression. It was written by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG) as the product of a collective partnership effort known as the Joint Video Team (JVT). The ITU-T H.264 standard and the ISO/IEC MPEG-4 Part 10 standard (formally, ISO/IEC 14496-10) are technically identical, and the technology is also known as AVC, for Advanced Video Coding. The final drafting work on the first version of the standard was completed in May of 2003.

H.264 is a name related to the ITU-T line of H.26x video standards, while AVC relates to the ISO/IEC MPEG side of the partnership project that completed the work on the standard, after earlier development done in the ITU-T as a project called H.26L. It is usual to call the standard H.264/AVC (or AVC/H.264 or H.264/MPEG-4 AVC or MPEG-4/H.264 AVC) to emphasize the common heritage. The name H.26L, harkening back to its ITU-T history, is far less common, but still used. Occasionally, it has also been referred to as “the JVT codec”, in reference to the JVT organization that developed it. (Such partnership and multiple naming is not unprecedented, as the video codec standard known as MPEG-2 also arose from a partnership between MPEG and the ITU-T, and MPEG-2 video is also known in the ITU-T community as H.262.)

The intent of H.264/AVC project has been to create a standard that would be capable of providing good video quality at bit rates that are substantially lower (e.g., half or less) than what previous standards would need (e.g., relative to MPEG-2, H.263, or MPEG-4 Part 2), and to do so without so much of an increase in complexity as to make the design impractical (excessively expensive) to implement. An additional goal was to do this in a flexible way that would allow the standard to be applied to a very wide variety of applications (e.g., for both low and high bit rates, and low and high resolution video) and to work well on a very wide variety of networks and systems (e.g., for broadcast, DVD storage, RTP/IP packet networks, and ITU-T multimedia telephony systems).

The JVT recently completed the development of some extensions to the original standard that are known as the Fidelity Range Extensions (FRExt). These extensions support higher-fidelity video coding by supporting increased sample accuracy (including 10-bit and 12-bit coding) and higher-resolution color information (including sampling structures known as YUV 4:2:2 and YUV 4:4:4). Several other features are also included in the Fidelity Range Extensions project (such as adaptive switching between 4×4 and 8×8 integer transforms, encoder-specified perceptual-based quantization weighting matrices, efficient inter-picture lossless coding, support of additional color spaces, and a residual color transform). The design work on the Fidelity Range Extensions was completed in July of 2004, and the drafting was finished in September of 2004.

Since the completion of the original version of the standard in May of 2003, the JVT has also completed two generations of “corrigendum” errata corrections to the text of the standard.

x264 is a GPL-licensed H.264 encoder that is used in the free VideoLAN and MEncoder transcoding applications and, as of December 2005, remains the only reasonably complete open source and free software implementation of the standard, with support for Main Profile and High Profile except interlaced video. [1] A Video for Windows frontend is also available, but has compatibility problems, as Video for Windows can’t support certain features of the AVC standard correctly. x264 is not likely to be incorporated into commercial products because of its license and patent issues surrounding the standard itself. x264 won an independent video codec comparison organized by Doom9.org in December 2005.

x264 is a free library for encoding H.264/MPEG-4 AVC video streams. The code was written from scratch by Laurent Aimar, Loren Merritt, Eric Petit (OS X), Min Chen (vfw/nasm), Justin Clay (vfw), Måns Rullgård, Alex Izvorski, Alex Wright, and Christian Heine. It is released under the terms of the GNU General Public License, but this license may be incompatible with the MPEG-LA patent licenses in jurisdictions that recognize software patents.

As of December 2005, it is one of the most advanced publicly available AVC encoders. It is also one of only a few publicly available High Profile AVC encoders. It supports:

* Context-based Adaptive Binary Arithmetic Coding (CABAC) and Context-based Adaptive Variable Length Coding (CAVLC)
* Multiple reference frames
* All intra-predicted macroblock types (16×16, 8×8 and 4×4 — 8×8 is part of AVC High Profile)
* All P-frame inter-predicted macroblock types
* B-Inter-predicted macroblock types from 16×16 down to 8×8
* Rate Distortion Optimization
* Multiple ratecontrol modes: constant quantizer, constant quality, single or multipass ABR with the option of VBV
* Scene cut detection
* Adaptive B-frame placement, with the option of keeping B-frames as references / arbitrary frame order
* 8×8 and 4×4 adaptive spatial transform (High Profile)
* Lossless mode (High Profile)
* Custom quantization matrices (High Profile)

x264 is available as a Video for Windows codec and in a command line interface. The command line version is always up-to-date, whereas the Video For Windows version may sometimes be lacking extra features, and currently requires hacks to handle B frames. Several graphical user interfaces have been made for the command line version, including MeGUI, AutoAC and a .NET (1.1) based x264CLI GUI.

x264 has a strong user community based at Doom9 where discussion for improved development takes place.
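
a minimal command-line encode as a sketch (the input is assumed to be a raw yuv4mpeg file; MP4Box from the gpac project is one way to wrap the raw stream into an .mp4 container):

# encode to a raw h.264 stream at constant quantizer 26,
# with 2 b-frames and 3 reference frames
x264 --qp 26 --bframes 2 --ref 3 -o video.264 input.y4m

# mux the raw stream into an mp4 container
MP4Box -add video.264 video.mp4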

.mpeg2 video is ready for streaming – it is separated into packets, each with its own header and number; any program that accesses the mpeg stream can join it at any point and just needs to renumber the packets, starting from one again.

MPEG-2 (1994) is the designation for a group of coding standards for digital audio and video, agreed upon by MPEG (Moving Pictures Experts Group), and published as the ISO/IEC 13818 international standard. MPEG-2 is typically used to encode audio and video for broadcast signals, including direct broadcast satellite and Cable TV. MPEG-2, with some modifications, is also the coding format used by standard commercial DVD movies.

MPEG-2 includes a Systems part (part 1) that defines Transport Streams, which are designed to carry digital video and audio over somewhat-unreliable media, and are used in broadcast applications.

The Video part (part 2) of MPEG-2 is similar to MPEG-1, but also provides support for interlaced video (the format used by broadcast TV systems). MPEG-2 video is not optimized for low bit-rates (less than 1 Mbit/s), but outperforms MPEG-1 at 3 Mbit/s and above. All standards-conforming MPEG-2 Video decoders are fully capable of playing back MPEG-1 Video streams.

with avidemux you can convert between all these codecs – and it has basic editing functions.
you can change the container to .avi or to mpeg (two kinds of mpeg with two different headers – one header is ready for streaming over the internet, the other for dvd) – elementary streams: one is video, the other audio (there can be several audio streams); in satellite tv the right & left channels can carry different language versions…
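
avidemux is a gui tool; the same container change can be sketched on the command line with mencoder (mentioned above), copying the streams untouched – assuming the codecs inside are legal in the target container:

# rewrap into an avi container without re-encoding
mencoder input.mpg -ovc copy -oac copy -o output.avi

# rewrap into an mpeg program stream
mencoder input.avi -ovc copy -oac copy -of mpeg -o output.mpg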


Some people think of Avidemux as VirtualDub (or VirtualDubMod) for Linux, and it is generally reputed by many Linux and Windows users alike to be the closest thing available. While it cannot do everything that VirtualDub can, Avidemux can do many things that its Windows counterpart cannot. In the latest version, Avidemux boasts such features as native support for OGM and MP4 files, direct read input for various types of MPEGs, and many other video formats and containers. It has built-in subtitle handling features. It offers MPEG editing and requantization. Avidemux primarily uses its GUI to perform tasks. This means that it is capable of doing many things that non-GUI users would otherwise have to do using command line tools such as MEncoder or Transcode.

1.3. metadata

Metadata (Greek: meta- + Latin: data “information”), literally “data about data”, is information that describes another set of data. A common example is a library catalog card, which contains data about the contents and location of a book: It is data about the data in the book referred to by the card. Other common contents of metadata include the source or author of the described dataset, how it should be accessed, and its limitations. Another important type of data about data is the link or relationship between data. Some metadata schemes attempt to embrace this concept, such as the Dublin Core element link.

Since metadata is also data, it is possible to have metadata of data–“meta-metadata.” Machine-generated meta-metadata, such as the reversed index created by a free-text search engine, is generally not considered metadata, though.

Metadata that is embedded with content is called embedded metadata. A data repository typically stores the metadata detached from the data.

Metadata has become important on the World Wide Web because of the need to find useful information from the mass of information available. Manually-created metadata adds value because it ensures consistency. If one webpage about a topic contains a word or phrase, then all webpages about that topic should contain that same word. It also ensures variety, so that if one topic has two names, each of these names will be used. For example, an article about Sports Utility Vehicles would also be given the metadata keywords ‘4 wheel drives’, ‘4WDs’ and ‘four wheel drives’, as this is how they are known in some countries.

Adding Metadata to Video Files

Metadata – that is, extra data about your video – is a good thing to add to your work. As video search engines and other indexing tools become more widespread, they can read this data and as a result categorize your videos better. Typical metadata include the author’s name (that’s your name), publication date, copyright statement etc. This article currently contains guides to adding metadata in two editing programs, but more will follow.

1. Adding Metadata in Quicktime Pro
2. Adding Metadata in Adobe Premiere Pro 1.5

Using Quicktime Pro

Quicktime Pro is a lightweight, but powerful tool when it comes to working with Quicktime movies. Quicktime has support for a wide range of metadata, but the interface for adding the data can be a bit cumbersome if you’re adding a lot of different types of data.

The Quicktime metadata is located in the Movie Properties window. You can reach this from the Movie menu under Get Movie Properties. The shortcut key in Windows is Ctrl+J; if you use Quicktime Pro much, you will end up using the Movie Properties window often.


Once you bring out the Movie Properties window, it should be set to show the metadata already present (your digital camera probably added some). Metadata in Quicktime is called Annotations. You can always reach your annotations by choosing the Movie option in the left dropdown menu, and Annotations in the right dropdown menu.

When showing annotations, the Movie Properties window is divided into two parts. The top part shows the types of metadata already present (“Original Format” and “Information” in the illustration). If you click one of these, the data saved will be shown in the bottom half of the window (“SANYO DIGITAL CAMERA C4” in our sample). You can edit data by clicking the Edit button at the bottom.

Now for the really important bit: adding metadata. You do this by clicking the Add button at the bottom left of the Movie Properties window. That will bring up the Add Annotation window. In the top half of the window you pick which type of data you’re adding, and in the bottom half you type what you want to save. In our sample we’ve marked the Copyright metadata, and added the copyright statement “Copyright 2005. All Rights Reserved.” When you’re done, click Add. One important thing is that you have to add metadata one type at a time.

After you’ve added the relevant information you have to save your movie. Go to the File menu and pick Save.
Adding Metadata in Adobe Premiere Pro 1.5

Once you have finished editing your video and are ready to export it, follow these steps to add metadata such as Author, Title, Copyright Info, Description, Keywords, etc.

1. Go to File > Export > Adobe Media Encoder

2. The window that opens is called Transcode Settings. On the left side of this window is a link to Metadata with a drill-down arrow next to it. You can either enter the basic metadata, or drill down to add/remove metadata fields.

Once you click Add/remove fields, you are presented with a window full of wonderful metadata options.

3. Now your video is all loaded up with search engine-friendly metadata goodness and ready to encode using your favorite flavor of compression!
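
Both guides above are GUI-based; as a command-line alternative, here is a hedged sketch that assumes a recent ffmpeg build (its -metadata option postdates these notes). The streams are copied untouched and only the tags are written; filenames and tag values are placeholders:

# copy audio and video as-is, writing title and copyright tags
ffmpeg -i input.mov -c copy \
  -metadata title="Deptford Creek walk" \
  -metadata copyright="Copyright 2006. Some Rights Reserved." \
  output.mov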

1.4. applications

you can use applications like VLC or MPlayer to send files

for example, with vlc you can receive the signal from any device – camera, digital video broadcasting, freeview, satellite tv – and save it to your hard disk.

another example would be using mplayer as receiver and sender; any video you can open in mplayer can be sent to the output (there are two kinds of codecs: those for opening (watching, decoding) and those for sending (distributing, encoding)).
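
a sketch of that sender/receiver pairing, with placeholder addresses: vlc serves a file over http, and mplayer on another machine either plays the stream or dumps it to the hard disk unchanged.

# sender: serve a file as an mpeg-ts stream over http on port 8080
vlc input.mpg --sout '#std{access=http,mux=ts,dst=:8080}'

# receiver: watch the stream...
mplayer http://192.168.0.10:8080

# ...or save it to disk without decoding
mplayer -dumpstream http://192.168.0.10:8080 -dumpfile received.ts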

1.5. signals

to see the video on the screen it needs to arrive as an rgb (red, green, blue) signal – which is done over the standard vga port of your computer. The RGB color model utilizes the additive model in which red, green, and blue light are combined in various ways to create other colors. The very idea for the model itself and the abbreviation “RGB” come from the three primary colors in additive light models.

Note that the RGB color model itself does not define what exactly is meant by “red”, “green” and “blue”, so that the same RGB values can describe noticeably different colors on different devices employing this color model. While they share a common color model, their actual color spaces can vary considerably.

a normal tv set receives signals in yuv. if you want to receive on a tv you need to convert them.

The YUV model defines a color space in terms of one luminance and two chrominance components. YUV is used in the PAL and NTSC systems of television broadcasting, which is the standard in much of the world.

YUV models human perception of color more closely than the standard RGB model used in computer graphics hardware, but not as closely as the HSV color space.

Y stands for the luminance component (the brightness) and U and V are the chrominance (color) components. The YCbCr or YPbPr color space, used in component video, is derived from it (Cb/Pb and Cr/Pr are simply scaled versions of U and V), and is sometimes inaccurately called “YUV”. The YIQ color space used in the NTSC television broadcasting system is related to it, although in a more complex way.

YUV signals are created from an original RGB (red, green and blue) source. The weighted values of R, G and B are added together to produce a single Y signal, representing the overall brightness, or luminance, of that spot. The U signal is then created by subtracting the Y from the blue signal of the original RGB, and then scaling; and V by subtracting the Y from the red, and then scaling by a different factor. This can be accomplished easily with analog circuitry.

The following equations can be used to derive Y, U and V from R, G and B:
Y = 0.299R + 0.587G + 0.114B
U = 0.492(B − Y) = −0.147R − 0.289G + 0.436B
V = 0.877(R − Y) = 0.615R − 0.515G − 0.100B
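
as a quick sanity check of these formulas (assuming bc is available): for a neutral grey pixel with R = G = B = 128, the luminance weights sum to 1.0, so Y comes out as 128 and both chrominance components vanish.

echo "0.299*128 + 0.587*128 + 0.114*128" | bc -l   # Y = 128.000
echo "0.492*(128 - 128)" | bc -l                   # U = 0
echo "0.877*(128 - 128)" | bc -l                   # V = 0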

you can put the files into other containers, like avi or quicktime

2. editing

in mpeg you have to edit from key frame to key frame because it is organised in GOPs (groups of pictures). that means there are I, P and B frames (I = key frame, P stands for predicted, B stands for bidirectional) – normally in mpeg a group of pictures is around 12-15 frames.
P & B frames just carry the information that changes relative to the key frame.

in mp4 a group of pictures can be up to 250 frames (10 seconds at 25 fps).
you can only cut on key frames if using a compressed codec.

in avidemux you can define the keyframe interval (e.g. one keyframe per second – every 25 frames – or every half second – every 12 frames)

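cutting without re-encoding can be sketched on the command line with mencoder (times and filenames here are placeholders); with stream copy, the cut points snap to the nearest keyframes rather than the exact times given.

# copy roughly 30 seconds starting at 0:01:00 without re-encoding;
# the actual cut lands on the nearest keyframes
mencoder input.avi -ovc copy -oac copy -ss 0:01:00 -endpos 30 -o cut.avi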

Kino
http://www.linux-magazine.com/issue/34/Kino_Video_Editing.pdf

basic editing software like imovie

capturing from the command line: dvgrab, over the IEEE 1394 standard (firewire) (dv compression only) – but the newer video cameras can also play out mpeg
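
a typical dvgrab invocation as a sketch (the file prefix “deptford-” is just a placeholder): it captures raw dv over firewire and starts a new file at each recording break.

# capture from the firewire port into type-2 dv avi files,
# splitting automatically whenever the camera started a new recording
dvgrab --format dv2 --autosplit deptford-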

Jahshaka aims to become a cross-platform, open source, free video editing, effects, and compositing suite. It is currently in alpha stage, supporting realtime effects rendering, but lacking useful implementations of many features such as the non-linear editing system. It is written using Trolltech’s Qt, but its user interface is written using an OpenGL library to create GUIs.

Once finished, it could potentially rival programs such as Newtek’s Video Toaster and Pinnacle Liquid Edition.

The fact that it uses OpenGL and OpenML allows it to be ported to several different platforms, in principle all platforms which have fast enough support for OpenGL.

Jahshaka is released under the GNU General Public License.

cinelerra
Cinelerra is a free non-linear video editing system for the GNU/Linux operating system. It is produced by Heroine Virtual, and is distributed under the GNU General Public License. Cinelerra also includes a video compositing engine, allowing the user to perform common compositing operations such as keying and mattes.

Cinelerra was first released August 1, 2002, and was based in part on an earlier product known as Broadcast 2000. Broadcast 2000 was withdrawn by Heroine Virtual in September 2001; Heroine Virtual cited legal liability concerns in the wake of litigation from the RIAA and MPAA and the costs involved in high-end video production.

Cinelerra is sometimes criticized for requiring too much computing power to run. Its authors counter that it is a professional program and that there are alternative programs for amateurs.

the most supported codec is mpeg-2
you can open the files and put the video wherever you want
the files are in media
you can put them directly into the timeline

tools
assets are the films
titles are the file names

in the viewer you can set in and out points
you can add comments to the log file

you can put a video transition between the clips, you can see the time with a right click, and you can detach and change the effects.

there are different filters you can choose, like polarising, freeze frame, blur, chroma key

there are three editing modes: drag & drop, cut & paste, and generating key frames while tweaking

andrev, valentina (from italy) work at kiberpipa (sign up to the mailing list for cinelerra)

installing cinelerra

for debian unstable
kedit /etc/apt/sources.list

# cinelerra i386
deb http://www.kiberpipa.org/~minmax/cinelerra/builds/sid/ ./
deb-src http://www.kiberpipa.org/~minmax/cinelerra/builds/sid/ ./
# additional packages not found in Debian's official repositories, provided by Christian Marillat:
deb ftp://ftp.nerim.net/debian-marillat/ sid main
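
with those lines in place, the usual apt steps should pull the package in (assuming the build really names the package “cinelerra”):

apt-get update
apt-get install cinelerra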

there are issues with getting the multimedia packages into official debian because of copyright – under french legislation it is possible (europe)

tangent_tv on archive.org

found on  new-media curating list:
To add to the discussion on TV, the programs V2 has organised during the
International Film Festival of Rotterdam are archived on archive.org.

The programs are located here:

DIY_TV
focused on the growing phenomenon of independent microTV broadcasters.
http://www.archive.org/details/tangenttv1

SILENT_TV
tele-matic workshop on how to build your own tv transmitter
http://www.archive.org/details/tangenttv2

GATED_TV
dealt with copyright issues vs open source and open archives.
http://www.archive.org/details/tangenttv3

TRUTH_TV
brought together several artists to discuss their work exploring the
nature of truth as represented by television.
http://www.archive.org/details/tangenttv4

DISH_TV
dealt with satellites and the constellations around global television.
http://www.archive.org/details/tangenttv5

The last one (AVANT_TV) will be added after the live session today
(starts at 1500 CET http://www.v2.nl/live).

adam
