D.TV

The Glass Room: Big data, privacy and interactive art

Over 40,000 people visited The Glass Room in London and New York City. Hear how Tactical Tech and Mozilla are using immersive art to inform people about data security and privacy.

The Glass Room is an immersive, hands-on art experience that teaches people about personal data.

During two exhibitions, in New York City and London, over 40,000 people learned how personal data is gathered, how it’s used, and the myriad types of data that companies, governments, organisations and others can access. The project, a collaboration between Tactical Tech and Mozilla, offers lessons in communicating about technical issues like data and privacy. It also provides insights into innovative public engagement strategies.

For a pedestrian on a busy street in London or New York, The Glass Room is a pop-up “tech store with a twist.” At first glance, it seems to offer the latest in shiny digital consumer products, such as the newest tablet or fitness tracker. But step inside and you find there’s nothing for sale. Instead, as you explore, you’ll find a selection of artworks examining who is collecting our personal data and why, and what we can do about it.

The Glass Room brings together art, technology and the aesthetics of consumer retail. Visitors are presented with political questions about the data we share online with or without our knowledge. The exhibition creates a welcoming and accessible space that enables visitors to question their own assumptions about their digital lives and data, and to explore difficult and suppressed questions about their online activities.

A Glass Room tour. Photo by David Mirzoeff.

Tactical Tech has partnered with Mozilla to open Glass Room exhibitions in New York and London in the two years since opening “The White Room” as part of a larger exhibition at the Haus der Kulturen der Welt in Berlin.

Following The Glass Room New York, the New York Times reported:

“To move through the Glass Room…was to be reminded of the many ways we unwittingly submit ourselves and one another to unnecessary surveillance, with devastating consequences… I left the Glass Room invigorated by the ways artists are exploring the dark side of our digital footprint.”

Tactical Tech has worked on data and privacy for some years. We’ve seen many public awareness campaigns on data, privacy and surveillance struggle to make a significant impact.

Putting together The Glass Room, we saw an opportunity to work with artists to lead people outside their comfort zone and test their assumptions. By creating an immersive space we were able to test different forms of engagement through art objects, animations and products visitors could take away, such as our Data Detox Kits.

People interacting at the Data Detox Bar. Photo by David Mirzoeff.

With The Glass Room we hoped to bring debates and discourse highly prevalent in academia and technology activism to a broader audience. In a sense, we hoped to fill the perception gap between niche discussions of technology issues and popular media depictions of technology, such as Black Mirror, by presenting abstract and often speculative issues as a real-life, tangible, and even tactile, experience.

It turned out to be a timely intervention, as conversations about large-scale data harvesting entered more mainstream discourse. We challenged audiences to engage in a broader and deeper reflection on the “quantified society”: the impact all-pervasive data sharing is having on our public sphere – transport, health, education – as well as on our sense of ourselves.

Following the success of the White Room in Berlin, we worked closely with Mozilla, artists, and designers from an experiential agency to expand and develop the concept to work in a retail context.

In both cities before opening, we launched advertising campaigns on billboards, in subway stations and online.

The exhibit was open for around three weeks in each city and over that period visitor numbers kept growing. By the final weekend in London we had reached our capacity inside and queues formed down the road.

The results: over 40,000 visitors attended, and there was widespread media coverage, including articles in the New York Times, Channel 4 News, the BBC, New Scientist, Vogue and tech media such as The Verge. The exhibitions also sparked social media activity; a Facebook Live event, for example, attracted over 47,000 viewers.

Audiences have been diverse. The majority of visitors were ordinary passersby, either drawn in as they walked past or brought in by word of mouth. We attracted tourists, hipsters on their way to the cinema, families on a day out and people simply wandering in while shopping.

For the exhibits, we partnered with local artists to curate works that add context and relevance, so that the facts about how our personal data is collected and used could come alive for non-technical visitors. We also installed a “Data Detox Bar” – at the back of the space in New York and right in the centre of the store in London – where people could pick up a “Data Detox Kit,” our easy 8-day guide to a digital makeover. And we created dedicated event spaces with curated programmes involving activists, technologists, journalists and some of our partner artists, who presented and discussed their own work around data and privacy.

Glass Room visitors, 2017. Photo by Alistair Alexander.

Critically, for both cities, we recruited and trained a local group of “Ingeniuses” from diverse backgrounds and communities. Many had no prior experience in technology or privacy, but after a four-day training camp they had enough knowledge to give privacy help and advice. They were a constant and welcoming presence in the space, engaging visitors in conversations, providing explanations, and even offering recommendations when these were sought. They also led workshops, free and open to everyone and to all levels of technical knowledge, with titles such as “WTF – What the Facebook,” “Mastering your Mobile” and “DeGooglize your Life.”

The exhibitions sparked a depth of engagement rarely seen in awareness-raising campaigns. Many visitors stayed for an hour or more; some stayed for an entire day to take free workshops. Lots of people came back for repeat visits, often bringing with them friends or family.

In New York and London visitors filled out over 840 feedback cards. Some typical feedback:

After visiting the Glass Room I feel…

  • As if I’m finally accessing the vault control room. Shocked, enlightened, provoked.
  • Happy someone is showing us how our data is being used.
  • Interested in technology and data. I want to study technology and data. (from an eight-year-old youngster)
  • Glad there is so much research keeping watch on corporate surveillance.
  • My mind has been opened.
  • Healthily paranoid.

After visiting the Glass Room I want…

  • To cleanse my online life and move away from Google.
  • To know more. It is not about becoming paranoid, it’s about being more prepared.
  • To get involved in protective steps to recover privacy for all.
  • To be more creative how I talk to people about privacy and security.
  • To talk to human beings more than ever.

A child at an interactive Glass Room exhibit in London. Photo by David Mirzoeff.

What we learned

What can be taken away from this project? We think at least these things:

By setting up an exhibition in prime shopping locations, we were able to take issues of data and privacy to where people are, rather than hoping they’d come to us.

By using art to explore these topics, we were challenging people’s assumptions in ways a conventional narrative can never achieve, and we were opening avenues for further enquiry.

By mirroring the design cues of tech stores, we used a visual language that everyone understands, and thus we were able to attract people who might well be put off by an art exhibition.

By taking the issues offline and creating an immersive physical space that was free and open to everyone, we created a public spectacle – an inclusive space where the experience of discovering these curious objects was shared with others, making the visit memorable and impactful.

And perhaps most crucially, by having a team of Ingeniuses who were from the local communities, we made the exhibition a vibrant, warm and human space – where visitors always had someone they could talk to.

Challenges

Of course, putting on a project of this scale has its limitations and challenges.

Renting prime retail locations and launching outdoor advertising campaigns doesn’t come cheap, and we – like most small non-profits – needed additional support. We were lucky that Mozilla proved to be the perfect partner. Not only did they have the resources to do the project justice, but as the project progressed it became clear that their objectives and values were closely aligned with ours, and they gave us the freedom and support to develop the concept as far as we could.

Even with such an ideal partner though, an exhibition like this can only be temporary and will only reach a limited number of people in a very specific location. So we need to figure out ways of reaching more people, in more places, at substantially less cost.

Easier said than done, but we’re working on it. During 2017 we developed a portable version of the exhibition, the Glass Room Experience. This kit recreates some of the objects from the main exhibition as 3D cardboard shapes and posters. We trialled it at around 10 events in Europe as we iterated the design. The kit’s development, production and distribution cost only a fraction of the main exhibition’s. And it provides small organisations and groups with materials they can use to help explain these issues to their communities.

Although far smaller in scale, the Experience has, we believe, had a highly positive effect on its visitors – over 7,000 of them to date – and has allowed us to take the Glass Room to places and people who would never see the full exhibition.

The future

We are working again with Mozilla to produce a new edition focused on the “Internet of Things”, which will be distributed to 75 events and organisations around the world in 2018. At the same time, we plan to find “multiplier” organisations that can print, distribute and support dozens of Glass Room Experience events themselves – allowing us to reach far beyond what our own capacity would allow.

We’ll also be working with larger events and festivals to produce a mobile version of the exhibit, Glass Room Plus, which will be entirely self-funded. And we’ll be looking at developing new versions for schools and young people, libraries and universities, among many other audiences.

We’ll be looking at developing our self-learning resources online, so people can access more structured approaches to improving their online privacy.

We may yet open a major Glass Room pop-up store in another major world city in the future, but for now we’re working on promising avenues that can take Glass Room themes and narratives to yet new places and contexts: a hack lab in Lagos, a conference in Taipei, a crypto-rave in Brazil.

So the Glass Room project continues to evolve and develop. Indeed, it may be coming to you in 2018 – wherever and whoever you are. And if it’s not coming anywhere near, please get in touch – you could be the perfect Glass Room Experience host.


This story was written by Alistair Alexander of Tactical Tech. Alexander helped develop and implement The Glass Room project.

Top photo of Glass Room storefront by Nousha Salimi.

Visit the Glass Room London virtually, in WebVR

This past week, Mozilla and Tactical Tech launched The Glass Room in London. This “store” is actually an exhibition that disrupts how we think about technology and encourages people to make informed choices about their online lives. Now, anyone can experience the exhibit online and in real life.

Like a Black Mirror episode come to life, The Glass Room prompts reflection, experimentation and play. At first glance, it offers the latest in shiny digital consumer products, such as the newest tablet, fitness tracker or facial recognition software. But as visitors go inside, they find there is nothing for sale.

A closer look at the ‘products’ reveals works of art that peek behind the screens and into the hidden world of what happens to user data. The ‘Ingenius’ team is on hand to answer questions raised by the exhibit, engaging the public in conversation and helping them with alternatives, privacy tips and tricks.

This 360° view will take you deep inside the space and allow you to experience the exhibit everyone in London is talking about.

Experience The Glass Room London in WebVR

As a pioneer in WebVR technology, Mozilla believes this is an excellent opportunity to make this unique experience available on the web to everyone, everywhere, without the need to install an app or proprietary VR software. You simply click on the link and enjoy.

If you have an Oculus Rift or HTC VIVE, you can click on the VR icon to launch the Glass Room experience in WebVR. You can then immerse yourself in everything The Glass Room has to offer without taking a trip to London. All you need is the latest version of Firefox. WebVR for Firefox is enabled by default on Windows, so simply open the web site and you can explore the virtual Glass Room with your headset and hand controls. Mac users with headsets can download Firefox Nightly for early access to WebVR.
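For readers curious about what sits behind that “simply click” flow, here is a rough sketch of the WebVR 1.1 entry path Firefox exposed at the time: detect a headset, ask it to present, then render per-eye views in its frame loop. This is an illustrative sketch only, not code from the Glass Room site; the canvas id and the render() helper are assumptions.

```typescript
// Illustrative sketch of the WebVR 1.1 entry flow (the API Firefox shipped in 2017).
// Not the Glass Room site's actual code; "scene" and render() are assumed names.

const canvas = document.getElementById("scene") as HTMLCanvasElement;

// WebVR 1.1 types are no longer part of the standard DOM typings, hence the casts.
const nav = navigator as any;

async function enterVR(): Promise<void> {
  if (!nav.getVRDisplays) {
    console.log("WebVR unavailable; the flat 360 view still works in any browser.");
    return;
  }

  const displays = await nav.getVRDisplays();
  if (displays.length === 0) {
    console.log("No headset detected.");
    return;
  }
  const display = displays[0];

  // requestPresent must be triggered by a user gesture, e.g. clicking the VR icon.
  await display.requestPresent([{ source: canvas }]);

  const frameData = new (window as any).VRFrameData();
  const onFrame = (): void => {
    display.getFrameData(frameData); // per-eye view and projection matrices
    render(frameData);               // hand the matrices to the page's WebGL renderer
    display.submitFrame();           // push the rendered frame to the headset
    display.requestAnimationFrame(onFrame);
  };
  display.requestAnimationFrame(onFrame);
}

// Placeholder for whatever WebGL/three.js draw call the page actually uses.
declare function render(frame: any): void;
```

The same idea applies whether the scene is drawn with raw WebGL or a library such as three.js or A-Frame; only the render() call changes.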

As this revolutionary technology develops, there will be more opportunities to create interactive exhibits, like The Glass Room, in VR that tell immersive stories on the web.

Visit vr.mozilla.org to find more experiences we recommend in WebVR, including A-Painter, a VR painting experience. If you’d like to learn more about the history and capabilities of WebVR, check out this Medium post by Sean White.

360º Video and Interactive Storytelling

During the Open Fields conference, Adnan presented “after.video – displaying video as theory and reference system”:

360º Video and Interactive Storytelling

Aigars CEPLITIS / Luis BRACAMONTES / Adnan HADZI / Arnas ANSKAITIS / Oksana CHEPELYK
Venue: The Art Academy of Latvia

Moderator: Chris HALES

Aigars CEPLITIS. The Tension of Temporal Focalization and Immersivity in 360 Degree 3D Virtual Space

The fundamental raison d’être for productions of immersive technologies is the attainment of absolute psychosomatic and physical embodiment. The impasse for audiovisual works shot in 360° space, however, is that their current schemata, as well as their visual configuration, oppose the very type of experience they strive to deploy. To crack the code of narrative design that would allow 360° films to offer a truly immersive experience, a number of 360° video prototypes have been created and tested against the backdrop of Seymour Chatman’s narrative theory as well as Marco Caracciolo’s theories of embodied engagement, in order to assess the extent of immersion in a variety of 360° narrative settings, zooming in on summary, scene, omission, pause, and stretch. This prototype simulation is then followed by testing audiovisual plates whose micronarratives are structured in a rhizomatic pattern. Classical films are edited elliptically, although cut and omission are demarcated in cinema, with the cut being an elliptical derivative, and in favor of using freeze frames to pause for pure description. In 360° cinema, in turn, omission, cut and pause do not operate properly; its cinematic preference for the here and now creates an inherent resentment toward montage. Singulative narrative representations of an event (describing once what happened once) remain the principal core of spherical cinema, with repetitive representations deployed rarely, merely as special effects, or as a patterning device in flashbacks or thought-forming sequences through the post-digital editing style. Repetitive sequences in 360° become particularly disturbing when their digital content is viewed using VR optical glasses instead of desktop computers, and such contrasts answer more fundamental questions as to whether montage is detrimental in 360° film, what types of story material and genre are more suitable for 360° cinema, and how we gauge the level of embodiment. Finally, the residual analysis of the aforementioned prototype simulation brings to the fore the rhizomatic narrative kinetics (the fusion of the six Deleuzoguattarian principles with the classic narrative canons) that should become, de facto, the language of 360°, if embodiment is to be the key.

Biography. Aigars Ceplītis is the Creative Director of the Audiovisual Media Arts Department at RISEBA University, where he teaches Advanced Film Editing Techniques and Film Narratology. He is also a PhD candidate at the New Media MPLab, Liepaja University, where he is investigating novel storytelling techniques for 360-degree cinema. Aigars has worked as a film editor on the feature films “The Aunts”, “The Runners”, “A Bit Longer” and “Horizont”, and on 20 episodes of the TV miniseries “The Secrets of Friday Hotel”. He formerly served as an office manager and film editor for Randal Kleiser, an established Hollywood director best known for hits such as “Grease” and “Blue Lagoon”. While in Los Angeles, Aigars headed a film and video programme for disadvantaged children of Los Angeles under the auspices of the Stenbeck family, the owners of MTG. Aigars holds an M.F.A. in Film Directing from the California Institute of the Arts and a B.A. in Art History from Lawrence University in Wisconsin.

Luis BRACAMONTES. Teleacting the story: User-centered narratives through navigaze in 360º video

This research explores the possibilities of a user-centered narrative strategy for 360º video through a new feature called “navigaze”. Navigaze is a feature introduced by the Swedish startup “SceneThere”, and it allows a controlled level of agency in the storytelling that sits on the borderline between gaming and film. “Teleaction” here is understood from Manovich’s perspective of “acting over distance in real time”, as opposed to “telepresence”, which implies only “seeing at a distance”. Immersive storytelling for Virtual Reality and 360º video presents a new challenge for creators: the death of the director, at least in the traditional sense seen in other mediums such as theatre or film, as its frameless quality and highly active nature demand a fluid and flexible narrative.
Navigaze allows the user to inhabit a story instead of just witnessing it. By including a space-warp feature reminiscent of Google Street View, users can explore the “virtual worlds” and unravel pieces of the story on their own by gazing at the blue hotspots that transport them to the next location within the same world. Thus, the story constitutes a series of puzzle pieces that each person can put together as they wish, creating a unique narrative experience, similar to Julio Cortázar’s game-changing novel “Hopscotch” (1963).
Focusing on two pieces by SceneThere, “Voices of the Favela” (2016) and “The Borderland” (2017), this research delves into the evolution of storytelling in VR and 360º video and the early stages of the creation of their own narrative language.

Biography. Luis Bracamontes is a narrative designer and writer specializing in storytelling for VR and AR. He is an intern at the Virtual Reality start-up “VRish” in Vienna, and he is currently studying an Erasmus Mundus Joint Master Degree in “Media Arts Cultures” between Danube University Krems, Aalborg University and the University of Łódź. He has a B.A. in Communication Sciences with a specialization in Marketing and has worked for over 6 years in performing arts and literature. In 2014, he received the “Youth Achievement Award for Art & Culture”, an honorary award given by the City Hall of Morelia to the most promising and active young people with an outstanding trajectory in art and culture, for the work of his production company “Ala Norte”. Since 2015, he has been a member of the Society of Writers of Michoacán (SEMICH). In 2016, he worked as an innovation and marketing consultant in the VIP Fellowship by Scope Group and the Ministry of Finance of Malaysia, in Kuala Lumpur. His recent research includes a paper on hybrid VR narratives supervised by Oliver Grau, and a research project on post-digital archive experiences supervised by Morten Sondergaard and presented at the NIME (New Interfaces for Musical Expression) International Conference 2017 in Copenhagen.

Adnan HADZI. after.video – displaying video as theory and reference system

After video culture rose during the 1960s and 70s with portable devices like the Sony Portapak and other consumer-grade video recorders, it subsequently underwent the digital shift. With this evolution the moving image inserted itself into broader, everyday use, but also extended its patterns of effect and its aesthetic language. Film and television alike have transformed into what is now understood as media culture. Video has become pervasive, importing the principles of “tele-” and “cine-” into the human and social realm, thereby also propelling “image culture” to new heights and intensities. YouTube, emblematic of network- and online-video, marks a second transformational step in this medium’s short evolutionary history. The question remains: what comes after YouTube?
This paper discusses the use of video as theory in the after.video project (http://www.metamute.org/shop/openmute-press/after.video), reflecting the structural and qualitative re-evaluation it aims at on both a design and an organisational level. In accordance with the qualitatively new situation video is set in, the paper discusses a multi-dimensional matrix which constitutes the virtual logical grid of the after.video project: a matrix of nine conceptual atoms is rendered into a multi-referential video-book that breaks with the idea of linear text read from left to right, top to bottom, diagonally and in ‘steps’. Unlike previous experiments with hypertext and interactive databases, after.video attempts to translate online modes into physical matter (a micro computer), thereby reflecting logics of new formats otherwise unnoticed. These nine conceptual atoms are then re-combined differently throughout the video-book – rendering a dynamic, open structure and allowing access to the after.video book over an ‘after_video’ WiFi SSID.

Biography. Dr. Adnan Hadzi has been a regular at Deckspace Media Lab for the last decade, a period over which he developed his Goldsmiths PhD based on his work with Deptford.TV. It is a collaborative video editing service hosted in Deckspace’s racks, based on free and open source software, compiled into a unique suite of blog, CVS, film database and compositing tools. Through Deptford TV and Deckspace TV he maintains a strong profile as a practice-led researcher. Directing the Deptford TV project requires an advanced knowledge of current developments in new media art practices and the moving image across different platforms. Adnan runs regular workshops at Deckspace. Deptford.TV / Deckspace.TV is less TV, more film production, but has tracked the evolution of media toolkits and editing systems such as those included in the excellent PureDyne Linux project.
Adnan is co-editing and producing the after.video video book, exploring video as theory and reflecting upon networked video as it profoundly re-shapes medial patterns (YouTube, citizen journalism, video surveillance, etc.). This volume revolves more particularly around a society whose re-assembled image sphere evokes new patterns and politics of visibility, in which networked and digital video produces novel forms of perception, publicity – and even (co-)presence. A thorough, multi-faceted critique of media images that takes up perspectives from practitioners, theoreticians, sociologists, programmers, artists and political activists seems essential, presenting a unique publication which reflects upon video theoretically but attempts to fuse form and content. http://orcid.org/0000-0001-6862-6745

Arnas ANSKAITIS. The Rhetoric of the Alphabet

Through practice and research I aim to reflect on the connections between language, perception, writing and non-writing.
Jacques Derrida wrote in his seminal book Of Grammatology: “Before being its object, writing is the condition of the episteme”. I am curious – to what extent is a written text still (or should it be) the condition of knowledge in artistic research? Does artistic research in general belong to and depend on this understanding of science? Would it be possible to do research without writing? How then could one share the findings and outcomes of such research with the public and other researchers?
Writing interests me not only in the context of language, but also from the position of handwriting. How did letters of the alphabet emerge? It seems they were shaped by a human hand. What would letters look like, if they were written not on a flat sheet of paper, but in simulated three-dimensional space? In an attempt to answer the self-imposed question, I have created 3D models of cursive letters and exhibit them as video projections. In each visualization an imaginary writing implement produces an uninterrupted trace – a stroke on the writing plane. On this digitally-simulated stroke – the projection plane – a stream of texts and images is being projected.
I will attempt to combine two sides of artistic research (practice and theory) through writing – a writing system as an art project. Part of the doctoral thesis could be written and presented using this system.

Biography. Arnas Anskaitis (1988) is a visual artist, a lecturer and a PhD student at the Vilnius Academy of Arts. He employs a variety of media in his work, but always starts from a direct dialogue with the site and context in which he is working. His work has been shown at the Riga Photography Biennial (2016); National Art Museum of Ukraine, Kiev (2016); 10th Kaunas Biennale (2015); Contemporary Art Centre, Vilnius (2014); 16th Tallinn Print Triennial (2014); National Gallery of Art, Vilnius (2012); Gallery Vartai, Vilnius (2012), and in other projects and exhibitions.

Oksana CHEPELYK. Virtual Reality and 360-degree Video Interactive Narratology: Ukrainian Case Study.

The aim of this thesis is to present some Ukrainian initiatives developing VR and 360-degree interactive video filmmaking: SENSORAMA in Kyiv and MMOne in Odesa. SENSORAMA, an immersive media lab for VR, grows the VR/AR ecosystem in Ukraine by supporting talent with infrastructure, education, mentorship and investment.
«Chornobyl 360», an interactive documentary created by the founders of Sensorama Lab and filmed in 360-degree spherical view at the Chernobyl Nuclear Power Plant, the site of the Chernobyl disaster in 1986, has proven to be in demand on the global market. Immersive technologies are being used to change human experience in fields that matter to millions: VR therapy research, healthcare, etc. SENSORAMA is based in UNIT.city, a brand new tech park in Kyiv.
The company MMOne from Odesa has created the world’s first three-axis virtual reality simulator, in the form of a chair attached to an industrial robot-like arm that moves in response to the action in a video game called Matilda. MMOne hopes the invention takes the global gaming industry in some entirely new directions. The startup debuted Matilda in October 2015 at Paris Games Week in France, presenting its device in cooperation with the multinational video game developer Ubisoft, which created a racing game especially for Matilda called “Trackmania.” Since the Paris games exhibition, MMOne has had several big companies from the U.S. IT community ask to try out the chair, including YouTube, Opera Mediaworks (the world’s leading mobile advertising platform), Facebook’s Instagram, and Oculus LLC.

Biography. Dr. Oksana Chepelyk is a leading researcher in the New Technologies Department at the Modern Art Research Institute of Ukraine, author of the book “The Interaction of Architectural Spaces, Contemporary Art and New Technologies” (2009) and curator of the IFSS, Kiev. Oksana Chepelyk studied at the Art Institute in Kiev, followed by a PhD course in Moscow, Amsterdam University, the New Media Study Program at the Banff Centre, Canada, Bauhaus Dessau, Germany, and a Fulbright Research Program at UCLA, USA. She has exhibited widely internationally and has received the ArtsLink 1997 Award (USA), FilmVideo99 (Italy), the EMAF 2003 Werkleitz Award (Germany), the ArtsLink 2007 Award (USA) and the Artraker Award 2013 (UK). Residencies: CIES, CREDAC and the Cité Internationale des Arts in Paris (France), MAP, Baltimore (USA), ARTELEKU, San Sebastián (Spain), FACT, Liverpool (UK), Bauhaus Weimar (Germany), SFAI, Santa Fe, NM (USA), DEAC, Budva (Montenegro). She has been awarded grants in France, Germany, Spain, the USA, Canada, England, Sweden and Montenegro. Her work has been shown at: MoMA, New York; MMA, Zagreb, Croatia; the German Historical Museum, Berlin and Munich, Germany; the Museum of Art History, Vienna, Austria; MCA, Skopje, Macedonia; MJT, LA, USA; the Art Arsenal Museum, Kyiv, Ukraine; “DIGITAL MEDIA Valencia”, Spain; MACZUL, Maracaibo, Venezuela; “The File” – Electronic Language International Festival, São Paulo, Brazil; and XVII LPM 2016, Amsterdam, Netherlands.

Valletta 2018 discusses Cultural Mapping in local and international contexts

Deptford.TV moved to Malta and became Dorothea.TV, still D.TV. We took part in the Cultural Mapping Conference.

The second Valletta 2018 international conference Cultural Mapping: Debating Spaces & Places opened at the Mediterranean Conference Centre, in Valletta, this morning. The conference focuses on cultural mapping, the practice of collecting and analysing information about cultural spaces and resources within a European and Mediterranean context.

Delivering the opening address, Valletta 2018 Foundation Chairman Jason Micallef spoke in light of the conflict in Syria, which is resulting in the widespread destruction of cultural resources, such as heritage sites, in the Mediterranean region.

“Against this background cultural mapping takes on a renewed importance, not only in preserving the existing heritage of communities, but particularly in disseminating this knowledge through new, global channels and technology, forging new relationships between people across the world,” Jason Micallef said. “The examples of cultural mapping presented during this conference will allow us to dream of new ways in which the knowledge and understanding of our shared histories and our shared futures can be spread across the world”.

Bringing together a number of international academics, researchers, cultural practitioners and artists, the conference will explore various exercises of cultural mapping taking place across the world. With the subject being relatively new to Malta, speakers will be discussing the role of cultural mapping and how it can influence local cultural policy, artistic practice, heritage and cultural identity, amongst others.

The conference is being organised following last April’s launch of www.culturemapmalta.com – the online map exhibiting the data collected during the first phase of the Cultural Mapping project, led by the Valletta 2018 Foundation. Speakers include experts, academics, researchers and activists within the fields of tangible and intangible heritage, sustainable development, and cultural policy, from across Europe, the Mediterranean and beyond. Keynote speeches will be delivered by Prof. Pier Luigi Sacco, a cultural economist who will be presenting examples of cultural mapping taking place in Italy and Sweden, and Dr Aadel Essaadani, the Chairperson of the Arterial Network, a Morocco-based organisation that brings together art and culture practitioners across the African continent.

The conference is being organised by the Valletta 2018 Foundation in collaboration with the Centre for Social Studies (CES), University of Coimbra. The Creative Europe Desk, the European Commission Representation Office, the EU-Japan Fest Committee, the French Embassy, Fondation de Malte and Spazju Kreattiv are also supporting the event.

https://www.culturemapmalta.com/#/
https://valletta2018.org/cultural-mapping-publication/
https://valletta2018.org/news/cultural-mapping-conference-registration-now-open/
https://valletta2018.org/events/subjective-maps-hamrun-workshop/
https://valletta2018.org/events/subjective-maps-birzebbuga-workshop/
https://valletta2018.org/events/subjective-maps-valletta-workshop/
https://valletta2018.org/news/e1-5m-awarded-to-valletta-2018-european-capital-of-culture/
https://valletta2018.org/objectives-themes/
https://valletta2018.org/cultural-programme/naqsam-il-muza/
https://valletta2018.org/events/naqsam-il-muza-gzira/
https://valletta2018.org/news/naqsam-il-muza-art-on-the-streets-of-marsa-and-kalkara/
http://heritagemalta.org/
https://muza.heritagemalta.org/
https://valletta2018.org/events/naqsam-il-muza-marsa/
https://valletta2018.org/events/naqsam-il-muza-kalkara/
https://valletta2018.org/events/naqsam-il-muza-the-art-of-sharing-stories/
https://valletta2018.org/news/naqsam-il-muza-the-art-of-sharing-stories/
https://valletta2018.org/organised_events/muza-making-art-accessible-to-all/
https://valletta2018.org/news/nationwide-participation-of-the-valletta-2018-cultural-programme/
https://valletta2018.org/events/psychoarcheology-fragmenta-event-with-erik-smith/
https://valletta2018.org/events/fragmenta-imhabba-bl-addocc/
https://valletta2018.org/events/fragmenta-from-purity-to-perversion/
https://valletta2018.org/events/fragmenta-untitled-ix-xemx/
https://valletta2018.org/events/fragmenta-outside-development-zone-odz/
https://fragmentamalta.com/
https://valletta2018.org/events/fragmenta-hortus-conclusus/
https://valletta2018.org/events/film-screening-blind-ambition-and-qa-with-hassan-khan/
https://valletta2018.org/events/fragmenta-outside-development-zone-odz/
https://valletta2018.org/events/get-your-act-together-science-in-the-city/
https://valletta2018.org/events/notte-bianca/
https://valletta2018.org/events/malta-book-festival/
https://valletta2018.org/events/wrestling-queens/
https://valletta2018.org/events/rima-digital-storytelling-workshop/
https://valletta2018.org/cultural-programme/recycled-percussion/
http://latitude36.org/
https://valletta2018.org/latitude-36/
https://valletta2018.org/news/latitude-36-call-for-maltese-living-abroad/
https://valletta2018.org/cultural-programme/latitude-36/
https://valletta2018.org/bar-europa-is-good-for-the-spirit/

Alexa, Who is Joybubbles?

Prix des Beaux-Arts Genève

Salle Crosnier opening hours (public holidays included)
Tuesday–Friday   15:00 – 19:00
Saturday             14:00 – 18:00

Thursday 2 November: open until 20:30.

Alexa, Who is Joybubbles is the result of a collaboration between !Mediengruppe Bitnik and the electronic music composer Philippe Hallais. It is a song that revives the memory of Joybubbles, the first phone phreaker, and imagines his intervention in today’s network of connected household devices and his encounter with personal assistant applications.

Phone phreakers, whose activity dates back to the sixties, were avid and mischievous explorers of the telephone network. The network fascinated them because it was the first network – in fact the first computer – and because it connected the whole world. One of the pioneers, and one of the most gifted among them, was Joybubbles (25 May 1949 – 8 August 2007), born Josef Carl Engressia Jr. in Richmond, Virginia, USA. Blind from birth, he began to take an interest in the telephone at the age of four. While still very young, he had already worked out how to make calls for free. He had perfect pitch and could whistle at 2600 hertz, the frequency operators used to route calls and to make and break connections. Joybubbles was thus one of the first to explore this network and learn its codes, using nothing but his breath. To produce this tone, other phreakers used devices they built themselves. Joybubbles acted as a catalyst, uniting phreakers engaged in all kinds of activities into one of the first virtual social networks. Although he was blind, the telephone gave him access to a network of like-minded people all around the world. After the announcement of his expulsion from university in 1968 and his conviction for telephone offences in 1971, he became the nerve centre of the movement. The phreakers discovered that they could use certain telephone switches, such as those used for conference calls, so that the geographically dispersed group could discuss and exchange ideas and knowledge by dialling a single number, creating a social network well before the internet.

!Mediengruppe Bitnik & Philippe Hallais have looked into the methods of these early hackers in order to open a dialogue with Alexa and her fellow intelligent personal assistants. These devices, semi-autonomous entities, are just beginning to colonise our homes. They are part of a new ecosystem of devices that connect physical and virtual space and are controlled by voice. These devices have a certain capacity to act: they act according to a set of rules and algorithms. Those algorithms and rules are not disclosed to the user, nor are the data the devices collect. Users therefore have no grip on how these devices operate and cannot assess their bias. They cannot know what data about them is collected by the device, what information is extracted from it, and what is then shared with other devices and other companies.

What will Joybubbles do with these voice-controlled devices? Who is acting when these devices act? Is it really me ordering food when my fridge decides to stock up? And what would happen if it were hacked and sent spam instead? When I surround myself with these semi-autonomous devices, is my capacity to act extended or, on the contrary, diminished? What happens when one of these devices is triggered by a song playing on the radio?

The music here refers to the great influence that mobile phones have had on contemporary popular music such as dancehall and ragga. Since the 1990s, when these phones started to become part of our lives, they have influenced the way music is produced. From its very beginnings, the mobile phone has acted as a portable sound system, first through popular songs used as ringtones, then through the internet access provided by smartphones. With electronic instrumentation gaining ground since the 1980s, the sound of dancehall has changed considerably, becoming more and more characterised by instrumental sequences (or “riddims”). The typical sounds of mobile phones have become a genuine source of samples. Alexa, Who is Joybubbles is a homage to the use of the telephone in dancehall, and to Joybubbles. ♥‿♥

Philippe Hallais is an electronic music composer, born in 1985 in Tegucigalpa, Honduras, who lives in Paris. His music plays with the reappropriation of sonic clichés, media folklore and the multiplicity of musical languages associated with dance subcultures. He has so far released three albums under the pseudonym Low Jack (Garifuna Variations, L.I.E.S, 2014; Sewing Machine, In Paradisum, 2015; Lighthouse Stories, Modern Love, 2016) and one under his own name, An American Hero, on the Modern Love label in 2017. In concert he has collaborated with the musicians Ghedalia Tazartès and Dominick Fernow / Vatican Shadow, and he has created performances for the Musée du Quai Branly, the Centre culturel suisse and the Fondation d’entreprise Ricard in Paris.

The duo !Mediengruppe Bitnik (read: the not mediengruppe bitnik) lives and works in Berlin and Zurich. The two contemporary artists take the internet as both their subject and their working material. Their practice starts from the digital in order to transform physical spaces, and regularly uses a deliberate loss of control to challenge established structures and mechanisms. !Mediengruppe Bitnik’s works raise fundamental questions about contemporary issues.

!Mediengruppe Bitnik is made up of the artists Carmen Weisskopf and Domagoj Smoljo. Their accomplices are the London-based filmmaker and researcher Adnan Hadzi and the reporter Daniel Ryser. They have received, among other awards, the Swiss Art Award, the Migros New Media Jubilee Award, the Golden Cube at the Kassel Dokfest and an honorary mention at Ars Electronica.


The Prix de la Société des Arts • Arts Visuels • Genève 2017
(Calame • Diday • Harvey • Neumann • Spengler • Stoutz)
is awarded to !Mediengruppe Bitnik.

!Mediengruppe Bitnik is a duo composed of Carmen Weisskopf (*1976, Switzerland) and Domagoj Smoljo (*1979, Croatia). The two artists live and work in Zurich but currently reside in Berlin. !Mediengruppe Bitnik uses the internet as both the subject and the material of their artistic work, starting from the digital to transform physical space. The duo tackles current issues and often employs strategies of loss of control that challenge existing structures and mechanisms.

This prize is awarded on the basis of research carried out entirely independently, without a competition, by the members of the jury, convened this year by Felicity Lunn and composed of: Ines Goldbach (director, Kunsthaus Baselland), Valerie Knoll (director, Kunsthalle Bern), Boris Magrini (art historian and independent curator; independent expert for Pro Helvetia, visual arts section), Laurent Schmid (artist and professor at HEAD, Geneva; head of the Master in Visual Arts – Work.Master) and Séverine Fromaigeat (art historian and critic; member of the Exhibitions Committee of the Classe des Beaux-Arts of the Société des Arts, Geneva).

48 Hours MIND LESS

STWST48x3
48 Hours MIND LESS
8–10 September 2017

Under the motto MIND LESS, STWST48x3, the third edition of STWST48, offers a 48-hour showcase art extravaganza of the expanding kind. Meaning-free information, open states of mind, an infolab after new media, quasi-coordinates of expanded contexts, Funky Fungis, Digital Physics and Meltdown Totale: STWST48x3 MIND LESS brings new art contexts that have been developed over recent years in and around the Stadtwerkstatt in Linz. Watch out: in 2017 the MIND LESS Stadtwerkstatt once again operates under the directive of New Art Contexts and autonomous structures.

Start: Friday, 8 September, 2 pm
End: Sunday, 10 September, 2 pm

REVIEW – ALL VIDEO LINKS TO THE PROJECTS

MAZI: CAPS community workshop in VOLOS

The CAPS Community workshop is taking place on 12 July.

We’re still working on the agenda. Below you’ll find a first overview of the activities of each day:

10/7/2017

MAZI Workshop (all day). Hands-on tutorial: learn how to use and set up the MAZI toolkit:

09:30 – 09:45
Welcome and introduction to MAZI

By Thanasis Korakis

09:45 – 10:30
Keynote talk: Digital Commons, Urban Struggles and the Right to the City?

Andreas Unteidig and Elizabeth Calderon Luning

10:30 – 10:45    Coffee break

10:45 – 11:30
MAZI stories
  • 10h45-11h00 Creeknet ‘Bridging the DIY networks of Deptford Creek’ (Mark Gaved and James Stevens)
  • 11h00-11h15 Living together: realistic utopias in Zurich (Ileana Apostol and Philipp Klaus)
  • 11h15-11h30 Unmonastery: a 200 year plan (Michael Smyth and Katalin Hausel)
11:30 – 13:00
The MAZI toolkit and its applications

Harris Niavis and Panayotis Antoniadis

13:00 – 14:00    Lunch break

14:00-17:00
Hands-on experience with the MAZI toolkit and participatory design

The audience will be split into four (or more) groups. Each group will have a MAZI leader, one of the project partners, to guide the whole process of MAZI toolkit deployment. MAZI leaders will describe to each group the context in which they will configure their MAZI Zone. Some possible scenarios/contexts in the area around the event will be defined, where groups could deploy MAZI Zones that also support the CAPS event for the whole week.
*Please bring your laptop or any other equipment (Raspberry Pi 3, microSD cards, etc.) so you can actively participate in the workshop.

17:00 – 18:00
Wrap-up of the workshop

11/7/2017

  • MAZI Review (closed meeting – all day) Download here the agenda (PDF)
  • HACKAIR – Project Review Meeting (closed meeting – all day)
  • Greek CAPS & H2020 cluster workshop  (15:00 – 18:00)

CHAIN REACT Workshop – Hands on experiences

12/7/2017

2nd CAPS Community workshop

13/7/2017

EMPAVILLE Role Play (run by EMPATIA Project)

11:00 – 12:30

Empaville is a role-playing game that simulates a gamified Participatory Budgeting process in the imaginary city of Empaville, integrating in-person deliberation with digital voting. For more details visit EMPAVILLE ROLE PLAY (https://empaville.org)

PROFIT Workshop (Open meeting – half day)

13:00 – 17:00 Download here the agenda (PDF)

  • Project introduction (M.Konecny – EEA)
  • Financial literacy and economic behaviour for financial stability and open democracy (G.Panos – UoGlasgow)
  • Promoting financial awareness and stability (Artem Revenko – Semantic web company) Presentation available here (PDF)
  • Textual analysis in economics and finance (I.Pragidis – DUTH) Presentation available here (PDF)
  • What’s ethical finance (Febea) Presentation available here (PDF)
  • Walkthrough of the PROFIT platform
    • Discussion in small groups focused on different aspects of the project & platform
  • Conclusions and wrap-up

Note: As this is an interactive event please bring a laptop so you can contribute to the research effort.

14/7/2017

CROWD4ROAD hands on experiences (open meeting – all day)

09:00 – 09:30
Crowd4roads: crowdsensing and trip sharing for road sustainability

Presentation by the Crowd4roads consortium

9:30 – 10:00
Collaborative monitoring of road surface quality

Presentation by University of Urbino

10:00 – 10:30
Car pooling and trip sharing

Presentation by Coventry University

10:30 – 11:00    Coffee break

11:00 – 11:30
Hands on the first release of the Crowd4roads app
11:30 – 12:15
Hands on Crowd4roads open data
12:15 – 13:00
Gamification strategies for engagement

Presentation by Coventry University

  • Closing Plenary – Wrap-up and Greek cocktail (5-7 pm)

The Next Generation Internet (NGI) initiative, launched by the European Commission in autumn 2016, aims to shape the future internet as an interoperable platform ecosystem that embodies the values Europe holds dear: openness, inclusivity, transparency, privacy, cooperation, and protection of data. The NGI should ensure that increased connectivity and the progressive adoption of advanced concepts and methodologies (spanning several domains such as artificial intelligence, the Internet of Things, interactive technologies, etc.) drive this technology revolution, while contributing to making the future internet more human-centric.

This ambitious vision requires the involvement of the best Internet researchers and innovators to address technological opportunities arising from cross-links and advances in various research fields ranging from network infrastructures to platforms, from application domains to social innovation.

Live: Algorave @ Archspace in London

Our friend Mathr performed at Archspace in London. Jack Chutter wrote the following review in ATTN:Magazine:

When I initially heard about live-coding, I was quick to presume that it was beyond my technical grasp. After all, surely this music was the reserve of those who have spent their lives immersed in programming, hidden behind a wall of education and natural computer aptitude, forbidden to the layman – me, for example – who should probably stick to more tangible forms of instrumental causality (hitting a drum, pressing a key). Yet coupled with my recent interview with Belisha Beacon (who went from code novice to Algorave performer within a matter of months), my experience tonight has convinced me otherwise. That’s not to say that Algorave doesn’t regularly slip beyond my technical comprehension, folding code over itself to produce spasmodic, biomechanical bursts of light and sound, ruptured by compounded multiplications and tangled up in polymetric criss-cross. Yet with the code projected upon a large screen in front of me, I see that these transparent mechanics are often painfully easy to understand. A line of code is activated: a sample starts. An empty space is deleted: a rhythmic pattern shifts one step. I witness the preparation and execution of “sudden” bass drops; I am exposed to the application of effects and shifts in pitch. At some points at least, I totally get this.

Each set is accompanied by digital projections, most of which are live-coded by the evening’s two visual artists (Hellocatfood and Rumblesan). As I walk into Archspace, the screen is brimming with these vibrant, hyperventilating spheres, all spinning at incredible speed, expanding and contracting as though set to burst. Meanwhile, the electronic pulses of Mathr feel way overcharged, bloating beyond their own mathematical confines, bleating like the alarms that herald the opening and closing of space shuttle doors or slurred bursts of laser beam. A rhythm is present but it never walks in a straight line. It’s constantly correcting itself incorrectly, sliding and slanting between shapes that never properly fit, modulating between various flavours of imbalance.

The sounds remain very much askew for Calum Gunn, although now it’s like someone playing a breakbeat remix of a Slayer track on a scratched disc, choking on the same split-second of powerchord as the beat rolls and glitches beneath, quickly abandoning all sense of rightful rhythmic orientation. Later it’s all synth hand-claps and digital squelches, exploded into tiny fragments that whoosh dangerously close to my ears. On screen, a square cascades like a spread deck of cards, fanning outward across future and past iterations, losing outline and angle in the overlap of shifted self. Together, sound and image shed all time-space integrity, knotted and crushed by the layers of multiplication and if-then function, complicating their own evolution until they can’t possibly find its way back again.

It’s almost as though Martin Klang has witnessed this chaos and taken heed. His music is carefully and precariously built, stacked in a brittle tower of drones and ticks and pops. Hi-hats spill out like coins on a kitchen floor (whoops – not quite careful enough), as the beat tip-toes between them in a nervous waltz, accompanied by what sounds like the glugging, croaking proclamations of an emptying kitchen sink. All adjustments are patiently negotiated. The soundscape switches from a duet between drone sweeps and popping fuses, to an ensemble of water-drenched bouncy balls of various sizes. I feel tense as I watch this music unfold, as though Klang’s synth might explode with just one heavy-handed application of change.

MARTIN KLANG + RUMBLESAN

This threat doesn’t lift as we move into Miri Kat’s thick, disaster prone dub, although her sound is fearless and indifferent to it: rhythms forward-roll into mists of radiation, smacking into hydraulic doors and tangling itself in the zaps of crossed electronic wires. The rhythm comes and goes in huge obelisks of volume and visceral bass frequency, announcing themselves with thundering severity and then dropping out, allowing myth and ambient chimes to pool in the stretches of absence. At its loudest her performance is incredibly visceral, the beats wracked with the noises of ripping open, or tectonic electronic rumbling, or up-ended boxes of micro-sampled ticks and trinkets. Meanwhile, the visuals come in a gush of glitch-ruptured Playstation animation and over-zealous zoom lens, plunging into pixelated colours and flicker and burst into the far corners.

Archspace is packed out by now. Contrary to my silly assumptions that Algoraves would be an elitist, ultimately cerebral affair, a vast majority of the crowd are dancing (in fact, it is me – pinned against the side wall with my head in my notebook – who could be most readily accused of forgoing visceral enjoyment in favour of lofty pontification). This interface between code and human rhythm is further explored by Canute, tasking a live drummer to find a foothold within an ever-modulating algorithmic output, with snare drums snatching at synthesisers splayed in scattershot and krautrock 4/4s ploughing forward through a hail of ping-pong delays, while micro-samples bounce off the windscreen of those thoroughly human hits. The coded output slots into a new pacing and the drummer realigns accordingly, swerving in and around those blocks of binary exactitude, tumbling across those flatulent bursts of morse code and synthetic mandolin.

It’s a dizzy experience, and Heavy Lifting only nurtures the nausea further. Her set is like having a surreal dream while travel sick, head slumped out of the window of a gigantic cruise-liner, with the excitable voices of in-boat entertainment in one ear and the churn of the sea in the other. Spoken samples and revved motors whirl over beats that throb like an insistent headache, as samples eat themselves and fold over one another, blurred by the waves of sickness or sliced up into phonetic digital chirps. The beat throbs at ever-louder volumes. My heartbeat lodges itself in my head. Somewhere in the mixture of slur and abrasion – the precise combination of ambient wave and brash attack – Heavy Lifting strikes upon a strange form of ecstasy, as a dense rhythm rises from beneath and pushes the quease and wooze aside. It’s wonderful.

Due to a route closure and lengthy diversion affecting my journey home to Cheltenham, my own Algorave concludes tonight with a set from Belisha Beacon (my apologies to tonight’s final act, Luuma, who I had to miss as a result). Her improvisation builds itself, deconstructs itself, reshapes itself. There are no foundations, there is no final form. Raw samples (digital woodwind, dry synthetic percussion) enter one at a time and slide into place, methodically adding new angles and asymmetries to the overall shape, compounding individual decisions into a network of intersecting pulses and chimes. The 4/4 clicks into place as the rhythm shunts into a continuous stomp, finding momentary alignment before Belisha Beacon starts to pick apart the shape all over again. This methodical transparency is what sold me on the accessibility and open possibilities of live-coding. While some of tonight’s performances explore the potential of enacting numerous ideas simultaneously, splaying the code like projected firework embers, others explode the software mechanics into a series of singular steps. It’s like witnessing a film and its “making of” simultaneously, with my enjoyment of each beat enriched by the ability to share in the very spell that brought it to be.
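The review doesn’t say which live-coding systems were on stage (TidalCycles, Gibber and similar environments are all common at Algoraves), but the cause-and-effect it describes – activate a line and a sample starts, delete a character and the rhythm shifts – can be sketched in a few lines of TypeScript using the Web Audio API. Everything here (the pattern notation, the blip() helper) is an illustrative assumption, not any performer’s actual setup.

```typescript
// A toy live-coding loop: the scheduler re-reads `pattern` at the start of every
// cycle, so editing the string while it plays immediately changes the rhythm.
// "x" = play a short blip, "." = rest, spaces are ignored.

const ctx = new AudioContext();   // browsers may require a user gesture before audio starts

let pattern = "x.x. x.xx";        // edit this live; the next cycle picks it up
const stepDur = 0.125;            // seconds per step

function blip(time: number): void {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = 220;
  gain.gain.setValueAtTime(0.5, time);
  gain.gain.exponentialRampToValueAtTime(0.001, time + 0.1);
  osc.connect(gain).connect(ctx.destination);
  osc.start(time);
  osc.stop(time + 0.1);
}

function scheduleCycle(start: number): void {
  const steps = pattern.replace(/\s/g, "").split("");
  steps.forEach((step, i) => {
    if (step === "x") blip(start + i * stepDur);
  });
  const cycleEnd = start + steps.length * stepDur;
  // Queue the next cycle just before this one ends, re-reading `pattern` then.
  setTimeout(() => scheduleCycle(cycleEnd), (cycleEnd - ctx.currentTime - 0.05) * 1000);
}

scheduleCycle(ctx.currentTime + 0.1);
```

Deleting one “x” from the pattern drops a hit from the next bar, while adding a character stretches the cycle – the same transparent mechanics Chutter describes watching on the projected code.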

Creeknet XF Symposium

It’s been a very hectic few weeks at SPC as we bring focus onto the DIY networks of Deptford Creek at the first Creeknet Symposium on 20th and 21st June.

The poster here, for you to print and put up in your window, outlines the event details, which can be found in full on the SPC event listings and at http://deptfordcreek.net.

The Creeknet friends have been meeting regularly at venues up and down the creek. We have been exploring the fast-changing environment, revisiting access points onto the river, crossing bridges and improving our understanding of local concerns and ambitions. The last of these meetings before summer takes hold is on Monday 12th June at noon, in the Undercurrents gallery inside the Birdsnest pub on Deptford Church Street. We will be collecting images and stories to publish at the local network Anchorholds, a trail of information points along the creek, so please do come along to contribute your experiences!

Rapid progress was made by the very energetic Hoy Steps clear-up group on Monday 5th June. The huge overgrowth of Buddleia clogging the views was cut down and disposed of in a flurry of action and enthusiasm. The vigorous roots of this plant have got deep into the sea wall and damaged it, and it will continue to regrow unless more drastic measures to remove the remnants are adopted soon; even then, it is likely to return!

Wooden pallets stored at street level have been sorted and stacked ready for re-use or removal, and the rubbish (sheet materials, plastic wrappers and polystyrene) has been bagged ready for disposal. We return early on Tuesday 13th to complete the clean-up in preparation for a public viewing during the Creeknet Symposium the following week.

Friends of Deptford Creek is a community group set up to support, represent and protect the human, natural and built environment of Deptford Creek, London. How do these two different groups work together? How does the changing landscape affect them? What technologies can help?

Find out by joining us over an exciting two days of public meet-ups and workshops to exchange ideas and explore the DIY networks of Deptford Creek (http://deptfordcreek.net/).

Meet MAZI (http://mazizone.eu/) partners from around Europe, chat to local community groups, play with our technology that supports local networks, and discuss what’s next for Deptford. You can attend all or part of these events over the two days by registering on Eventbrite:

Tuesday 20th June 2017
Wednesday 21st June 2017

For further information, please visit http://deptfordcreek.net

This week, starting Monday 12th June, we have a busy schedule to install equipment, complete work and do last-minute promotion (really!). Today we are meeting at the Undercurrents Gallery in the Birdsnest pub to update the MAZI zone prototype there and to meet local mariners and artists to discuss their network systems. On Tuesday it’s an early low tide and a 10am return to the Hoy Steps to complete the clear-up work and prepare for a visit the following week; refreshments provided. After lunch we will be installing Bluetooth beacons along the creek to mark out the Anchorhold locations.

Wireless Wednesday at http://bit.spc.org this week is dedicated to preparing print materials for distribution during the Creeknet Symposium, so please come along and help out, but please hold off on the broken PCs for a couple of weeks! On Thursday and Friday we will be testing the Creeknet Anchorholds app, a guide to the DIY networks of Deptford Creek. If you would like to help out, please call for more details, as we will be working along the length of the tidal creek from Brookmill Park to the swing bridge.

http://friends.deptfordcreek.net

Don’t forget to tell us you are attending the Creeknet Symposium, not least so we can arrange catering! Please register.

Creeknet meet-up @ Hoy

The MAZI Project is working on an alternative technology, Do-It-Yourself networking: a combination of wireless technology, low-cost hardware and free/libre/open source software (FLOSS) applications for building local networks, often known as community wireless networks.
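
To make that a little more concrete, here is a minimal sketch of the kind of thing a single DIY node does: serve local content to anyone who joins its network. This is an illustration only, not the MAZI toolkit itself (whose software is developed by the project at http://mazizone.eu/); it uses only Python’s standard library, and the page text, port and “Anchorhold” framing are placeholders. A real MAZI zone typically wraps this idea together with a Wi-Fi access point, splash page and community applications on low-cost hardware such as a Raspberry Pi.

    # A minimal, self-contained sketch of a local "information point" service.
    # This is NOT the MAZI toolkit, just an illustration of serving content to
    # visitors on a local network using only the Python standard library.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"""<!doctype html>
    <html>
      <head><meta charset="utf-8"><title>Anchorhold (demo)</title></head>
      <body>
        <h1>Deptford Creek Anchorhold (demo)</h1>
        <p>Images and stories collected along the creek would be listed here.</p>
      </body>
    </html>"""

    class AnchorholdHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Serve the same page for every path; a real node would route to
            # uploaded images, stories and local applications instead.
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(PAGE)))
            self.end_headers()
            self.wfile.write(PAGE)

    if __name__ == "__main__":
        # Bind to all interfaces so anyone on the same local (e.g. Wi-Fi)
        # network can reach the page at http://<device-ip>:8080/
        HTTPServer(("0.0.0.0", 8080), AnchorholdHandler).serve_forever()

On an actual node this would usually sit behind hostapd/dnsmasq-style configuration so that devices joining the Wi-Fi are pointed at it automatically, but that wiring is beyond a quick sketch.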

Surrender to Surveillance

Our friend Heath Bunting gives a talk at @virtualfutures:

Virtual Futures presents a discussion on technologies of surveillance, the infringements on privacy by the state, restrictions of individual freedom and the mutation of identity.

Underlying the platforms that make digital communication possible are massive stealth efforts in social profiling. This reality has become passively accepted by a user base who are sold the promise of personalisation and customisation in exchange for allowing increased data extraction and analysis.

Corporations target social life itself, aiming to monitor their users ever more effectively and regulate certain types of action. As identity, social interaction and profit overlap it threatens core human values such as freedom and privacy, as well as posing new ontological questions concerning what constitutes identity.

Join an artist who has been described as ‘a disciplined advocate of a transgressive social and political anarchy,’ a professor who is exploring the impact of digital media on society and politics, a journalist who specialises in privacy, and an expert on corporate data monopolies to discover how we might develop a toolset for escaping the ever-intensifying surveillance and monitoring of our society.

Fireside Chat

Heath Bunting, Artist

Heath Bunting is known as an early practitioner of the net.art movement. As his online biography reports, “He is banned for life from entering the USA for his anti-genetic and border crossing work. He has had multiple works of art censored and permanently deleted (including all copies and backups) by the UK security services.

“He has had an artwork exploded by the SAS and is prevented from talking about this in public. He has been detained, arrested multiple times and classified as a terrorist by UK security services for his art projects. He is subject to constant global state and corporate hostile interventions. He is denied full access to the internet and is almost constantly unemployed as a result of being politically blacklisted. In an environment where the UK Ministry of Defence can publicly state that their primary global adversary is the non-state individual artist, he now produces his art projects securely and in secret.

“He has been approached by both state and corporate security organisations on several occasions, but has mostly declined these offers of work, especially when they involved the assassination of social justice activists. His main work, The Status Project, involves using artificial intelligence to search for artificial life in societal systems. Aside from this, he is currently training artists in security and survival techniques so they can out-live organised crime networks in the forest during the final crisis.”

Panelists

Prof. David Berry, Co-Director of the Sussex Humanities Lab and the Research Centre for Digital Materiality, University of Sussex (@BerryDM)

Prof. Berry researches the theoretical and medium-specific challenges of understanding digital and computational media, particularly algorithms, software and code. His work draws on critical theory, political economy, medium theory, software studies, and the philosophy of technology.

Heath Bunting, Artist

Early practitioner of the net.art movement.

Wendy M. Grossman, Technology Journalist (@wendyg)

Freelance technology writer specializing in computers, freedom, and privacy. She has written for the Guardian, the Daily Telegraph, Scientific American, New Scientist, Infosecurity Magazine, and Wired, and was the 2013 winner of the Enigma award for lifetime achievements.

Roger Taylor, Chair, Open Public Services Network (@RTaylorOpenData)

Roger Taylor is an entrepreneur, regulator and writer. He is chair of Ofqual, the qualifications regulator, and works with The Careers & Enterprise Company on the use of technology and data in career decisions. He has written two books: God Bless the NHS (Faber & Faber 2014); and Transparency and the Open Society (Policy Press 2016) which outlines the dangers of government and corporate data monopolies. He founded and chairs the Open Public Services Network at the Royal Society of Arts; he is a trustee of SafeLives, the domestic abuse charity; and he sits on the advisory board to HM Inspectorate of Probation. He has worked with governments, NGOs and leading media organisations globally on the use of open data and public reporting. Roger began his career as a journalist working as a correspondent for the Financial Times in the UK and the US and, before that, as a researcher for the Consumers’ Association.

Discussion moderated by:

Luke Robert Mason, Director of Virtual Futures (Moderator) (@LukeRobertMason)

Schedule

06:30pm – 07:00pm: Registration & Drinks

07:00pm – 07:30pm: Fireside chat with artist Heath Bunting on ‘The Status Project.’

07:30pm – 08:30pm: Panel Discussion on ‘Surrender to Surveillance’ with Prof. David Berry, Heath Bunting, Wendy M. Grossman & Others (TBC)

08:30pm – Late: Audience Q&A, Drinks & Networking

Creakynet

We visited the Hoy Steps again today to view the condition of the street-level area inside the gates and assess the clean-up task. After 20 years of restricted access, an accumulation of old wheels, wooden pallets and a tangle of Buddleia blocks the steps. There is also a large amount of scaffolding framing the space, which can be used again in any reconstruction plan. High tide at midday prevented us from seeing more than half a dozen of the twenty steps that lead down to the muddy shoreline at low tide. A ferry once transported people across the creek to Greenwich at this point. The ‘hoy!’ call to summon a boat was first heard here hundreds of years ago.

Our previous Creeknet meet-up started out at the Laban Dance café, from where we walked down Creekside and over the Ha’penny Hatch footbridge to the fork in the path on the Norman Road side. Thames Tideway are rapidly completing this controversial diversion, whilst setting out their groundwork for an 18m-diameter by 60m-deep excavation down to a 12m-diameter, 30km sewage overflow tunnel running from Ealing en route to Beckton.

Waste removal from the Greenwich Pumping Station site will add 100 trucks a day to roads already overloaded with heavy construction traffic. Earlier suggestions to use huge river barges have been kicked into the long grass in favour of the pre-approved and cheaper option!

Continuing on our way up to Creek Bridge, we stopped off at the far end of Hiltons Wharf to step out onto the tiny mooring point in time to catch the departure of the Prior’s aggregate ship to Gravesend for a refill. Opposite, the race to construct two huge towers crashes on, sucking up the entire concrete capacity of Euromix!

After a well-deserved coffee at Hoy Kitchen and a visit to the steps, we were picked up by Camden for a fantastic river trip aboard a motorised lifeboat, which first took us out onto the Thames before returning us to the nest of houseboats at 4 Creekside.

Please take that trip sometime; till then, check this video! More photos here.

At the furthest reaches of the tidal creek, the Friends of Brookmill Park held their quarterly meeting to map out activities for the rest of spring and early summer. Their replanting programme in the formal garden adjacent to the Stephen Lawrence Centre is proceeding well, with fresh lavender beds and new roses. Mariner and beekeeper Julian Kingston will talk about local shipbuilding at a fundraiser event in the Brookmill pub on June 7th; space is limited to 30 seats, so get your ticket soon!

That’s just a few weeks before the Creeknet Symposium on 20th and 21st June, when the DIY networks of Deptford Creek will host partners from the MAZI project in Germany, Switzerland and Greece for this ‘cross-pollination event’; all are welcome.

The first day begins at Hoy Kitchen on Creek Road with a rolling breakfast welcome and an exhibition of local images and stories, followed by a mass low-tide walk at the Creekside Discovery Centre at 3pm. The second day starts with project presentations and lunch at the Stephen Lawrence Centre, followed by a visit to Redstart Arts and a picnic in Brookmill Park. Finally, take a walkshop to the Creek Mouth, crossing bridges and exploring the Creeknet trail.