MozFest 2016 Poster "One Web"

Friday night is science night – MozFest16


So this weekend I am at MozFest for the third year in a row. I debated whether to come this year – travelling to London isn’t my favourite thing, even though the venue at Ravensbourne is an excellent place to hold an event. However, I’m here, and I managed to get a good early flight down too, so I was in plenty of time for the Friday night Science Fair. As always, I arrived tired and looking forward to checking into my hotel, and I left vibrating with energy and really enthused about the next few days (and feeling like I have no idea how to be in five places at once to cover all the cool sessions I’ve heard about).

Things I learned about this evening, in no particular order:

Google Expeditions. This is a Google Cardboard project designed for educational settings. Someone in charge has a tablet running an app which controls what VR content is displayed. ‘Students’ have an app on their phone which displays the VR content, and use a Google Cardboard headset to view it. The person with the tablet can direct viewers to focus on various parts of the scene they are looking at using some simple annotation tools. Content needs to be a 360° panoramic video and can be uploaded into the tablet app. I can see lots of potential for virtual field-trip-type applications – one to do some more digging into and share with colleagues.

I met a lovely woman from Localisations Lab who explained to me the work her organisation was doing crowdsourcing translations of security and privacy tools into other languages. We often forget how dominant just a few languages on the web are. Privacy and security on the web are for everyone.

I saw a neat project from the Internet Archive people – the Political Ad TV Archive.

The Political TV Ad Archive is a project of the Internet Archive. This site provides a searchable, viewable, and shareable online archive of 2016 political TV ads, married with fact-checking and reporting citizens can trust.

(https://politicaladarchive.org/about/)

This project makes it possible to see which messages are getting picked up and amplified through re-broadcasting of political ads. It lets the most frequently repeated ads float to the top, giving a whole new insight into their impact. Fascinating, and something I should let the team behind our The Making of a U.S. President MOOC know about.

I met some of the Knight-Mozilla fellows who are working on Open News projects. In particular I learned about GuriVR – a tool for building virtual reality experiences without needing to write any code. This was seriously impressive stuff.

For the first time I’m seeing a few practical applications of LiFi, including one person showcasing a setup that could be used to extend high-speed internet across multiple buildings using a single fibre connection and then a series of LiFi transmitters and receivers. That’s a lot like the way satellite community broadband schemes worked a few years ago using WiFi technology. It doesn’t seem that long ago that I watched Prof Harald Haas’ TED talk. Yet again I’m reminded that working at the University of Edinburgh is a privilege. There are smart people doing world-changing stuff there.

I picked up a few pieces of information that are all loosely connected to the premise that the Freedom of Information Act is a way of creating open data and is also a seriously useful journalistic tool. I spotted Alaveteli – an FOI management dashboard that helps you track your requests and also makes public the information gathered through them.

I had a really interesting chat with a chap from the Databox project and if I can get along to one of the hacking events I will.

The Databox envisions an open-source personal networked device, augmented by cloud-hosted services, that collates, curates, and mediates access to an individual’s personal data by verified and audited third party applications and services. (http://www.databoxproject.uk/)

I also paid a visit to the BBC News Labs stand and saw a bit more of the work they’ve been doing on translation of news and the use of synthetic voices. It’s still kind of obvious that it’s not a real person speaking, but it’s not so jarring as to be off-putting, and if it allows news content to be made available in many more languages much more quickly, that seems like a good thing. I was amused to hear BBC folks say that their auto-transcription software was pretty impressive – almost at 80% accuracy, but it still has problems with specialist words, accents, etc. This is a hot topic back at work, with colleagues thinking about their video content and how to easily obtain transcripts. The technology is coming, but it’s not there yet, and even the best broadcasters in the land aren’t doing a lot better than we can offer to colleagues. It’s reassuring to know we’re not lagging behind or missing a trick.


(CC0. This image was taken by me and no rights are reserved.)
