Weblog on the Internet and public policy, journalism, virtual community, and more from David Brake, a Canadian academic, consultant and journalist

Archive for the 'Tech Policy Issues' Category

23 February 2017

As a media scholar and journalist with an interest in the digital divide, I have long believed that one of the things media outlets could do much better is use their higher profile to give a voice to ‘ordinary people’ who have something to say. I also believe that one of the things Facebook, like other news intermediaries, should be trying to do is increase the ideological diversity of its feeds. Lastly, I am aware that it is not fair to dismiss what people say online just because it is technically poorly written. And then this happens.

This is my top Facebook-recommended story on Facebook’s top trending issue. The story’s summary suggests it is from the Huffington Post, which has some journalistic credibility, but if you visit the story itself and look carefully you will see that it is an unedited, uncurated, self-published blog posting.

It is also badly written, bigoted, and dismayingly lacking in concrete, citable facts.

There was a riot of violence and destructions by immigrants in the capitol of Sweden, Stockholm. The police was forced to shoot with ammunition to put and end to it. In Malmö, another city south in Sweden they have struggle with gang violence and lawlessness for years. So when Trump talk about that Sweden have an immigration problem he [Trump] is actually spot on.
It’s well known for Scandinavians and other Europeans that liberal immigration comes with drugs, rapes, gang wars, robbery and violence

(For a more nuanced view of recent events in Sweden, read this.)

In the end, I think that what this underlines is another of my personal tech policy prescriptions – we need to ensure that when technology companies are doing socially important work like influencing what news we see, they do not offload this role onto an unaccountable algorithm. Instead, they should ensure that the algorithms are overseen adequately by humans and that those humans are in turn accountable to the public for their actions.

1 March 2016
Filed under: E-democracy, E-government, Net politics at 9:50 am

McGill student vote mob 2011

Canada is the latest Western country to find that the youth vote, long thought to be in terminal decline, has in fact been rising. In the run-up to its 2015 election there was considerable speculation that Canadian young people had turned out to vote, but proof of the scale of the change has just emerged. Two thirds of 18-24 year olds there voted, compared with 55% in the 2011 election and as low as 35% in the 2004 election (the first for which reliable statistics were gathered). This is still, of course, lower than the 77% of the voting-age population overall who voted (fully 86% of 65-74 year olds voted, for example) but is further evidence that there may at last be a shift away from a worrying trend in many Western countries of youth disengagement from electoral politics. Similar trends towards higher youth engagement were visible in the UK in its 2015 election: nearly six in ten people between 18 and 24 voted then, compared to 52% in 2010 and just 38% in 2005.

The reasons for this apparent shift are not yet clear. Some have suggested that increased use of social media by politicians is helping. Surveys in America after the 2008 election suggested that because young people were more likely to be online, they were also thereby more likely to engage in electorally-relevant online activity.

There has certainly been plenty of social media activity by politicians across the world, though it is hard to pinpoint any social media-led event or talking point that had an impact during the election itself in Canada. Indeed, the most visible impact social media had on the Canadian election is arguably a negative one. Misbehaviour by some candidates – many of them young – on social media led to 12 candidates being withdrawn and at least as many being criticized after the exposure. While it is important that candidates be held accountable for their political views, many of the blunders seem to have been due to jokes in poor taste or intemperate language in their postings rather than deeply held abhorrent beliefs. As I have argued in my book, “Sharing Our Lives Online: Risks and Exposure in Social Media”, there is a real danger that young people will rule themselves out of involvement in electoral politics, fearing the exposure of their online pasts by other politicians or journalists.

In the UK if anything the problem seems to be the opposite – despite hundreds of Facebook and Twitter postings by parties during last year’s election, their impact on young people may have been limited because, as Darren Lilleker remarks, “it remains largely a broadcasting tool… they [the parties] use Facebook and Twitter in similar ways to push out messages rather than communicating with their supporters.” Instead of being too personal, they may be not personal enough to really engage with young people.

A literature review by Samara, a Canadian think-tank, suggests that finding ways for young people to be involved with the parties they identify with, short of full membership, could be one way to increase their engagement with formal politics. Reducing the voting age to 16 has been mooted as a way to get young people thinking politically sooner (though making this possible for UK voting in EU elections, for example, was recently rejected by the House of Lords). Of course there is a simple, “brute force” solution, espoused by Martin Wattenberg in his US study of youth disengagement – make voting mandatory. In Canada, the governing Liberal party has said it will be considering this option in its electoral reform programme.

The real answer may be simpler – as Samara found, young people are active in conventional politics when they are contacted directly by parties and party members. Because historically they vote in lower numbers (and because they are harder to target, being mobile and often lacking landline phones), politicians tend to focus their energies and their policies on older people. Some recent anti-establishment politicians on the left, like Jeremy Corbyn in the UK and Bernie Sanders in the US, have made much of their connections with young people. But it is not clear there are enough young potential voters to enable them to break through and bring the concerns of the young into mainstream politics.

26 March 2015

Two recent pieces of news made me think about the issue of timing of news consumed online. Most obviously, online publication pushes journalists to publish ever-faster, but the ability to archive everything means there is also a place for “evergreen” features and explainers. Once done, as long as they are revisited from time to time to ensure they are still relevant, they can continue to draw people to your writing via search, and as a journalism educator I have long encouraged my students to produce and value such pieces.

Shirley Li points out in the Atlantic that even quite old pieces of ‘news’ can end up being recirculated as if they were new. Her concern is that people sometimes don’t realize that online news recirculated this way is outdated (because timestamps on stories can often be hard to find) but this also suggests once again that older news stories/features can also have continued value.

Alarmingly, however, it seems that online advertisers (at least in one case) place very little value on readers’ attention if it was drawn by old material. According to Jim Romenesko, journalists for Forbes magazine (who are paid per click) will now be paid only a quarter as much as before for visits to pages that are more than 90 days old. According to a memo passed to Romenesko, “advertisers are increasingly buying premium ads for new content, not old”.

It is unclear why advertisers would necessarily prefer a view of a new story to a view of a similarly interesting and accurate but older story. However if this were part of a larger trend, what would be the implications? Will this encourage editors to superficially refresh even “evergreen” stories to make them “new” for advertisers? (Keeping a closer editorial eye on older stories might be no bad thing). Might this mean that rather than updating old stories, they are deleted or unlinked and new stories based on the old ones will be written (which among other things would complicate site archives and contribute greatly to the problem of “link rot” where links to old journalism vanish)?

13 May 2014

I saw this and was momentarily intrigued. Then I clicked on the pic to see it full size. It didn’t get any bigger and was therefore still unreadable. So I ended up having to go visit the original story at Journalism.co.uk – now the individual text was readable but you couldn’t get a sense of the meaning of the whole without going full-screen to this from Mattermap. And then? All it turns out to be is a grouped collection of tweets, which were all available and more easily readable in the text of the website below anyway. I got there in the end but three clicks, some head-scratching and a scroll later. Sometimes good old-fashioned text is all you need!

22 January 2014

I’m as excited as anyone about the potential for organizations and governments to use the ever-increasing amounts of data we’re ‘sharing’ (I prefer the less value-laden ‘giving off’) because of our love of smartphones and the like. So I enjoyed this presentation by Tom Raftery about “mining social media for good”.

(Slideshare ‘deck’ here)

And I am sure his heart is in the right place, but as I read through the transcript of his talk a few of his ‘good’ cases started to seem a little less cheering.

Waze, which was recently bought by Google, is a GPS application, which is great, but it’s a community one as well. So you go in and you join it and you publish where you are, you plot routes.

If there are accidents on route, or if there are police checkpoints on route, or speed cameras, or hazards, you can click to publish those as well.

Hm – avoid accidents and hazards sure – but speed cameras are there for a reason, and I can see why giving everyone forewarning of police checkpoints might not be such a hot idea either.

In law enforcement social media is huge, it’s absolutely huge. A lot of the police forces now are actively mining Facebook and Twitter for different things. Like some of them are doing it for gang structures, using people’s social graph to determine gang structures. They also do it for alibis. All my tweets are geo-stamped, or almost all, I turned it off this morning because I was running out of battery, but almost all my tweets are geo-stamped. So that’s a nice alibi for me if I am not doing anything wrong.

But similarly, it’s a way for authorities to know where you were if there is an issue that you might be involved in, or not.

To be fair Tom does note that this is “more of a dodgy use” than the others. And what about this?

A couple of years ago Nestlé got Greenpeace. They were sourcing palm oil for making their confectionery from unsustainable sources, from — Sinar Mas was the name of the company and they were deforesting Indonesia to make the palm oil.

So Greenpeace put up a very effective viral video campaign to highlight this […] Nestlé put in place a Digital Acceleration Team who monitor very closely now mentions of Nestlé online and as a result of that this year, for the first time ever, Nestlé are in the top ten companies in the world in the Reputation Institute’s Repute Track Metric.

Are we talking about a company actually changing its behaviour here, or one using its financial power to drown out dissent?

You should definitely check out this talk and transcript, and if we’re going to have all this data flowing around about us it does seem sensible to use some of it for good ends – there are certainly many worthy ideas outlined in it. But if even a presentation about the good uses of social media data mining contains material this alarming, maybe we should be asking more loudly whether the potential harms outweigh these admitted goods?

28 March 2013

Like many a tech-savvy parent I am trying to divert my kid’s gaming attention towards Minecraft – and with some success. There’s a ‘legacy’ iBook G4 he can use, but getting the program to run at all was difficult, and now that it is running I have found it unusably slow, even with all the graphical options I could find turned down (and with non-working sound). This to run a game that is deliberately designed to look ‘retro’ and which I imagine could have run on a Mac LC c. 1990 if suitably coded! Since it’s a very popular game with a hyperactive development community, I thought there was bound to be a way to make things work better. Alas, nothing I tried (mainly Magic Launcher launching OptiFine Light) seemed to work, and it took me several hours of forum reading, installation and tweaking to get this far.

It’s not a new observation, but what makes older machines like my nine-year-old iBook obsolete does not actually seem to be the speed or capability of the underlying hardware but the steady ratcheting up of the assumptions that software makes. Somewhere (presumably in Java, which is Minecraft’s ‘environment’) I’m guessing there’s a whole load of unnecessary code added in the last nine years which has dragged what should be a perfectly usable game down to a useless speed.

Just to drag this back to academic relevance for a moment, this is to my mind a good example of how the structure of the computer industry aggravates digital divides by gradually demanding users ‘upgrade’ their software to the point that their machines stop working, well before the end of their ‘natural’ lives.

PS If anyone has managed to get Minecraft working adequately on a Mac of similar vintage please share any tips…

12 September 2012

I’m all in favour of attempts like that of the World Wide Web Foundation to make, in their words, “multi-dimensional measures of the Web’s growth, utility and impact on people and nations”, but to call it the “first” such attempt would seem to overlook the strikingly similar ITU “Measuring the Information Society” programme or the World Economic Forum’s “Networked Readiness Index” (there are probably others too). There’s plenty of room for all, though, and each group of scholars has something to contribute (indeed the Web Index draws on ITU figures, among others). If you are interested in the digital divide, check them all out!

27 August 2012

One of the chapters of my forthcoming book, “Sharing Our Lives Online: Risks and Exposure in Social Media”, is devoted to the question “What is risky and who is at risk?”, and in answering this question the best resource I have consulted by some distance is Livingstone, S., Haddon, L., Görzig, A., & Ólafsson, K. (2011). Risks and safety on the internet: the perspective of European children: full findings. It combines the findings of a survey of 25,142 (!) children aged 9-16 across Europe with a measured, thoughtful review of the research of others. Parents and policy-makers who don’t want or need all 167 pages of evidence should download EU Kids Online: Final Report and pay particular attention to pages 42-46, which debunk the top 10 myths of online safety and set out some clear recommendations. Here are a few things I have noted, based on my interests and approach:

The survey found that 59% of all European children surveyed have social network profiles, including 26% of 9-10 year olds and 49% of 11-12 year olds (though a proportion of these will be on social networks where under-13s are allowed, like Club Penguin). (pp. 36-37)

The survey looked at children’s use of privacy settings but (presumably because of lack of space on the very extensive survey) in a fairly blunt fashion. It asked them whether their profiles were public, “partly private” (visible to friends of friends) or private. How concerned you are about what they reveal may depend on how you perceive “partly private”.

From Risks and Safety on the Internet p. 38

Research published by scholars working with Facebook (Ugander et al., 2011) noted that “partially private” users with the average number of friends (100) would on average have 27,500 friends of friends able to view their profiles.
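That figure may seem surprising: a naive estimate (100 friends, each with 100 friends) would suggest only about 10,000 friends of friends. The gap is largely explained by the “friendship paradox” – because popular people appear on many friend lists, your friends tend, on average, to have more friends than you do. The toy simulation below (my own illustration on a synthetic preferential-attachment network, not the methodology or data of Ugander et al.) shows the effect:

```python
import random
from collections import defaultdict

# Illustrative only: build a preferential-attachment network whose skewed
# degree distribution crudely mimics a real social graph.
random.seed(42)
N, M = 3000, 10            # total nodes; edges added per new node
stubs = list(range(M))     # node ids repeated in proportion to their degree
adj = defaultdict(set)
for v in range(M, N):
    picks = set()
    while len(picks) < M:            # choose M distinct, degree-weighted targets
        picks.add(random.choice(stubs))
    for u in picks:
        adj[u].add(v)
        adj[v].add(u)
        stubs += [u, v]

mean_deg = sum(len(friends) for friends in adj.values()) / N
# Friendship paradox: mean degree of a node's friends, averaged over all nodes.
mean_friend_deg = sum(
    sum(len(adj[u]) for u in friends) / len(friends)
    for friends in adj.values()
) / N

# Distinct friends-of-friends of a typical-degree node, versus the naive
# (mean degree)^2 estimate.
typical = min(adj, key=lambda v: abs(len(adj[v]) - round(mean_deg)))
fof = set().union(*(adj[u] for u in adj[typical])) - adj[typical] - {typical}
print(f"mean degree: {mean_deg:.1f}")
print(f"mean degree of a friend: {mean_friend_deg:.1f}")
print(f"friends: {len(adj[typical])}, distinct friends-of-friends: {len(fof)}")
```

On networks like this, the mean degree of a friend comfortably exceeds the overall mean degree, which is why real friends-of-friends audiences outrun the naive squared estimate so dramatically.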

This research also does not evaluate how accurate the respondents’ assessments really are of how well their profiles are protected. The only study I am aware of that compared what people wanted to share on Facebook with what they were actually sharing (Madejski, 2011) found that no fewer than 93.8% of participants revealed some information that they did not want disclosed. This is consistent with the earlier qualitative findings of Livingstone (2008), who found on interviewing teenagers, “When asked, a fair proportion of those interviewed hesitated to show how to change their privacy settings, often clicking on the wrong options before managing this task, and showing some nervousness about the unintended consequences of changing settings” (p. 406).

On the other hand, the survey does not give much guidance about just how risky letting out public information actually is for young people. The authors say, “Research thus far has proved contradictory about whether SNSs are more or less risky than instant messaging, chat, or other online communication formats, and it is as yet unclear whether risks are ‘migrating’ from older formats to SNSs” (p. 36), but their list of risks is rather vague – ‘flaming’, hacking and harassment – and the only paper they cite about these risks is Ybarra & Mitchell (2008), whose scope covers only harassment and sexual solicitation and which seemed rather more unequivocal than the EU Kids Online report suggests. It concluded that “broad claims of victimization risk, at least defined as unwanted sexual solicitation or harassment, associated with social networking sites do not seem justified” – though the situation may have changed in the six years since the Ybarra & Mitchell survey.

It is perhaps notable that while online bullying was found to be rare – 6% of young people experienced it in the last year (p. 63) – it is also most often encountered on social network sites (half of all bullying encounters).

It’s unfortunate that the focus of the report (on “the internet”) means it doesn’t cover mobile-phone based risks unless they came via the internet (bullying, ‘sexting’ and other problematic behaviour may be digitally circulated on mobiles but not using the internet).

My biggest problem with the report, however (and one of my motivations to do my book) is that the definition of potential risks in the survey is too narrow. In focusing on the obvious short term issues it overlooks some of the longer term risks of internet use including but not limited to:

  1. Employment harm (“why were you drunk all the time at university?”)
  2. Relationship harms (when your grandmother ‘meets’ your girlfriend online)
  3. Harms from an unanticipated future (“I can’t believe you actually boasted about having a petrol-guzzling car back in the 90s”)
  4. Identity theft
  5. Locational crime (you check in at the restaurant, a thief checks out your TV)
  6. The harvesting of personal data for targeted marketing (and possibly ‘redlining’ and exclusion from access to financial products)
  7. Government surveillance using (flawed) risk assessment criteria (one of your 22,000 friends of friends turns out to be a terrorist so you go on a watch list).

I may share more about research I run across that tackles some of these areas in future blog posts. Meanwhile, I would be interested in what you think of this post and (if you’re a researcher) please suggest studies you think do a good job of measuring problems 1-7.

Oh, and perhaps my biggest problem with this report (but one the authors can hardly be blamed for) – in common with most internet risk literature, it studies only children and teenagers. I would like to redress the balance by noting that many of the problems above will be encountered by adults as well. (So studies of these risks that cover older people would be particularly welcome.)

6 August 2012

Evgeny Morozov has recently delivered a scathing (and funny) dissection of a collection of TED ebooks, most prominently one by Parag and Ayesha Khanna. Leaving aside the superficiality of the ideas he mocks (I have not read the works in question), he points out something rather more disturbing in their work – the anti-democratic streak that appears to run through it, e.g.:

We cannot be afraid of technocracy when the alternative is the futile populism of Argentines, Hungarians, and Thais masquerading as democracy. It is precisely these nonfunctional democracies that are prime candidates to be superseded by better-designed technocracies—likely delivering more benefits to their citizens…. To the extent that China provides guidance for governance that Western democracies don’t, it is in having “technocrats with term limits.”

It gets worse, though – after the publication of Morozov’s critique, Vishrut Arya found an interview in which Ayesha reflects on the exciting possibility that augmented reality glasses would enable people who don’t like homeless people to simply delete them from their sight. When I read this I assumed she meant it as some kind of warning, but on listening she follows it with “…so now we have enhanced our basic sense”.

I am not surprised to find TED giving credibility to this kind of pundit – I am, however, disturbed and disappointed to see my alma mater, the LSE, giving her a platform by making her director of its Future Cities Group (while she finishes her PhD there). It seems like another potential Saif Gaddafi embarrassment in the making. Certainly Beatrice and Sidney Webb would be turning in their graves!

8 March 2012
Filed under: Academia, Privacy, social media at 4:38 pm

The excellent folks at the Pew Internet and American Life Project have recently released an update of their 2009 report on reputation management and privacy attitudes among US internet users. The ‘top line summary’ says, “Social network users are becoming more active in pruning and managing their accounts”, but I would be cautious about concluding that from the data. True, 63% of them have deleted people from their “friends” lists (up from 56% in 2009) and 44% have deleted comments made by others on their profile (up from 36% in 2009), but since these are measures of “have ever done”, one would expect the figures to have risen given that more than two years have passed.

It’s also worth noting from the report that (consistent with other research) young and old are equally likely to set their profiles to private.
