Weblog on the Internet and public policy, journalism, virtual community, and more from David Brake, a Canadian academic, consultant and journalist

Archive for the 'journalism' Category

23 February 2017

As a media scholar and journalist with an interest in the digital divide, I have long believed that one of the things media outlets could do much better is use their higher profile to give a voice to ‘ordinary people’ who have something to say. I also believe that one of the things Facebook, like other news intermediaries, should be trying to do is increase ideological diversity in their feeds. Lastly, I am aware that it is not fair to dismiss what people say online just because it is technically poorly written. And then this happens.

This is my top Facebook-recommended story on Facebook’s top trending issue. The story’s summary suggests it is from the Huffington Post, which has some journalistic credibility, but if you visit the story itself and look carefully you will see that it is an unedited, uncurated, self-published blog post.

It is also badly written, bigoted, and dismayingly lacking in concrete, citable facts.

There was a riot of violence and destructions by immigrants in the capitol of Sweden, Stockholm. The police was forced to shoot with ammunition to put and end to it. In Malmö, another city south in Sweden they have struggle with gang violence and lawlessness for years. So when Trump talk about that Sweden have an immigration problem he [Trump] is actually spot on.
It’s well known for Scandinavians and other Europeans that liberal immigration comes with drugs, rapes, gang wars, robbery and violence

(For a more nuanced view of recent events in Sweden, read this.)

In the end, I think that what this underlines is another of my personal tech policy prescriptions – we need to ensure that when technology companies are doing socially important work like influencing what news we see, they do not offload this role onto an unaccountable algorithm. Instead, they should ensure that the algorithms are overseen adequately by humans and that those humans are in turn accountable to the public for their actions.

22 June 2015
Filed under: journalism at 10:46 am

I found myself watching Teacher’s Pet, a 1958 romantic comedy about a hard-bitten hack going to journalism school, and was struck that even then journalists were arguing over hard news versus analysis. This clip could be worth slipping into your teaching if you are a journalism prof:

“This isn’t a rehash – I’m talking about The Big Why behind the story. This is the function of a newspaper in today’s world. Why, TV and radio announce spot news minutes after it happens. Newspapers can’t compete in reporting what happened any more. But they can and should tell the public why it happened!”


26 March 2015

Two recent pieces of news made me think about the issue of timing of news consumed online. Most obviously, online publication pushes journalists to publish ever-faster, but the ability to archive everything means there is also a place for “evergreen” features and explainers. Once done, as long as they are revisited from time to time to ensure they are still relevant, they can continue to draw people to your writing via search, and as a journalism educator I have long encouraged my students to produce and value such pieces.

Shirley Li points out in the Atlantic that even quite old pieces of ‘news’ can end up being recirculated as if they were new. Her concern is that people sometimes don’t realize that news recirculated this way is outdated (timestamps on stories can often be hard to find), but it also suggests, once again, that older news stories and features can have continued value.

Alarmingly, however, it seems that online advertisers (at least in one case) place very little value on readers’ attention if it was drawn by old material. According to Jim Romenesko, journalists for Forbes magazine (who are paid per click) will be paid only a quarter as much as before for visits to pages that are more than 90 days old. According to a memo passed to Romenesko, “advertisers are increasingly buying premium ads for new content, not old”.

It is unclear why advertisers would necessarily prefer a view of a new story to a view of a similarly interesting and accurate but older story. However, if this were part of a larger trend, what would be the implications? Would it encourage editors to superficially refresh even “evergreen” stories to make them “new” for advertisers? (Keeping a closer editorial eye on older stories might be no bad thing.) Might it mean that rather than updating old stories, editors delete or unlink them and write new stories based on the old ones (which, among other things, would complicate site archives and contribute to the problem of “link rot”, where links to old journalism vanish)?

13 May 2014


I saw this and was momentarily intrigued. Then I clicked on the pic to see it full size. It didn’t get any bigger and was therefore still unreadable. So I ended up having to visit the original story at Journalism.co.uk – now the individual text was readable, but you couldn’t get a sense of the meaning of the whole without going full-screen to this from Mattermap. And then? All it turned out to be was a grouped collection of tweets, which were all available, and more easily readable, in the text of the website below anyway. I got there in the end, but only after three clicks, some head-scratching and a scroll. Sometimes good old-fashioned text is all you need!

12 March 2014
Filed under: Current Affairs (World), journalism at 3:54 pm

I’d read and heard about the horrific tsunami that hit Japan three years ago, but none of it moved me in the way this simple podcast eyewitness testimony did. Audio is the most intimate of media, and it lets the mind fill in its own pictures of the events described, which I think are more vivid than any video could be. And unlike many conventional documentaries and news programmes, this 15-minute first-person format lets the witness’s testimony speak for itself.

Carl Pillitteri, a Fukushima nuclear engineer, told this story at a Moth event (The Moth is a non-profit organization which runs events where people talk about their lives, live and without notes).

7 March 2014


If you are using images online as a journalist you need to ensure that you have the rights to put them on your site legally.  Doing a Google image search, clicking on “search tools” and selecting “usage rights” is one way to make sure you can use what you find, but image libraries like Getty Images also contain a lot of very high-quality images (more than 35 million at last count), including pictures relating to the latest news. This is why they can charge for them and put watermarks over the images you can see for free, so you don’t pirate them. Now, however, tired of trying to fight the many online pirates of its content, Getty seems to have decided to make it easy for people to use its images online for free, in controlled ways and with attribution.

It is defining “non-commercial” (and therefore permissible) uses of its images quite broadly, so as long as you use its image-embedding tool you should be able to use its many pictures legitimately in most journalistic projects online (for print use you would still need to purchase them).  There is already speculation that the other major picture agencies may do likewise. Here’s how to take advantage of Getty Images’ new embed feature (and its limitations).

Getty’s “front page” for searching embeddable images is here.

13 February 2014

I love hearing about the latest digital tools that help one operate as a journalist/researcher, whether those are Twitter search and monitoring tools, bookmark management tools, people search tools, etc. “Search: Theory and Practice in Journalism Online” by Dick is particularly good for finding and describing this stuff – but I am not aware of any articles that bring the different pieces together to describe all the key online tools a journalist uses and how they fit together into a workflow. I plan to come up with something myself to share with students and, if I do, I will post it here, but I would love to hear what other people are using.

23 October 2013

It has long been understood by scientists (but not by enough parents) that the amount children are talked to has a crucial impact on their later educational development, so I was pleased to see the New York Times pick this story up. However, the piece rather wastes the opportunity because it is so clumsily written – particularly in its handling of statistics.

The first paragraph is confusing and unhelpful: “…by age 3, the children of wealthier professionals have heard words millions more times than those of less educated parents.” Clearly, rich kids don’t hear millions of times more words than poor ones, but that might be what you pick up from a quick scan. Further down the story we read that “because professional parents speak so much more to their children, the children hear 30 million more words by age 3 than children from low-income households” – unfortunately, this is meaningless unless you know how many million words both kinds of children heard overall. The difference is only hinted at near the end of the piece, when you finally find out (through a different study) that “some of the children, who were 19 months at the time, heard as few as 670 “child-directed” words in one day, compared with others in the group who heard as many as 12,000”.
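To see why the overall totals matter so much, here is a rough back-of-envelope calculation – a sketch only, on my own crude assumption that the daily extremes quoted above held steady from birth to age three, which is not how the studies themselves measured exposure:

```python
# Back-of-envelope check of the figures quoted above (my own crude assumptions,
# not the studies' methodology): take the reported daily extremes and pretend
# they held constant every day from birth to age three.
low_per_day = 670        # fewest "child-directed" words heard in one day (quoted)
high_per_day = 12_000    # most words heard in one day (quoted)
days_to_age_three = 3 * 365

low_total = low_per_day * days_to_age_three     # ~0.7 million words
high_total = high_per_day * days_to_age_three   # ~13 million words

print(f"low-end total by age 3:  {low_total:,}")    # 733,650
print(f"high-end total by age 3: {high_total:,}")   # 13,140,000
print(f"gap: {high_total - low_total:,} words")     # ~12.4 million
print(f"ratio: {high_total / low_total:.0f}x")      # ~18x
```

Even on these crude assumptions the gap is measured in millions of words while the ratio is more like eighteen to one – which is exactly why “millions more words” and “millions of times more words” are such different claims, and why the overall totals matter.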

Very annoyingly, despite calling the 20-year-old study mentioned in the first paragraph a “landmark”, the piece provides no link to it on the website, nor any information to help readers find it later. It refers to the new findings being based on a “small sample” but doesn’t say how small.

Crucially, while the piece seems to suggest that pre-kindergarten schooling could make up for this gap, it presents no evidence for this. Intuitively, a big push to get parents to talk to their babies and small children would be a much more effective way to tackle this particular problem, since parents spend far more time with them than any educator could.

Ironically, there was a much better-explained story on the same issue, also from the NYT, back in April – but not, alas, in the print edition.

So Tim, could you take this as a reasonable excuse to bring some important research to the public eye? And Motoko (whose work on the future of reading I have liked a great deal), could you go back to the piece online and tidy it up a bit if you get the chance?

6 March 2013
Filed under: Call for help, journalism, Online media at 11:35 am

Much of the discussion about where the journalism industry is heading suggests that freelancing will increase while staff jobs decline (see for example here and Paulussen 2012), but Felix Salmon at Reuters has just written an interesting piece suggesting that most online content will be written by staff writers, not freelancers, because online journalism is just too fast and frequent to make sense as a freelance business. His piece was inspired by Nate Thayer, who complained recently about being asked to write for a major US magazine for free (for the exposure). The key paragraphs are here:

The exchange has particular added poignancy because it’s not so many years since the Atlantic offered Thayer $125,000 to write six articles a year for the magazine. How can the Atlantic have fallen so far, so fast — to go from offering Thayer $21,000 per article a few years ago, to offering precisely zero now? The simple answer is just the size of the content hole: the Atlantic magazine only comes out ten times per year, which means it publishes roughly as many articles in one year as the Atlantic’s digital operations publish in a week. When the volume of pieces being published goes up by a factor of 50, the amount paid per piece is going to have to go down.
But there’s something bigger going on at the Atlantic, too. Cohn told me the Atlantic now employs some 50 journalists, just on the digital side of things: that’s more than the Atlantic magazine ever employed, and it’s emblematic of a deep difference between print journalism and digital journalism. In print magazines, the process of reporting and editing and drafting and rewriting and art directing and so on takes months: it’s a major operation. The journalist — the person doing most of the writing — often never even sees the magazine’s offices, where a large amount of work goes into putting the actual product together.
The job putting a website together, by contrast, is much faster and more integrated. Distinctions blur: if you work for theatlantic.com, you’re not going to find yourself in a narrow job like photo editor, or assignment editor, or stylist. Everybody does everything — including writing, and once you start working there, you realize pretty quickly that things go much more easily and much more quickly when pieces are entirely produced in-house than when you outsource the writing part to a freelancer. At a high-velocity shop like Atlantic Digital, freelancers just slow things down — as well as producing all manner of back-end headaches surrounding invoicing and the like.
This is an interesting take on the issue, but I am afraid it paints an over-optimistic picture of the future of “digital journalism”. It should be remembered that The Atlantic is one of the most successful and most digitally focused of American publications. Felix suggests that “it’s much, much easier to get a job paying $60,000 a year working for a website than it is to cobble together $60,000 a year working freelance for a variety of different websites.” I am very sceptical that any but a few of those who work full-time at the profusion of new digital content enterprises or offshoots of existing products will be earning anything like that sum – there’s just too much competition. I would expect that many or most “jack of all trades” full-time or near-full-time digital producers will end up on some form of precarious contract, working from home.
Update: Alexis Madrigal, who oversees the Atlantic’s technology channel, has responded to the Thayer affair with a rather gonzo post about their business model and why it leads to ill-paying or unpaid invitations to blog.
I would be most interested in any more solid evidence in this area, whether about the incomes and backgrounds of these new digital journalists or about the casualisation of journalism more generally.

Paulussen, S. (2012). Technology and the Transformation of News Work: Are Labor Conditions in (Online) Journalism Changing? In E. Siapera & A. Veglis (Eds.), The Handbook of Global Online Journalism. Chichester: John Wiley.

20 December 2012

Given the huge amount of data now available online, I am having great difficulty persuading my journalism students of the value of looking elsewhere (for example, a library). One way to do so, I thought, might be to show them how little of what was written in the pre- and early-web era is currently available online. I don’t have a good source of data to hand on this, so I just put together this graph, pulling figures out of my head – can anyone volunteer a better source of data? Someone from Google Books perhaps? [Update: Jerome McDonough came up with a great response, which I have pasted below the graph.]

If the question is restated as what percentage of standard, published books, newspapers and journals are available via open access on the web, the answer is pretty straightforward: an extremely small percentage.  Some points you can provide your students:

* The Google Books Project has digitized about 20 million volumes (as of last March); they estimate the total number of books ever published at about 130 million, so obviously the largest comprehensive scanning operation for print has only handled about 15% of the world’s books by their own admission.

* The large majority of what Google has scanned is still in copyright, since the vast majority of books are still in copyright — the 20th century produced a huge amount of new published material.  An analysis of library holdings in WorldCat in 2008 showed that about 18% of library holdings were pre-1923 (and hence in the public domain).  Assuming similar proportions hold for Google, they can make full view of texts available for only around 3.6 million books.  That’s a healthy number of books, but obviously a small fraction of 130 million, and more importantly, you can’t look at most of the 20th-century material, which is going to be the stuff of greatest interest to journalists.  You might look at the analysis of Google Books as a research collection by Ed Jones (http://www.academia.edu/196028/Google_Books_as_a_General_Research_Collection) for more discussion of this.  There’s also an interesting discussion by John Price Wilkin of rights issues around the HathiTrust collection that you might find useful: http://www.clir.org/pubs/ruminations/01wilkin [I wonder what the situation is like for Amazon’s quite extensive “Look inside the book” programme?]

As for newspapers, I think if you look at the Library of Congress’s information on the National Digital Newspaper Program at http://chroniclingamerica.loc.gov/about/ you’ll see a somewhat different problem. LC is very averse to anything that might smack of copyright violation, so the vast majority of its efforts are focused on digitization of older, out-of-copyright material.  A journalist trying to do an article on newsworthy events of 1905 in the United States is going to find a lot more online than someone trying to find information about 1993.

Now, the above having been said, a lot of material is available *commercially* that you can’t get through Google Books or library digitization programs trying to stay on the right side of fair use law in the U.S.  If you want to pay for access, you can get at more.  But even making that allowance, I suspect there is more that has never been put into digital format than there is available either for free or for pay on the web at this point.  But I have to admit, trying to get solid numbers on that is a pain.
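[Just to make Jerome’s arithmetic concrete, here is a quick back-of-envelope check of the figures he cites – a sketch only, rounding as he does:]

```python
# Quick check of the figures cited above (Jerome's numbers, my arithmetic):
# ~20 million volumes scanned, ~130 million books ever published, and ~18%
# of library holdings pre-1923 (hence public domain) per the 2008 WorldCat analysis.
scanned = 20_000_000
ever_published = 130_000_000
pre_1923_share = 0.18

scanned_share = scanned / ever_published        # ~0.15  -> "about 15%"
full_view = scanned * pre_1923_share            # 3,600,000 full-view volumes
full_view_share = full_view / ever_published    # ~0.028 -> under 3% of all books

print(f"share of all books scanned:        {scanned_share:.0%}")
print(f"full-view (public-domain) volumes: {full_view:,.0f}")
print(f"share of all books in full view:   {full_view_share:.1%}")
```

On these numbers, then, full open-access view covers well under 3% of everything ever published in book form – which is exactly Jerome’s point.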

[Thanks again to Jerome, and thanks to Lois Scheidt for passing my query around among her Library Science friends…]
