Vintage Infographics From the 1930s
Sep 11th, 2009 by analyticjournalism

Nathan, over at FlowingData, has posted a fine example of infographics. The work of Willard C. Brinton is a nice extension of what was being done by U.S. government agencies. It turns out Brinton's book can be found on used-book sites, and at an affordable price.

Vintage Infographics From the 1930s

Posted by Nathan / Sep 11, 2009 to Infographics


Someone needs to get me a paper copy of Willard Cope Brinton's Graphic Presentation (1939), because it is awesome.

Brinton discusses various forms of graphic presentation in the 524-page book, covering what works and what doesn't. There's also some good stuff in there about how to make your graphs, charts, maps, etc. (by hand).

Have we seen these? [Image: Brinton's cosmograph, a Sankey-style flow diagram]

The most interesting part is that many of the graphics – despite being made without computers in 1939 – look a lot like what we have today. They're a little rougher because they were drawn by hand, but that's just added flavor.

For example, you've got the Sankey diagram above, or a “cosmograph” as Brinton calls it. The instructions read:

One thousand strips of paper are set on edge to represent 100% and are separated into component parts of 100%.

What? You want me to arrange 1,000 strips of paper to make my diagram? Brilliant, I say.
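
The recipe is easy to play with. Here is a minimal sketch in R of the arithmetic behind Brinton's method; the component shares are made-up numbers, not Brinton's, and the code is mine, not from the post:

    # Brinton's cosmograph recipe: 1,000 strips of paper stand for 100%,
    # and each component gets strips in proportion to its share.
    # The shares below are illustrative, not from the book.
    shares <- c(manufacturing = 0.42, services = 0.35, agriculture = 0.23)

    strips <- round(shares * 1000)  # strips of paper per component
    strips                          # manufacturing 420, services 350, agriculture 230
    sum(strips)                     # sanity check: totals 1,000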

Here are your choropleth maps, a network diagram, and of course some of your usual suspects. [Images: choropleth maps, a network diagram, and time-series charts]

The entire book is freely available in PDF format, but it's low resolution and takes forever to browse. Michael Stoll has posted some higher quality shots on Flickr.

I still want more though.

Seriously, does anyone know where I can get a copy?

[via Datavisualization.ch]

Mary Ellen Bates on "Google Squared"
Aug 25th, 2009 by analyticjournalism

Mary Ellen Bates offers up this good tip on “Google Squared” at Bates Information Services, www.BatesInfo.com/tip.html:

August 2009

Google Squared

Google Labs — the public playground where Google lets users try out new products or services that aren't yet ready for prime time — is my secret weapon for learning about cool new stuff. My favorite new discovery in Google Labs is Google Squared. It's a demonstration of a search engine trying to provide answers instead of just sites, and at a higher level than the simple “smart answers” you see when you search for “time in Rome” or “area code 909”. Rather, Google analyzes the retrieved pages, identifies common elements, and creates a table with the information it has compiled.

This is a fascinating tool that helps you compile facts into tables that Google builds on the fly. Hard to describe, easier to show. Go to Google.com/squared and type in a query that will retrieve a number of similar things — organic farms in Colorado, for example, or women CEOs… even superhero powers.

Google Squared generates a table of facts extracted from its index, with the items you are searching for as the left-most column, along with columns for whatever related characteristics are relevant for the topic. For organic farms in Colorado, for example, the table in the search results has columns for the name of the company, an image from the farm's web site, a snippet of description about the farm, and columns for telephone number, location and “season.” Note that some of these columns may have few entries in them, depending on what information Google analyzed. For women CEOs, the table includes the CEO's name, a photo, a snippet that indicates what her position is, her date of birth, and her nationality. For superhero powers, you will find the superhero's name, a photo, a far-too-brief description of said superhero, the hero's first appearance (in print, that is), publisher and even the hero's “abilities”.

Interestingly, you can insert your own items in a Google Squared table, and either let Google populate the rest of the row or type in whatever content you want in that row. I added Catwoman to my superheroes table and Google filled in the new row with her photo and description; I could provide the rest of the info. For some tables, Google even suggests additional columns. For my superheroes table, I could add columns for Aliases, Alter Ego, Profession (the Joker is a lawyer, of course), and so on. You can add your own columns, as well.

You can also delete a row or column that isn't relevant to your search. If you log in to your Google account, you can save your customized tables for later use. And you can export the table into Excel (the images are exported as URLs).
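
An aside of my own, not part of Bates's tip: if you re-save the Excel export as a CSV, a few lines of R will pull it in for further slicing. The file and column names here are hypothetical stand-ins for whatever your export contains:

    # Read a Google Squared table exported to Excel and re-saved as CSV
    # (file and column names are hypothetical).
    sq <- read.csv("women_ceos_square.csv", stringsAsFactors = FALSE)

    head(sq)               # one row per item, one column per attribute
    table(sq$Nationality)  # quick tally of a column Google filled in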

Google Squared is never going to compete with a real human's analysis of a collection of facts, but it can be a great way to start brainstorming, a quick way to organize the results of your search, and a starting point for a nicely presented deliverable for your client.


“May I publish or reproduce this InfoTip?” Be my guest! Just make sure you credit the source, Bates Information Services, and include the URL, www.BatesInfo.com/tip.html

SNA in R Talk, Updated with [Better] Video
Aug 20th, 2009 by analyticjournalism

OK, OK. Using R can be a steep hill to climb for some. But here, thanks to O'Reilly Radar, is a pretty good video of a presentation on using R as a Social Network Analysis tool.

 “Social Network Analysis in R — video and slides for talk on doing social network analysis with R.”

SNA in R Talk, Updated with [Better] Video

Update II: It occurred to me that it would be much better for people to be able to view the entire talk in a single video, rather than having to switch between sections; therefore, I uploaded the whole thing to Vimeo.

On August 6th I gave a talk at the New York City R Meetup on how to perform social network analysis in R using the igraph package. Below are the slides covered during the talk, and all of the code examples from the presentation are available in the ZIA Code Repository in the R folder.
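
As a taste of what the igraph package does, here is a minimal sketch of my own; it is not code from the talk or the ZIA repository, and the node names are invented:

    # Minimal social network analysis with igraph (illustrative only).
    library(igraph)

    # A small, made-up friendship network as an edge list
    el <- matrix(c("Ann", "Bob",
                   "Bob", "Cat",
                   "Cat", "Ann",
                   "Cat", "Dan"),
                 ncol = 2, byrow = TRUE)
    g <- graph_from_edgelist(el, directed = FALSE)  # graph.edgelist() in older igraph

    # Two workhorse SNA measures
    degree(g)       # how many ties each person has
    betweenness(g)  # who sits on the most shortest paths (the brokers)

    plot(g)         # quick first-pass network drawing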

Below is a video of this talk, with a link to the slides I review during the presentation. If you are interested, I suggest downloading the slides and following along while you watch, as much of what is on the screen in the video is hard to read.

Social Network Analysis in R from Drew Conway on Vimeo.

Andrew Little's presentation on econometrics in R using Zelig and MatchIt is also available on YouTube, starting here. I hope you enjoy the presentation, and please let me know if you have any questions or comments.

Can Game Theory Predict When Iran Will Get the Bomb?
Aug 16th, 2009 by analyticjournalism

Good NYTimes profile of NYU/Hoover Institution professor Bruce Bueno de Mesquita, who has spent 40+ years developing predictive models of socio-political activity. (Also a nice bit of promo for “The Predictioneer's Game,” Bueno de Mesquita's book scheduled to come out next month.)

Of course, a somewhat high profile always proves to be an attractor. For example, see “The New Nostradamus”:

“Can a fringe branch of mathematics forecast the future? A special adviser to the CIA, Fortune 500 companies, and the U.S. Department of Defense certainly thinks so.

“If you listen to Bruce Bueno de Mesquita, and a lot of people don’t, he’ll claim that mathematics can tell you the future. In fact, the professor says that a computer model he built and has perfected over the last 25 years can predict the outcome of virtually any international conflict, provided the basic input is accurate. What’s more, his predictions are alarmingly specific. His fans include at least one current presidential hopeful, a gaggle of Fortune 500 companies, the CIA, and the Department of Defense. Naturally, there is also no shortage of people less fond of his work. “Some people think Bruce is the most brilliant foreign policy analyst there is,” says one colleague. “Others think he’s a quack.”

Still, we think the articles and the approach are well worth your reading time.

"Distributed data analysis"? Potentially.
Aug 3rd, 2009 by analyticjournalism

FYI from O'Reilly Radar

And does this suggest the possibility of something like “distributed data analysis,” whereby a number of widely scattered watchdogs could be poking into the same data set? If so, it raises an interesting question for journalism educators: who is developing the tools to manage such investigations?

Enabling Massively Parallel Mathematics Collaboration — Jon Udell writes about Mike Adams whose WordPress plugin to grok LaTeX formatting of math has enabled a new scale of mathematics collaboration.

http://blog.jonudell.net/2009/07/31/polymath-equals-user-innovatio/


In February 2007, Mike Adams, who had recently joined Automattic, the company that makes WordPress, decided on a lark to endow all blogs running on WordPress.com with the ability to use LaTeX, the venerable mathematical typesetting language. So I can write this:

$latex \pi r^2$

And produce this:

πr²
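
The same shortcode handles anything LaTeX can typeset; for instance (my example, not Udell's):

$latex e^{i \pi} + 1 = 0$

which renders Euler's identity.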

When he introduced the feature, Mike wrote:

Odd as it may sound, I miss all the equations from my days in grad school, so I decided that what WordPress.com needed most was a hot, niche feature that maybe 17 people would use regularly.

A whole lot more than 17 people cared. And some of them, it turns out, are Fields medalists. Back in January, one member of that elite group — Tim Gowers — asked: Is massively collaborative mathematics possible? Since then, as reported by observer/participant Michael Nielsen (1, 2), Tim Gowers, Terence Tao, and a bunch of their peers have been pioneering a massively collaborative approach to solving hard mathematical problems.

Reflecting on the outcome of the first polymath experiment, Michael Nielsen wrote:

The scope of participation in the project is remarkable. More than 1000 mathematical comments have been written on Gowers’ blog, and the blog of Terry Tao, another mathematician who has taken a leading role in the project. The Polymath wiki has approximately 59 content pages, with 11 registered contributors, and more anonymous contributors. It’s already a remarkable resource on the density Hales-Jewett theorem and related topics. The project timeline shows notable mathematical contributions being made by 23 contributors to date. This was accomplished in seven weeks.

Just this week, a polymath blog has emerged to serve as an online home for the further evolution of this approach.

"The Devil is in the Digits"? No, I'd say they abound in the comments.
Jun 23rd, 2009 by analyticjournalism

An intriguing op-ed in The Washington Post on Saturday (June 20, 2009) claimed to spot fraud in the Iran elections by applying analytic methods drawn, in essence, from Benford's Law. Yes, read the article, but be sure to drill down into the 140+ comments; most are quite cogent and well argued.

The Devil Is in the Digits

Since the declaration of Mahmoud Ahmadinejad's landslide victory in Iran's presidential election, accusations of fraud have swelled. Against expectations from pollsters and pundits alike, Ahmadinejad did surprisingly well in urban areas, including Tehran — where he is thought to be highly unpopu… — by Bernd Beber and Alexandra Scacco
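
The core of the technique is easy to try yourself. Here is a minimal sketch in R (mine, not the authors' code) of the kind of last-digit test the op-ed applies: under clean counting, the final digits of large vote totals should be roughly uniform on 0-9, and a chi-squared test flags departures. The vote counts below are made up, and a real analysis would use many more precincts.

    # Last-digit test on (made-up) vote counts; not the authors' code.
    votes <- c(132451, 98273, 245118, 77240, 190382, 66035, 154219, 88310)

    last_digits <- votes %% 10
    observed <- table(factor(last_digits, levels = 0:9))

    # Chi-squared goodness-of-fit against a uniform distribution on 0-9.
    # (With only eight counts R will warn about small expected cells;
    # real tests need hundreds of precinct totals.)
    chisq.test(observed, p = rep(1/10, 10))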


Rise of the Data Scientist
Jun 4th, 2009 by analyticjournalism

Nathan, the chap who curates the valuable blog FlowingData, offers up a bit of hope for journalists who are worried about their employment futures and yet have invested in learning methods of data analysis. When thinking about re-inventing ourselves, consider the phrase “data scientist.”

Rise of the Data Scientist

Posted by Nathan / Jun 4, 2009 to Data Design Tips, Statistics




As we've all read by now, Google's chief economist Hal Varian commented in January that the sexy job of the next 10 years would be statistician. Obviously, I whole-heartedly agree. Heck, I'd go a step further and say they're sexy now – mentally and physically.

However, if you went on to read the rest of Varian's interview, you'd know that by statisticians, he actually meant it as a general title for someone who is able to extract information from large datasets and then present something of use to non-data experts.

Sexy Skills of Data Geeks

As a follow-up to Varian's now-popular quote among data fans, Michael Driscoll of Dataspora discusses the three sexy skills of data geeks. I won't rehash the post, but here are the three skills that Michael highlights:

  1. Statistics – traditional analysis you're used to thinking about
  2. Data Munging – parsing, scraping, and formatting data
  3. Visualization – graphs, tools, etc.

Oh, but there's more…

These skills actually fit tightly with Ben Fry's dissertation on Computational Information Design (2004). However, Fry takes it a step further and argues for an entirely new field that combines the skills and talents from often disjoint areas of expertise:

  1. Computer Science – acquire and parse data
  2. Mathematics, Statistics, & Data Mining – filter and mine
  3. Graphic Design – represent and refine
  4. Infovis and Human-Computer Interaction (HCI) – interaction

And after two years of highlighting visualization on FlowingData, it seems collaborations between the fields are growing more common, but more importantly, computational information design edges closer to reality. We're seeing data scientists – people who can do it all – emerge from the rest of the pack.

Advantages of the Data Scientist

Think about all the visualization stuff you've been most impressed with or the groups that always seem to put out the best work. Martin Wattenberg. Stamen Design. Jonathan Harris. Golan Levin. Sep Kamvar. Why is their work always of such high quality? Because they're not just students of computer science, math, statistics, or graphic design.

They have a combination of skills that not only makes independent work easier and quicker; it also makes collaboration more exciting and opens up possibilities in what can be done. Oftentimes, visualization projects are disjoint processes and involve a lot of waiting. Maybe a statistician is waiting for data from a computer scientist; or a graphic designer is waiting for results from an analyst; or an HCI specialist is waiting for layouts from a graphic designer.

Let's say, though, that you have several data scientists working together. There's going to be less waiting, and the communication gaps between the fields tighten.

How often have we seen a visualization tool built on an excellent concept that looked great on paper but lacked the HCI touch, making it so hard to use that no one gave it a chance? How many important (and interesting) analyses have we missed because certain ideas could not be communicated clearly? The data scientist can solve your troubles.

An Application

This need for data scientists is quite evident in business applications, where educated decisions need to be made swiftly. A delayed decision could mean lost opportunity and profit. Terabytes of data are coming in, whether from websites or from sales across the country, but in a field where Excel is the tool of choice (or force), there are limitations, hence all the tools, applications, and consultancies to help out. This of course applies to areas outside of business as well.

Learn and Prosper

Even if you're not into visualization, you're going to need at least a subset of the skills that Fry highlights if you want to seriously mess with data. Statisticians should know APIs, databases, and how to scrape data; designers should learn to do things programmatically; and computer scientists should know how to analyze and find meaning in data.
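
To make the munging point concrete, here is a minimal sketch in R; the URL and column names are hypothetical stand-ins for whatever messy source you are working with:

    # Fetch, clean, summarize: a toy munging pass (hypothetical URL and columns).
    url <- "https://example.com/city_budgets.csv"   # stand-in data source
    raw <- read.csv(url, stringsAsFactors = FALSE)

    # Munging: strip "$" and thousands separators so the column is numeric
    raw$budget <- as.numeric(gsub("[$,]", "", raw$budget))

    # Analysis: total budget by department
    aggregate(budget ~ department, data = raw, FUN = sum)

    # Representation: a quick first-pass chart
    barplot(tapply(raw$budget, raw$department, sum), las = 2)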

Basically, the more you learn, the more you can do, and the more in demand you will be as the amount of data grows and more people want to make use of it.

"Interaction Design Pilot Year Churns Out Great Student Projects"
May 9th, 2009 by analyticjournalism

Another interesting post from FlowingData:

Interaction Design Pilot Year Churns Out Great Student Projects

In a collaborative initiative between Copenhagen Institute of Interaction Design and The Danish Design School, the Interaction Design Pilot Year brings together students and faculty from various disciplines for a unique brand of education.

The Interaction Design Pilot Year is a full-time, intense curriculum that includes a number of skills-based modules (such as video prototyping and computational design), followed by in-depth investigations into graphical/tangible interfaces and service design.

The end result? A lot of great work from some talented and motivated students. There are a number of project modules, but naturally, I'm most drawn to the interactive data visualization projects. Here are a few of the projects.

Find much more in the student gallery.

Craig's List had NOTHING to do with a decline in classified ad revenue.
Nov 14th, 2008 by analyticjournalism

IAJ co-founder Steve Ross has long argued that Craig's List HAS NOT contributed in a major way to the decline of North American newspaper advertising revenues.  Here's his latest analysis:

This isn't rocket science. Craig's List had NOTHING to do with a supposed decline in classified ad revenue.

Here's the raw PRINT classified revenue data, right off the NAA website. (If anyone doesn't use Excel 2007, I can send the data file in another format, but everyone should be able to read the chart as a JPG.)
Click here for bar chart

Note that the big change that pushed classified ad volume up in the 90s was employment advertising. Damn right. The country added 30 million new jobs in that period, and the number of new people entering the workforce declined because births had declined in the mid-1970s. More competition for bodies = more advertising needed.

Knock out the employment data and everything else stayed steady or INCREASED for newspaper classified.

The past 7 years were not as good for employment ads, but still better than in pre-web days.

There was indeed sharp deterioration in 2007 (and of course, 2001), as the economy soured.

There are some missing data (idiots) right around the time the web came in — 1993-4.

But just look at 1994-2006 — the “web years.” Total print classified ad dollar volume was $12.5 billion in 1994, $17 billion in 2006, roughly in line with inflation AT A TIME WHEN CIRCULATION FELL and even newspapers managed to get some new online revenue!!!
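
As a quick sanity check on that inflation claim (my own back-of-the-envelope arithmetic, using approximate BLS CPI-U annual averages rather than figures from Ross's analysis):

    # Does $12.5 billion in 1994 roughly equal $17 billion in 2006 dollars?
    cpi_1994 <- 148.2   # CPI-U annual average, approximate
    cpi_2006 <- 201.6

    classified_1994 <- 12.5                  # print classified revenue, $ billions
    classified_1994 * (cpi_2006 / cpi_1994)  # about 17.0, in line with the $17B figure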

Look, I can do this with ad linage (which didn't rise much at all but stayed ahead of circulation declines), I can compare with display ad figures, I can do Craig's List cities vs. non-Craig cities, I can add back the web revenue because in fact newspapers allocate revenue wrong, to preserve existing ad sales commission schemes, and thus undercount web revenue. I can do ad revenue per subscriber. And on and on.

All those corrections make this look even better for newspapers.

This is SO OBVIOUS that I just do not understand the “Craig's List has killed us” argument or even the “web killed us” argument.

It is (to me, anyway) a transparent lie. Either the newspaper barons are so inanely stupid that they don't understand their own business, or they are incompetent managers, looking for an excuse. Maybe both.

But oddly enough, Craig Newmark believes he did the damage. I've been on several panels with him where he has apologized for killing newspapers.

I might also add that some obviously web-literate societies are seeing a newspaper boom. Germany is an example.

Three Tuesdays workshop on data and the political campaigns at the Santa Fe Complex
Sep 27th, 2008 by Tom Johnson

Handicapping the Horserace

Published by Don Begley at 10:09 pm under Complex News, event

  • September 30, 2008 – 6:30-8 pm
  • October 7, 2008 – 6:30-8 pm
  • October 14, 2008 – 6:30-8 pm

It’s human nature: Elections and disinformation go hand-in-hand. We idealize the competition of ideas and the process of debate while we listen to the whisper campaigns telling us of the skeletons in the other candidate’s closet. Or, we can learn from serious journalism to tap into the growing number of digital tools at hand and see what is really going on in this fall’s campaigns. Join journalist Tom Johnson for a three-part workshop at Santa Fe Complex to learn how you can be your own investigative reporter and get ready for that special Tuesday in November.

Over the course of three Tuesdays, beginning September 30, Johnson will show workshop participants how to do the online research needed to understand what’s happening in the fall political campaign. There will be homework assignments and participants will contribute to the Three Tuesdays wiki so their discoveries will be available to the general public.

Everyone is welcome but space will be limited. A suggested donation of $45 covers all three events or $20 will help produce each session. Click here to sign up.

  • The Daily Tip Sheet (September 30, 6:30 pm)

    Newspapers are a ‘morning line’ tip sheet. There isn’t enough room for what you need to know.

    Newspapers can be a good jumping-off point for political knowledge, but they rarely have enough staff, staff time and space to really drill down into a topic. Ergo, it is increasingly up to citizens to do the research to preserve democracy and help inform voters. Tonight we will be introduced to some of the city, state and national web sites that can help in our reporting, and to a few digital tools for saving and retrieving what we find.
  • Swimming Against the Flow (October 7, 6:30 pm)

    How to track data to their upstream sources.

    A web page and its data are not static events. (Well, usually they are not.) Web pages and digital data all carry “signs” of where they came from, who owns the site(s) and sometimes who links to the sites. We will discuss how investigators can use these attributes to our advantage, and also take a step back to consider the “architecture of sophisticated web searching.”
  • The Payoff (October 14, 6:30 pm)

    Yup, it IS about following the money. But then what?

    Every election season, new web sites come along that make it easier to follow the money — election money. This final workshop looks at some of those sites and focuses on how to get their data into a spreadsheet. Then what? A short intro to slicing-and-dicing the numbers, along the lines of the sketch after this list. (Even if you are a spreadsheet maven, please come and act as a coach.)
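
In the spirit of that final session, here is a minimal sketch of slicing and dicing contribution data in R; the file name and columns are hypothetical stand-ins for whatever the money-tracking sites let you download:

    # Hypothetical campaign-finance slice-and-dice (illustrative only).
    donations <- read.csv("nm_contributions.csv", stringsAsFactors = FALSE)

    # Total raised per candidate, largest first
    totals <- aggregate(amount ~ candidate, data = donations, FUN = sum)
    totals[order(-totals$amount), ]

    # The ten largest single contributions, and who gave them
    head(donations[order(-donations$amount),
                   c("donor", "candidate", "amount")], 10)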

This workshop is NOT a sit-and-take-it-in event. We're looking for folks who want to do some beginning hands-on (“online hands-on,” that is) investigation of New Mexico politics. And that means homework assignments and contributing to our Three Tuesdays wiki. Participants are also encouraged to bring a laptop if they can. Click here to sign up.


Tom Johnson’s 30-year career path in journalism is one that regularly moved from the classroom to the newsroom and back. He worked for TIME magazine in El Salvador in the mid-80s, was the founding editor of MacWEEK, and a deputy editor of the St. Louis Post-Dispatch. His areas of interest are analytic journalism, dynamic simulation models of publishing systems, complexity theory, the application of Geographic Information Systems in journalism and the impact of the digital revolution on journalism and journalism education. He is the founder and co-director of the Institute for Analytic Journalism and a member of the Advisory Board of Santa Fe Complex.


 
