Presenting data aesthetically
Jun 12th, 2005 by JTJ

Here's one of those online sites that will keep us browsing for hours.  The
“Information Aesthetics” weblog says it's about “form follows data – towards
creative information visualization.”  Indeed so.  How about links to:




  • Faucet Friend: a slip-on visualization device that dynamically changes color
    according to the temperature of the water exiting the faucet spout. This
    device attempts to prevent burns from scalding water at the kitchen or
    bathroom faucet by giving the user an inexpensive way to ascertain, at
    a glance, key temperature ranges.

  • Infotube: spatialization of information for virtual environments. As a clear
    example of “cyberspace architecture,” the space is built up entirely of
    information itself instead of simulating a real shopping street mapped
    into 3D space. Users can literally browse through the infotubes & be
    aware of shops, products, visitors & popularity (represented by
    orthogonal branches).


  • Google Ridefinder: a “street usage” visualization based on the real-time
    Google Ridefinder maps that display the geographical position of SuperShuttles
    (buses that travel between hotels & airports) in New York. The map
    is generated from data gathered over 5 days, queried every 5 minutes,
    with each red dot representing a single SuperShuttle. One can clearly
    perceive the Manhattan outlines, possible coffee shops in Queens &
    favorite traffic bottlenecks (e.g. bridges & tunnels).

The principles on display here show how creative journalism might deliver pertinent data and information to the public.

Information Aesthetics is updated often.



Still more on digital research tools
Jun 12th, 2005 by JTJ

James Fallows' column in Sunday's NYT discusses some of the frustration with
keyword searching and the El Dorado of having search engines “just answer my
question.”  Fallows points specifically to work on AQUAINT, an approach
initially developed at Stanford University's Knowledge Systems Lab in which
the CIA, NSA and similar federal organizations are apparently quite
interested.  Of deeper interest to serious researchers (or search-tool
forecasters) than Fallows' column might be the lab's research papers.




Online Research Tutorials
Jun 9th, 2005 by JTJ

There were multiple sessions at last week's IRE convention related to online
research methods and tools, reflecting the ever-changing nature of
that activity for journos.

We recently were referred to RDN's “Virtual Training Suite.”   Its mission:
“The Internet is a rich source of information for
students, lecturers and researchers. The RDN Virtual
Training Suite tutorials teach the key information
skills for the Internet environment. Learn how to use
the Internet to help with your coursework, literature
searching, teaching and research.”




The site is unusual in being organized by topic and academic discipline
rather than by search engine.  While there is no category
for journalism, per se, many of the disciplines we rely on are there
and worth a look.  There are some fine tools here for educators,
both in the classroom and the newsroom.





Journalism Students Reluctant to do the Heavy Lifting?
Jun 9th, 2005 by JTJ

Floyd J. McKay, a journalism professor emeritus at Western Washington
University and a regular contributor to the Seattle Times editorial pages,
suggests that today's journalism students lack the right stuff to do
difficult reporting.  In “The hardscrabble roots of investigative
journalism,” he says: “Journalism students, at least in my experience, are
less interested in hard-scrabble reporting and more interested in
supporting roles.”




He also says:

“…The cost of uncovering a big story can be stupendous, often
involving lawyers and computer experts as well as reporters,
photographers and editors.

Most papers would rather spend the money on airplane tickets to
cover their region's NFL or NBA teams, or so entertainment writers can
make pilgrimages to Hollywood. These investments are more likely to
attract readers, which in turn attract advertising dollars. The
intensely bottom-line newspaper chains rarely appear on the honor roll,
but always appear at the top of the profit-margin charts.

More of these investigative awards are won through the use of
computer-assisted reporting, often involving the use of complex
databases. A prize-winning team typically includes at least one
journalist who specializes in this work, and often another who
specializes in displaying the product graphically.”



It's good to see that the AP is starting to figure out the infosphere
Jun 9th, 2005 by JTJ

It's good to see the word “taxonomy” creeping into the newsroom.  And
the AP is looking for someone who can build taxonomies.  Here's the job
posting:




TAXONOMY DEVELOPER
The Associated Press
New York, NY

In
a rapidly evolving technological environment, the Taxonomy Developer
will collaborate with journalists,  technologists, product
specialists and news librarians to coordinate taxonomy creation,
development and maintenance across media types and products, with the
goal of aiding in the efficient retrieval and distribution of
information.

The Taxonomy Developer for the
Associated Press will  develop taxonomies as well as create the taxonomy
management and implementation strategy for AP's content delivery.

Responsibilities
The taxonomy developer will help
define overall AP Taxonomy Integration Strategy for content classification,
delivery and user experience; work with Subject Matter Experts (SMEs) on
editorial, technical and product teams to develop taxonomy implementation,
process and management strategy; and help evaluate and work with appropriate
tools for taxonomy management, data collection/analysis and surfacing of new
terminology.

In addition, duties will include
selection and prioritization of appropriate taxonomy domains. This includes
developing taxonomies for new and existing products; selecting allowed values
lists for proper names, products and companies; creating extensions and
qualifiers to integrate AP's taxonomic scheme to external standards (ISO,
SIC/NAICS, etc.); and working with and extending NewsML, IPTC News Codes and
NITF. This person will work closely with the editorial, technical and product
teams ensuring the taxonomies are usable and will develop and manage automated,
semi-automated and manual processes for gathering taxonomy data, including
adding terms, synonyms, aliases and new relation types as needed.

The Taxonomy Developer will work as
part of a dynamic, multi-disciplinary team that is creating multimedia news and
information products for AP and bringing them to
market.

Qualifications include:
1) familiarity with industry standards groups, such as ISO, SIC/NAICS;
2) understanding structural metadata standards for content classes and
entity extraction;
3) ability to validate usability of taxonomies with internal user groups
(editorial teams) as well as external audiences;
4) expertise with taxonomy management and data collection/analysis;
5) surfacing of new terminology;
6) familiarity with search and auto-classification tools (Autonomy,
http://www.autonomy.com/content/home/, and Teragram, http://www.teragram.com/);
text extraction tools (InXight, http://www.inxight.com/); and
taxonomy/ontology maintenance tools (SchemaLogic,
http://www.schemalogic.com/, and Teragram, http://www.teragram.com/)

MLIS degree or 3 years' experience
preferred.

For consideration, please send
cover letters and resumes to taxonomy@apjobs.org

The Associated Press is an
Affirmative Action/Equal Opportunity Employer.



"How To Conduct a Background Check"
Jun 9th, 2005 by JTJ

Interesting article on The Virtual Chase, a web site dedicated to “teaching legal professionals how to do research.”  For the details, see “How To Conduct a Background Check.”




Why and how we should bother with Analytic Journalism
Jun 8th, 2005 by JTJ

Old friend and old pro Pat Stith has a fine essay on the Poynter Institute site, “A Guide to Computer Assisted Reporting  — Tips and tales of investigative journalism.” 
Read it, and then pass it along to colleagues in your newsroom.



Scrape the site before you go home tonight
Jun 3rd, 2005 by Tom Johnson

Nils Mulvad, one of the early champions of analytic journalism in Europe and
founder of the Danish Institute for Analytic Reporting, demoed
a fast web-scraping tool at the IRE conference this week.  Web-scraping?
It's a way to get just the data you need from a web site that has
a dynamic search engine.  The FECinfo site is an example: the user enters
the search terms and the site's server returns the desired results.

As a one-off, that works fine.  But what if you need all the data on the
server?  Turn to “RoboSuite.”  It's a point-and-click, build-your-own-script
application.  A good Perl coder can do
the same thing, of course, but if you can afford it, RoboSuite is a fast
solution to data harvesting.
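For readers without a RoboSuite budget, the same harvesting idea can be sketched in a few lines of stdlib Python: submit a query to the search form, then pull just the table rows you need out of the returned HTML. The URL and query field names below are hypothetical placeholders, not FECinfo's actual parameters; a real scrape would need the site's own form fields (and its permission).

```python
# Minimal web-scraping sketch: fetch a search-results page, extract the
# table rows. Stdlib only; URL and field names are made up for illustration.
from html.parser import HTMLParser
from urllib.parse import urlencode
from urllib.request import urlopen  # used only by fetch_results()


class RowExtractor(HTMLParser):
    """Collect the text of every <td>, grouped into one list per <tr>."""

    def __init__(self):
        super().__init__()
        self.rows = []
        self._row = None
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = None
        elif tag == "td":
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and self._row is not None:
            self._row.append(data.strip())


def parse_rows(html_text):
    """Return the table rows found in a chunk of HTML."""
    parser = RowExtractor()
    parser.feed(html_text)
    return parser.rows


def fetch_results(base_url, **query):
    """Run one search (network call) and return its table rows."""
    with urlopen(base_url + "?" + urlencode(query)) as response:
        return parse_rows(response.read().decode("utf-8", "replace"))


if __name__ == "__main__":
    # Offline demo on a canned results page:
    sample = "<table><tr><td>Smith, J.</td><td>$500</td></tr></table>"
    print(parse_rows(sample))  # [['Smith, J.', '$500']]
```

To harvest everything on the server rather than one query, you would loop `fetch_results()` over the full range of search terms (every state, every date window, etc.), pausing between requests, which is essentially what a RoboSuite script automates for you.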



Digitized Archives of Small Town Papers Going Online
Jun 3rd, 2005 by Tom Johnson

 

While Google digitizes the masterworks of English literature, a Seattle
company has begun digitizing newspapers from the smallest towns in the
U.S. and offering them on the web. SmallTownPapers
has been digitizing newspaper archives for free and giving them a
presence online, while preserving a rich – and searchable – historical
record. Through the project's website, browsers can see an archived
newspaper as it was printed, search through articles and
advertisements, and look for photos.



Interesting new book on SNA
Jun 3rd, 2005 by Tom Johnson



Robert A. Hanneman and Mark Riddle

Introduction to social network methods

Table of contents


About this book

This on-line textbook introduces many of the basics of formal approaches to the analysis of social networks. 
The text relies heavily on the
work of Freeman, Borgatti, and Everett (the authors of the UCINET software package). The materials here, and their
organization, were also very strongly influenced by the text of Wasserman and Faust, and by a graduate seminar
conducted by Professor Phillip Bonacich at UCLA.  Many other users have
also made very helpful comments and suggestions based on the first
version.   Errors and omissions, of course, are the responsibility
of the authors.

You are invited to use and redistribute this text freely
— but please acknowledge the source.

Hanneman, Robert A. and Mark Riddle.  2005.  Introduction to social network
methods.  Riverside, CA: University of California, Riverside (published in
digital form at http://faculty.ucr.edu/~hanneman/).


Table of contents:

Preface
1.    Social network data
2.    Why formal methods?
3.    Using graphs to represent social relations
4.    Working with Netdraw to visualize graphs
5.    Using matrices to represent social relations
6.    Working with network data
7.    Connection
8.    Embedding
9.    Ego networks
10.  Centrality and power
11.  Cliques and sub-groups
12.  Positions and roles: The idea of equivalence
13.  Measures of similarity and structural equivalence
14.  Automorphic equivalence
15.  Regular equivalence
16.  Multiplex networks
17.  Two-mode networks
18.  Some statistical tools
Afterword

Bibliography



 
