Sunday 3 November 2013

MSc Web Science - Week 5


Day 305/ Emmadukew © 2013/  CC BY-NC-SA 2.0

This week there was a strike.

Quantitative Research Methods

We looked at the chi-square test, which is used with categorical variables. The chi-square test is unreliable with small samples and can only test for an association, not its direction. As a rule of thumb: don't use it if more than 20% of cells have expected counts below 5.

A chi-square test is undertaken to find an association between variables:
1. Set the hypothesis and define the population
2. Assume the null hypothesis (no association)
3. A large discrepancy between observed and expected counts indicates the null hypothesis should be rejected
4. Find the mean pass percentages for all variables
5. Look at the expected counts in SPSS
6. Degrees of freedom = (number of rows − 1) × (number of columns − 1)
Once that many expected counts are known, all the others are fixed by the row and column totals.
7. Use a chi-square table to work out the critical values at the 5% and 1% levels
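The steps above can be sketched in a few lines of Python. The 2×2 pass/fail counts here are invented for illustration, and the critical value 3.841 is the standard chi-square table entry for 1 degree of freedom at the 5% level:

```python
# A hypothetical 2x2 contingency table (pass/fail counts for two groups);
# the numbers are made up purely to demonstrate the calculation.
observed = [
    [30, 20],  # group A: pass, fail
    [18, 32],  # group B: pass, fail
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Expected count for each cell = (row total * column total) / grand total
expected = [
    [r * c / grand_total for c in col_totals] for r in row_totals
]

# Chi-square statistic: sum of (observed - expected)^2 / expected over all cells
chi_sq = sum(
    (o - e) ** 2 / e
    for o_row, e_row in zip(observed, expected)
    for o, e in zip(o_row, e_row)
)

# Degrees of freedom = (rows - 1) * (columns - 1)
df = (len(observed) - 1) * (len(observed[0]) - 1)

# Critical value from a chi-square table for df = 1 at the 5% level
critical_5pc = 3.841

print(f"chi-square = {chi_sq:.3f}, df = {df}")
print("reject null at 5%" if chi_sq > critical_5pc else "fail to reject null at 5%")
```

With df = 1, only one expected count needs to be calculated; the row and column totals fix the rest, which is what the note under step 6 means.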

Computational Thinking

Having been assigned to groups at the end of last week, we (Anna, Jessica, Conrad, Andrew and I) chose a subject for our public engagement lecture ('What makes the Web the Web?') and submitted an abstract outlining what we had in mind. We also started work on lesson planning for the sixth-form student computing teaching activity.


Hypertext and Web Text for Masters

Telling Tales: Hypertext as non-sequential writing which offers readers a choice of readings. Barthes – the reader fixes the text, not the author. The concept of 'ergodic' literature – the reader must expend some non-trivial effort in creating meaning.

Non-linearity can be introduced into:
story (fabula), narrative (plot), and text/image

Interesting hypertext literature:
The electronic labyrinth
Afternoon - a story
253 or Tube Theatre

Narrative game types:
Ludus – structured, Paidia – unstructured, Aleatory – random


Foundations of Web Science

Quote of the week from Sergio Sismondo:
Science's networks are heterogeneous in the sense that they combine isolated parts of the material world, laboratory equipment, established knowledge, patrons, money, institutions, etc. These actors together create the successes of technoscience, and no one piece of a network can wholly determine the shape of the whole. (Sismondo, 2003).
Our seminar discussions have prompted me to explore how science is carried out and how results are published - which ultimately affects how findings become accepted in society. Essentially there were two main threads in my thinking. Firstly, peer review doesn't appear to be working as well as it may have done in the past, a view reinforced by an excellent briefing in the Economist entitled 'Trouble at the lab: Scientists like to think of science as self-correcting. To an alarming degree, it is not' (which draws on a meta-analysis of surveys questioning scientists about their misbehaviours, by Daniele Fanelli, published in PLoS ONE). As a result of many interrelated factors, the briefing asserts that "There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think."

The second thread of my thinking relates to how scientific research is strongly affected by institutionally and commercially induced bias. This position was reinforced by some video editing work that came my way this week via the National Institute for Health Research. They held a conference early last month to celebrate 25 years since the establishment of the Health Technology Assessment Programme and needed eleven presentation videos edited and encoded for their YouTube channel. Of a number of impressive presentations, the one by health services researcher Sir Iain Chalmers, one of the founders of the Cochrane Collaboration and coordinator of the James Lind Initiative, stood out.

Sir Iain's contention is that there is a 'stacked deck' in research activity which results in biased under-reporting of research. He says: 
Over 50% of studies are never published in full and those that do get published are an unrepresentative sample...the ones that report positive or statistically significant results are more likely to be published and outcomes for statistically significant studies have higher odds of being fully reported. (Chalmers, 2013) 
In addition to the propensity of institutions to adopt technology before it has been shown to be useful, this state of affairs results in the application of poor and potentially life-threatening practice. 

An antidote to this is outlined in the Testing Treatments web site:
  • Fair tests of treatments are needed because we will otherwise sometimes conclude that treatments are useful when they are not, and vice versa
  • Comparisons are fundamental to all fair tests of treatments
  • When treatments are compared (or a treatment is compared with no treatment) the principle of comparing ‘like with like’ is essential
  • Attempts must be made to limit bias in assessing treatment outcomes.
In the Web Science context, for 'treatments' read 'web interventions'. I believe that this approach is extremely relevant to the conduct of research in Web Science.

Digital Literacies Student Champions

I published a new post on the Digichamps blog site: Getting Started with WordPress.

Also, I am working with Lisa Harris, Ellie and Meryl on a short presentation for the Biological Studies Careers Fair later this month. Essentially, I have agreed to talk for 10 minutes on digital identity management with a focus on the Reppler online reputation app.


References

Fanelli D (2009) How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLoS ONE 4(5): e5738. doi:10.1371/journal.pone.0005738

NIHR, 2012. Iain Chalmers, INVOLVE 2012 Conference. Video [Online] Available at: http://youtu.be/R9dke0t1QuU

Sismondo, S., 2003. “The social construction of scientific and technical realities”, from An introduction to science and technology. Oxford: Blackwell Publishers, pp.51-64. 
