Something that I have found
troubling in our study of networks is the lack of verification and gatekeeping
in the networked system; suddenly all opinions can be shared equally across
nodes and repeated over and over, such that the constant qualculation of
popular opinion is swayed towards a widely held belief, regardless of its
validity. This week, Moorcroft’s analysis of the model presents a different
tradeoff: an examination of the model’s validity rather than of its
implications. This lecture brought me back to another conversation this week,
in which I overheard a professor trying to convince a student of the validity
of the field of statistics. To persuade the student, she said that
statisticians had moved away from attacking each other’s methods and towards
mixed methods, combining the strengths of multiple approaches in order to draw
the most accurate conclusions. I have less faith in the idea that fields have
turned towards minimizing the noise about how information is generated in
order to maximize the signal of the information itself.
Especially in the case of modeling, where a model of the future could inspire
an immediate call to action that changes the model’s constants and therefore
renders its prediction inaccurate, the tension between knowledge and its
meaning becomes more pronounced. As such, my understanding of the relationship
between scientific knowledge and its dissemination to the general public is
well described as a “general amalgam of agents and conditions, reactions and
counter-reactions, which brings social certainty and popularity to the concept
of the system” (33).
What exactly does this adjustment
mean, and how can we weigh the volume and validity of information so that we
engage with the information itself, rather than the package it is delivered in?