The following links are no
longer functional:
http://hnn.us/articles/last-bow-35mm-film
https://www.brandeis.edu/facultyguide/person.html?emplid=0adcef42793cb212c9d013f9b84de92bfbcf6972&_gl=1*s0umtl*_ga*MTY3NzM4Njc0My4xNjkwMDg1NTQ5*_ga_MFGTX36NQY*MTY5MDA4NTU0OS4xLjEuMTY5MDA4NTU2OS40MC4wLjA.
https://blogs.brandeis.edu/socialsciences/2012/03/01/the-last-bow-for-35mm-film/
Professor Doherty has been so kind as to furnish me with a copy of
the draft he submitted to HNN, preserved in his e-mail files, but
I do not have reprint permission, so I must present a summary.
Doherty's argument is basically about the "physicality" of 35mm
film, in the sense of holding up lengths of film in one's hands
and looking at the frames. He frames the point with reviews of two
recent nostalgic movies in which characters interact with physical
film. I would draw a distinction between physicality and
malleability: a well-equipped personal computer comes furnished
with video editing software, and can readily create derivative
works of one kind or another.
----------------------------
02/25/2012 06:23 AM
To: Thomas Doherty <doherty@brandeis.edu>
RE: The Last Bow for 35mm Film, History News Network, 2-20-12
Sir:
Digital movie projectors are simply the tip of the iceberg. What
is happening to the movie industry is much more pervasive, a
wholesale technological transformation, based on the computer.
Processing moving pictures digitally requires a great deal of
computing power, so the change has taken longer than
comparable changes in print publishing, but the basic process is
much the same. There are of course digital cameras and
editing equipment, but probably the most significant element has
been the introduction of animation and video-game technology into
movie-making.
This process involves offshoring large quantities of
craft labor to India, but it also involves rethinking the basic
idea of a lushly produced movie.
Andrew D. Todd
1249 Pineview Dr., Apt 4
Morgantown, WV 26505
http://rowboats-sd-ca.com/
Here are a series of blog comments on Techdirt.com, summing up my
position:
============================================================================
Movies as Nineteenth-Century Paintings.
It may be that Hollywood is technologically obsolete at a certain
level, the level of _Apocalypse Now_; that the idea of expensively
re-enacting the past in maximum possible detail in order to film it
is obsolete. I should like to construct an analogy from the history
of the visual image.
In 1815, the year of Waterloo, there were no cameras. The
collective sense of what the battle looked like was formed by
paintings, done many years after the fact. Producing such a
painting involved getting people to pose for each figure, etc. In
its essentials, it was rather like movie-making. One of the
more successful painters was the wife of a colonel, who was in a
position to borrow her husband's troops for a re-enactment.
Still, it has been pointed out that the "factual" content of the
images is often impossibly wrong. Things simply couldn't have
happened that way. However, people related to the picture as
if it were the truth.
The painting of recent battles was a special case of history
painting. In the late eighteenth century and the nineteenth
century, each country asserted its national identity by
commissioning large and elaborate paintings of significant
episodes in its historical past. The artists went to a great deal of
trouble to research all the different elements, making sure that
the characters wore the right kind of clothes, etc. When the
painting was finished, another artist would engrave an adaptation
on a copper printing plate, taking account of the
differences between the two media, and the plate would be used to
produce paper prints which would be sold in quantity.
However, these pictures were inauthentic in one big sense. They
were images of things which, at the time they happened, were
not considered worth recording, because the collective mindset was
different. Sometimes, the incidents were themselves fictional,
invented a couple of hundred years after the fact-- or five
hundred, or a thousand, or two thousand. The Victorians were
effectively imposing their own values on the past. For a
time, they were able to fool themselves by collecting period
detail.
In 1863, the year of Gettysburg, photography existed, but
the paraphernalia necessary to take pictures filled a whole
wagon. Photographers like Mathew Brady drove over the
battlefields when it was all over, setting up their big tripod
cameras, and taking pictures of the aftermath, and taking the wet
glass photographic plates directly into the darkrooms built into
their wagons to process them on the spot. At this time, half-tone
printing had not yet been invented, so there were limits to the
circulation of these pictures. Print media, such as newspapers,
made engravings, drawing from various sources, some
photographs, some sketch-drawings, and some pure
imagination. A large selection of a couple of thousand photographs
was eventually published in _The Photographic History of the Civil
War_, in ten volumes, edited by Francis Trevelyan Miller, in 1911.
However, the public visual memory of the Civil War was
still defined by the big painting commissioned for a public
building. This painting drew on the same kinds of sources as
newspaper engravings.
History painting, in general, collapsed at the end of the
nineteenth century. Halftone printing had come along, making it
possible to photograph works of art, and economically reproduce
them on paper. Equally to the point, travel had become cheaper and
easier with the advent of railroads and steamships. Instead of
making new pictures out of the imagination, it was possible
to travel all over Europe and see works of art produced in
historic times, and photograph them, and accurately reproduce the
photographs for sale. In particular, the Spanish pictures were
influential, those of El Greco and Velasquez. Madrid had been a
remote city, difficult to get to, and the capital of a country
which had once been a great power, but was now lost in its
own past. In a sense, the Prado Museum and the Escorial Palace were
a kind of lost world. History painting was demoralized. In
the old pictures, people encountered a vivid way of life, which
was not merely a translocated copy of their own time.
The big new wave of painting in the 1870's and 1880's was
Impressionism. In terms of its subjects, Impressionism did not
stray very far from home. Impressionist pictures shocked a lot of
people because of their unfinished style, but at one level, they
were fairly conventional. The pictures were usually of
things which fell within the daily experience of the
upper-middle-class customers. The Impressionists painted quite a large number of
pictures of little girls, engaged in their normal activities,
playing with toys or whatever, and these pictures were
commissioned by the parents. Very few impressionist pictures
had any kind of public or political import. The people who painted
public pictures were becoming artistic second-raters. Poster
artists, working for the print media, were often very political,
when they were not engaged in their main business of advertising.
A very elaborate illustration, showing more than a hundred
people, was likely to be an advertisement for a department store--
or a labor union poster. Of course, eventually, advertising
switched to photography when photography got good enough, with
rich enough production values.
By the time of the World Wars, cameras had become compact
enough and automatic enough that a news photographer, someone of
the type of Robert Capa, could carry a camera instead of a rifle,
and could keep up with the troops to such an extent that he was
likely to be killed in battle eventually. Being that far forward,
the photographer was able to produce compelling photographs, which
could be sent home in the mail as unprocessed rolls of film, and
which appeared in newspapers and magazines while the fighting was
still going on. These photographs effectively deprived the
artist of his ability to craft a collective memory. They were not
technically so good as what an artist could produce, but they were
comparatively real, and they wound up in every household, in the
form of stacks of back issues of Life magazine in the closet or
attic. Of course, during wartime, the pictures still had to pass
censorship, so they did not show moral decadence or the like.
By the time of Vietnam, ordinary soldiers were often carrying
cameras, although they naturally didn't have as much time to
use them as a combat journalist would have. Whether the
pictures were taken by amateurs or professionals, Vietnam was the
great age of the uncensored war photograph, depicting war in all
its physical brutality. Only one veil was left: that of inner
mentality. Lieutenant Calley still knew enough to lie about what
he had done at My Lai. If he was giggling hysterically
while he personally killed the small children, there is no
record of it.
Iraq took the process one step further. Reasonably efficient press
censorship was in place, but cameras had penetrated down
to the "moron" level, at places like Abu Ghraib.
According to recent press interviews, Lynndie England still cannot
understand why what she did was wrong. She simply
hasn't the education or intelligence to grasp why abusing
prisoners-of-war is shameful. It was someone like
_that_, who did not understand
the necessity of destroying potential evidence, who could casually
create souvenir-pictures, the way a hunter does with a dead deer.
And that is what the collective visual memory of Iraq will
be.
Now, of course, other things equal, film/video cameras are heavier
and bulkier than still cameras, but they are following the same
basic trajectory. Photographic images, one might add, tended to
"frame" movies, to define the visual look which movie-makers
were striving for. At a certain point it will become
impossible to make Hollywood movies which compete with the home
movies made by the participants. Something similar applies to
things like gun-camera films, video which is automatically
recorded by equipment. Large sections of a movie will be built up
out of commonly available archival footage, so abundant that it
has no scarcity value, and is not controlled by a few
organizations.
Alternately, film-makers might decide that they simply don't need
to pretend to portray certain events realistically. They can
provide animation at the cartoon level for "framing content,"
which can be done inexpensively with tools like Second Life, and
concentrate their actual filming on much smaller scenes. A
novelist varies his exactness of description according to
circumstances. He doesn't tell you exactly what a building looks
like, unless it matters. In that case, he may very well draw a
picture, or a map, or a diagram. A traditional Hollywood
film-maker has to go all-out in achieving exact period detail,
because he is trying to convince you that what he is showing you
is reality. Once he simply gives up that goal, because it
has become unattainable in the light of home movies and
surveillance footage, the rules change. He has to give you
enough visual cues so that you know what is going on, and
where to locate the action. However, that can be done with
inexpensive animation and image-processing technique.
Just as painting in the "Grand Manner" went through a fatal
loss of confidence, I suspect that film-making in the Grand
Manner will also go through a fatal loss of confidence. It will
come to be understood that if a movie cannot be made
inexpensively, that is a sign that this particular project is not
well-suited for film.
Roy Strong, _Recreating the Past: British History and the
Victorian Painter_, 1978
Gus McDonald, _Camera, Victorian Eyewitness: A History of
Photography, 1826-1913_, 1979[1980]
Max Gallo, _The Poster in History_, 1974, 1975 [abridged
translation of _I Manifesti_, 1972]
---------------------------------------------------------------------------------------------------------
http://www.techdirt.com/articles/20110121/03200312757/francis-ford-coppola-art-copying-file-sharing-we-want-you-to-take-us.shtml#c1656
============================================================================
The Potential For Cost-Cutting-- The Historical Parallel of
Newspapers.
Rupert Murdoch became rich, largely because, in the 1970's, he was
one of the first newspaper proprietors to recognize the economic
potential of word processors and kindred software. Word processors
liquidated the economic value of the skills of union printers.
Anyone who could type could put copy into the system, and once it
was in the system, it could be edited without being retyped.
Traditionally, typing material into a Linotype machine had been a
specialized skill, comparable to operating a keypunch machine in
data processing. In the new word-processing regime, stored data
could be just fed into the machine, and so be printed. Printing
could easily be outsourced to independent printing plants, and a
newspaper could keep on publishing, even if it was in the middle
of a violent labor strike. Even if the journalists of a local
paper went on strike, the management could put together a
presentable paper from the wire-service material and the
syndicated features, downloaded and cut-and-pasted into the page.
Murdoch, recognizing all of this, bought up newspapers which were
unprofitable because their labor expenses were too high, and
picked fights with the unions, and won the fights, so that the
newspapers became profitable. Now, eventually, of course, the
wheel turned full circle, and websites began to do to Murdoch what
he had done to the union printers. You reap what you sow.
Just as the union printers, in their desperation, resorted to
criminal acts such as throwing bricks at people, Murdoch and his
friends, in their turn, and out of a similar sense of desperation,
eventually resorted to the various acts for which they are
presently being investigated.
Well, the same process is about to repeat itself in Hollywood. The
economic value of the skills of most of the employees will be
liquidated. The movie-industry unions presently worry about
"runaway productions," in Canada or New Zealand, but that is only
the beginning. The central locus of film production is shifting
from the stage to the computer. Instead of making things come
together on a stage, in front of a camera, the technologically
progressive film-maker causes things to come together inside a
computer.
Take, as an example, the new Lytro camera. It is essentially a
massive array of microscopic cameras, which captures what amounts
to a hologram instead of a two-dimensional image. This means that
focus and exposure do not have to be determined at the time
pictures are taken, and it also means that the camera records
depth information as well as color and brightness. The first
property means that the camera crew does not have to be so large.
The second property is more interesting, because it enables
something like "greenscreen" technique, only without the green
background. The post-processing software can be programmed to
ignore anything which doesn't fall into focus inside a specified
distance range, in a particular direction. An array of Lytro
cameras can be built into a cart or a vehicle, very much on the
principle of Google Street View, and sent out to inexpensively
collect three-dimensional background scenery. So there is no
longer any need to film on location, with all the expenses that
involves, and likewise, a drastically reduced need for stage
carpenters, grips, and similar trades. The cameras and similar
equipment in the studio can be built into robots, and operated
under remote control from... wherever. Similarly, large arrays of
microphones can generate signals which can be processed to extract
the desired sound, on much the same principle as the Lytro camera.
Of course, all of this results in more post-processing work, but
that can take place in India or China. Let us say that the
developed-country labor requirements of film-making might be
reduced by a factor of ten.
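To make the "greenscreen without the green background" idea
concrete, here is a minimal sketch in Python of depth-based
matting. It assumes the light-field post-processing step has
already produced a per-pixel depth map; the array names and
figures are hypothetical illustrations, not Lytro's actual
interface.

    import numpy as np

    def depth_matte(rgb, depth, near, far):
        # Keep only pixels whose estimated depth falls within
        # [near, far]; everything else becomes transparent,
        # as it would against a green screen.
        mask = (depth >= near) & (depth <= far)
        alpha = (mask * 255).astype(np.uint8)
        return np.dstack([rgb, alpha])

    # Hypothetical example: keep an actor standing 2-4 meters from
    # the camera and drop the studio background behind him.
    rgb = np.zeros((1080, 1920, 3), dtype=np.uint8)   # placeholder frame
    depth = np.full((1080, 1920), 10.0)               # placeholder depths, in meters
    composited = depth_matte(rgb, depth, near=2.0, far=4.0)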
-------------------------------------------------------------------------------------------
http://www.techdirt.com/blog/innovation/articles/20120210/03382117728/how-do-we-know-that-piracy-isnt-really-big-issue-because-media-companies-still-havent-needed-to-change-as-result-it.shtml#c517
============================================================================
Two-Hour Reserve, Where the Film Studios Will Go.
I've seen something like this before. When I started graduate
school (Anthropology, then History), university libraries commonly
put scarce books on two-hour reserve. This applied especially
during the critical first year, the "boot camp," when people were
being put through the wringer. The situation was that fifteen or
twenty people would all be trying to read the same book, of which
the library had only one copy, over a period of two or three days,
before the next class session, so it had to be put on two-hour
reserve in the interests of fairness. Of course you couldn't read
a book, at least, not the kind of book I'm talking about, in two
hours, but you could make photocopies. Then, if you rationed
yourself to four hours sleep, you had just barely time to read the
book in time for class, write a reading note, and be able to
answer the professor's oral questions. Being in such a
program was incompatible with any kind of part-time job, of
course. The professor and the library turned a Nelsonian blind eye
to the photocopying. They couldn't ask starving graduate students
to spend fifty or a hundred dollars a day on books, even if
particular titles were still in print, but they could, by God,
require them to _read_ that many books. Even if the books were in
print, unless the
professor had pre-ordered them through the bookstore, there simply
would not have been time to find copies, this being before Borders
Bookstore took off. The professor was in the position of saying
that graduate students were either required to buy a book or
required to photocopy it. The professor decided what the
poorest member of the class could afford, and acted accordingly.
Of course, photocopying cost about as much as mass-market
paperback books would cost, but these weren't mass-market
paperbacks. This kind of copying did not have much economic
implication, because it involved only a tiny elite of hardcore
liberal arts graduate students, compared to whom law students
and MBA students were both affluent and lazy. The difference, this
time around, will be that automation will supply the place of
labor. When people rent movies by the hour, they will, of course,
rip them, return the originals, and watch the ripped copies at
their leisure. The only practical defense the movie industry will
have will be to sell movies outright for rental prices.
Hollywood is not going to find a painless marketing formula which
allows business as usual. It will have to get its costs down.
Hollywood will go to Bollywood, that is, it will move its
operations to India. As "Anonymous Coward" (Dec 11th, 2009 @
10:19am) notes, the vast majority of the people involved in making
a movie do not appear on screen. They are cameramen, gaffers,
grips, soundmen, lighting men, carpenters, electricians,
costumers, make-up specialists, film editors, and a hundred other
specialized trades. However, this means that it does not matter if
they are Indians. The Indian film industry is one of the most
vibrant ones in the world, producing huge numbers of films in
multiple languages, and distributing them to Indian audiences who
are film junkies in ways which Americans have not been since the
1930's. At some point, American directors and leading actors will
tap into this system.
Now, as for the extras, walk-ons, etc., the largest category of
actor, such people have traditionally moved to Los Angeles,
registered with casting agencies, and then found themselves
ordinary jobs to live on while waiting for screen calls. They have
worked as waiters or cab drivers, or the like, dead-end jobs where
the employer expects a high turn-over, and doesn't particularly
mind people leaving without warning or notice, and will hire
someone on a day's notice without references. Allowing for
precariousness of employment, bit-part actors have been paid
approximately minimum-wage for the net time lost from their
table-waiting jobs. Of course, an expatriate cannot do that kind
of thing in India, but living expenses are much lower, and someone
who is stage-struck can work in the United States for a couple of
years, in the kind of job for which one does need references, eg.
teaching school, and save up enough money to live in India for a
couple of years. Indian producers and directors will discover that
they can make movies for the American market, working with
Americans who are not affiliated with the American film industry.
Once an industry moves offshore, its political influence
diminishes. It is no longer a source of steady high-wage
employment for Americans. The political base of the movie industry
is someone like a cameraman. The cameramen, etc. are not like
actors-- they are craftsmen. Within reason, a good cameraman can
film any kind of movie, which means that the cameraman can work
steadily at high wages, filming whatever is being filmed. He votes
for whoever favors the film industry, just the way autoworkers
used to vote for whoever favored the automobile industry. As the
movie industry moves offshore to cut costs, it will leave the
union cameraman behind. It will no longer have its own
congresscritters like Howard Berman or Mary Bono.
----------------------------------------------------------------------------------------
http://www.techdirt.com/articles/20091210/0526447287.shtml#c599
============================================================================
Instrument-Playing as Calligraphy.
Think what you might do if you wanted to make the physical act
of writing difficult. You would get rid of the computer, of
course, and other keyboard devices such as typewriters, and also
the pencil, and insist on the use of the pen and un-erasable India
ink. You would require a pen that did not have an internal ink
reservoir, and which had to be periodically replenished by dipping
in an inkwell. The pen would have a broad angled nib with a narrow
edge, and the scrivener, to give him his traditional name, would
be expected to form letters with thick and thin lines in
appropriate places, and to add appropriate ornamental curlicues
and flourishes, the kind of thing you see on diplomas and
suchlike. That was the normal mode of writing in the middle ages.
In the end, you would reach a point where most people did not have
the skill to write things down in an acceptable manner, and
handwriting would become a "mystery." That is approximately where
instrumental music is.
Now, let us do the opposite. Let us think about how to make
instrumental performance easy. Let us devise an improvement on the
Theremin. Imagine a device more or less similar to a Wii-Mote,
which can be manufactured to sell for ten dollars or so, cheaper
than almost any kind of real instrument, because all of its
precision elements are packed into a chip or two. Now, take the
following conventions: Left-Right is pitch, similar to a piano
keyboard; In-Out is tempo; Up-Down is volume;
Clockwise-Counter-Clockwise is note length, relative to tempo;
Grip Pressure is pitch oscillation, ie. trilling. In these terms,
music has a physical shape, and it is possible to draw music in
the air in front of one. Alternatively, you can visualize music
with a pair of data-goggles. Most people can draw well enough to
communicate, not as well as a trained artist, of course, but
sufficiently to get an idea across. Just about everyone can sing
after a fashion, again not as well as a trained singer, but
sufficiently. This device I have described is not a toy. It is a
real instrument, and an extremely versatile one.
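To illustrate the conventions just listed, here is a minimal
sketch in Python that maps a single gesture sample to note
parameters. The controller fields and the numeric ranges are
assumptions made for the sake of the example, not the interface of
any real device.

    from dataclasses import dataclass

    @dataclass
    class Gesture:
        x: float      # left-right position, 0..1
        y: float      # up-down position, 0..1
        z: float      # in-out position, 0..1
        twist: float  # clockwise-counterclockwise rotation, 0..1
        grip: float   # grip pressure, 0..1

    def gesture_to_note(g: Gesture):
        pitch = 36 + round(g.x * 52)            # left-right -> pitch, like a keyboard
        volume = round(g.y * 127)               # up-down -> volume
        tempo_bpm = 60 + g.z * 120              # in-out -> tempo
        beats = 0.25 + g.twist * 3.75           # rotation -> note length, in beats
        duration_s = beats * 60.0 / tempo_bpm   # note length is relative to tempo
        trill_depth = g.grip                    # grip pressure -> pitch oscillation
        return pitch, volume, duration_s, trill_depth

    # Example: hand held high and to the right, pushed forward, gripped lightly.
    print(gesture_to_note(Gesture(x=0.8, y=0.9, z=0.7, twist=0.5, grip=0.1)))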
My guess is that such an instrument would be enough to produce a
"decimation" of pop musicians, similar to what happened to movie
actors when sound movies came in, circa 1925-30. Just about
everyone who could hear a given effect would be able to reproduce
it. There would not be a gap between reading-literacy and
writing-literacy, so to speak. Someone like Bill Wyman would no
longer be admired for his "penmanship"-- he would be obliged to
demonstrate the quality of his ideas and the quality of his
character.
------------------------------------------------------------------------------
http://www.techdirt.com/articles/20090913/1709366172.shtml#c1089
====================================================================
Your: "Elementary": Sherlock Holmes Is Back in a Big Way, History
News Network, 5-14-12
05/21/2012 06:52 PM
Bruce Chadwick <bchadwick@njcu.edu>
RE:
http://hnn.us/articles/elementary-sherlock-holmes-back-big-way
We tend to see highly successful historical authors as sui generis.
That is a mistake. Closer examination usually turns up large
numbers of similar authors who were not quite so commercially
successful. This is equally true of A. Conan Doyle. There are
two anthologies that I know of, titled _Rivals of Sherlock
Holmes_, a short one edited by Hugh Greene (1970), and a more
extensive two-volume collection edited by Alan K. Russell (1979).
Russell
identified at least thirty "rivals."
At that level, one can make certain observations. A. Conan Doyle
was one of a disproportionate number of detective-story authors
who were physicians. At a certain level, "scientific detection"
was a claim for medical diagnostics as a philosophical system,
probably in opposition to law as a philosophical system. Or, to
put it another way, doctors and lawyers were in dispute about
who was to claim the mass of official positions which had
hitherto been occupied by hereditary noblemen, clergymen,
and professional soldiers, but which were up for grabs with the
expansion of democracy.
Of course, detective stories have very little to do with what
policemen actually do. The most basic function of the police is
to maintain the public peace, so their actions have to do with
responses to more or less overt acts, and the disputes about the
police have to do with how they respond to overt acts, such as
industrial strikes and riots, inter alia.
Andrew D. Todd
1249 Pineview Dr., Apt 4
Morgantown, WV 26505
adtodd@mail.wvnet.edu
http://rowboats-sd-ca.com/
Unpublished Comment on:
Cosma Rohilla Shalizi,
"The Work of Art in the Age of Mechanical
Reproduction,"
January 19, 2010
http://cscs.umich.edu/~crshalizi/weblog/638.html
(My Comment, 01/22/2010 07:58 PM)
I dealt with some of these issues in a paper about Second
Life:
http://rowboats-sd-ca.com/adtodd1a/fut_sl_3.htm
With respect to point 2, the issue of color: in Japan, Hiroshige
and Hokusai, circa 1800, solved the color problem
brilliantly, with a system of registered wood blocks, with marks
to line them up against corresponding marks on the
paper. An eighth of an inch or so of registration error would
not have been critical. They hadn't gotten to copper
plates and the engraver's burin yet. They simply had one
block for each recurrent color, eg. skin, plant leaves,
water, sky, earth, etc., and used these to apply color "washes"
on top of a black-and-white print. I suppose they could have
gotten an assortment of muddy browns as a free bonus, by
combining colors. They did not have to do anything like a modern
color separation.
In respect to the public versus private dichotomy: in Europe, it
would seem to have been an obvious move to print something like
a book of hours. A book of hours was a lavishly
color-illustrated private prayerbook. It was expensive but
private. The most famous book of hours is the _Très Riches Heures
du Duc de Berry_, made for a French prince:
http://en.wikipedia.org/wiki/Tres_Riches_Heures_du_Duc_de_Berry
But there are others, notably that of Catherine of
Cleves. Such a book of hours could have been made
available to more people. However, what actually got produced
was the vernacular bible, the complete authentic textual
authority of the church, in the owner's native language instead
of in Latin. Visual effects were employed by the
Catholic Church, as a kind of adjunct to Springer and
Tetzel. The Protestant Reformation was therefore Iconoclastic.
The same people who printed and sold bibles were likely to be
going into churches with buckets of whitewash to obliterate the
images of the saints. They didn't want the man in the
street to respond sensually to the culture of the church--
they wanted him to respond intellectually, as a kind of
theologian.
When a new technology of artistic representation comes along,
it has a kind of scary power, because unsophisticated
people deal with it as if it were reality. This comes fairly
close to the idea of the "uncanny valley" in robotics. This
effect produces a counter-reaction of some kind.
In a related issue, look at photography as an art.
Specifically, look at Edward Steichen's _The Family of Man_
(1955),
published for the Museum of Modern Art. This book is about the
fullest statement of photography as a humanistic art. The
five hundred-odd pictures represent a continuation of what
painting was doing before the advent of mechanical
reproduction. At that time, the single most prestigious
venue of publication was Life Magazine, which had one of the
highest circulations of any magazine.
Incidentally, I don't know if you will have read Victor Papanek,
_Design for the Real World_ (1972, esp. ch. 3) and Tom Wolfe's
_From Bauhaus to Our House_ (1981) and _The Painted Word_.
Papanek's book is
polemical, and Wolfe's books are satirical, but you would find
them a useful starting point for discussing the nature of
modern art.
http://hnn.us/articles/136035.html
https://web.archive.org/web/20111126010118/http://hnn.us/articles/136035.html
[The comment thread seems to be irretrievable. However, as nearly
as I can remember, Jonathan Dresner made a fairly root and branch
denunciation of historical novels. I responded with this:]
Deep Structure and Counter-Factual Novels.
Well, quite frankly, I think you [Jonathan Dresner] are being a
bit parochial.
I don't know if you have read Marshall Sahlins's little book,
_Historical Metaphors and Mythical Realities: Structure in
the Early History of the Sandwich Islands Kingdom_, 1981.
Admittedly, it has been some time since I read this, about
twenty-five years, give or take a bit, and when I went to my
Anthropology shelves looking for it, I had to get a
dustcloth and do about ten years of spring cleaning.
This is not the kind of literature that historians normally
read, of course. Sahlins taught in the Anthropology
department at Michigan-- he was one of Leslie White's group of
Cultural Evolutionists. Sahlins interprets James Cook's
final adventure in Hawaii in 1779, in terms of what he
calls "Structure of the Conjuncture," ie. the rules of the
game. Cook performed certain actions which the Hawaiians
understood to be a claim to be Lono, just as, for
example, riding into Jerusalem on a certain day on a white donkey
has been traditionally understood, since perhaps the fifth
century B.C., to be a claim to be the Jewish Messiah. The
difference is that Cook, not being deeply versed in Hawaiian
traditions, did not know that he was making such a claim. At
that level, truth is not what happened, but the rules of
the game within which the events happened, the "deep
structure," if I may borrow a term from linguistics.
As a first step, I would like you to consider Douglas C.
Jones' novel, _The Court-Martial of George Armstrong Custer_
(1976). This novel is built around one big counter-factual
element, the proposition that Custer survived the Little Big
Horn, left for dead among a pile of dead bodies. This is not a
very preposterous claim-- there are a surprising number of people
who survived the Holocaust on such terms, circa 1944. This
claim leads to a second counter-factual element, that there was a
court-martial. There was, historically, an official inquiry, in
1879, at the request of Major Marcus Reno, and great sections of
Jones' book are imported, more or less verbatim, from
the Reno Inquiry proceedings. Of course, the scope of the
Reno Inquiry was comparatively limited. Major Reno had
received his orders from Custer, not from General Alfred
Terry, commanding the forces in the Yellowstone valley, nor
yet again from General Sheridan in Chicago, or General
Sherman and President Grant in Washington. Reno had not made
the decision to divide the Seventh Cavalry into
four elements of 100-200 men each. The Reno Inquiry found
that Reno had done very well, considering his starting position,
and that is nothing more or less than the truth. By
imagining Custer to be the man on trial, Jones was able to broaden
the scope of the inquiry, as in the scene where the
prosecutor examines General Philip Sheridan. There is another
expanding element. In historical fact, Libby Custer, with the aid
of her official biographer, Frederick Whittaker, defended
the reputation of her hero. She did this partly by a sustained
whispering campaign, and partly by having the good fortune
to outlive the participants, and thus, to have
the last word in public. Lastly, no one bothered to listen to the
Crow scouts who were still out in Montana. In the 1930's,
people like Mari Sandoz and John Neihardt went out and interviewed
all the surviving Indians of the various tribes, at the last
possible moment before they died. Mari Sandoz, in
particular, had grown up in the milieu, circa 1900, a little Swiss
immigrant girl tagging along behind an old Lakota medicine
woman as they walked over the hills of western Nebraska, looking
for medicinal plants. Jones used the device of the
court-martial to incorporate these three additional strands of
material, and bring them in collision.
"The Crow [scout Goes Ahead] had obviously come to say a great
deal more... he pauses and looks at Custer and laughs, a short
hard burst of laughter. He waves a finger toward the
cavalryman as though scolding a small child. 'Too _many_,
Yellow Hair, too _many_.'" (Jones, ch 9, p. 112, pbk. ed.)
By a device of fiction, Douglas Jones is able to bring out the
deeper truth.
I suppose I should state my personal interest. I have been
involved in web-publishing a series of counter-factual historical
novels written by my father, William L. Todd, dealing with
the Second World War and the Cold War. During peacetime,
the future belligerents were obliged to make choices about
what kind of weapons to spend their money on. When the war came,
they had no choice but to use the weapons they had,
employing the tactics planned around those weapons. My
father chose to examine the question of what would
have happened if they had made different choices ten years
earlier, treating each major battle or campaign in a
different novel. As it happens, his
Battle-of-France-that-never-was, written circa 1985-1990,
turned out to be an eerily accurate prediction of what would
happen in the Iraq war, with IED's and all. My father is a retired
Philosophy professor (U. Cincinnati). As he puts it, Philosophy is
an intellectual poaching license. In his second scholarly
book, _History as Applied Science: A Philosophical Study_ (Wayne
State, 1972), he made a case
for a history which took account of simulation, of things
like war gaming. About five or ten years later, he took up
his own challenge, and started writing his Midway novel, and
taught himself to write novels by successively revising the
thing over a period of ten or fifteen years, while starting
other novels going at the same time. He had been playing around
with this sort of thing for years, ever since he worked in a
highly classified government computer installation in the
mid-1950's. Some of his oldest working papers for the Midway
book's underlying simulation date to 1958.
HNN Post, Jeremy Brecher and Brendan Smith, Is Malcolm Gladwell
Right That Social Media is Useless for
Change?
10/18/2010 06:00 PM
[originally
http://www.hnn.us/articles/132571.html]
now:
http://hnn.us/articles/132571.html
New and Old Social Media.
In the first place, Malcolm Gladwell is talking about
"New Social Media," meaning things like Twitter and
Facebook, not about websites, blogs, or listservs. New
Social Media tend to be constructed in ways which cater to the
mentality of children. For example, Twitter messages are
limited to 140 characters, to exclude people who develop
complex and nuanced arguments, ie. adults. Facebook is an
implementation of what is sometimes called "The High
School Popularity Game." It has fairly fine-grained controls
about who can read a particular message-- like
little girls passing notes about each other in class. Little
Jenny passes a note to Little Brenda during Mrs.
Frunklemeyer's English class, saying that "Melissa's
dress is so ICKY." Much of the enthusiasm for New Social
Media is developed by a group of media theorists-- Gladwell cites
Clay Shirky-- who have research interests growing out of
commerce, and who tend to be employed by schools of business
administration. To tap into this literature, I cannot give you any
better advice than to read Techdirt.com regularly. For
example, there is an extensive discussion of how rock
musicians can continue to make a living under new conditions.
Children are the archetypal consumers. They buy all kinds of
things which an adult would instinctively dismiss as junk. Admen,
sooner or later, wind up talking about how to market
things to children.
Websites, blogs, and listservs don't excite very much
commercial interest, largely because the tools involved have
become commodities. No one can quite see how to
make large sums of money acting as an
intermediary on a large scale. One can start a website,
blog, or listserv without becoming involved with a package of
commercial advertising. The political use of blogs,
etc. is not to organize demonstrators, but to win elections, and
to do so by appealing to the intelligence and good sense of
the moderate middle. The point of the exercise is to
engage moderate elements on the other side, so the
distinctive features of Twitter and Facebook are
broadly inappropriate. "Friends, Romans, Countrymen,
lend me your ears." A blog at its best is very
much like a newspaper at its best, though of course with
certain additional flexibilities. However, for a long time,
there have not been competing newspapers in most parts
of the country, and most newspapers are very
far from being the best they could be.
I understand that Egyptian activists have used social
media to telegraph basic factual information-- that
there will be a demonstration at such and such a
time and place-- simply in an attempt to
assemble a crowd of sufficient size that the police and
military have to back down or else resort to massive firepower.
And, yes, people in the Egyptian movement informed on
each other, but there were differences in time-scale. If you look
at these events from a military perspective, one starting point is
that there are not very many specialized riot police, paramilitary
troops, or whatever. To keep pace, they have to
be much, much more mobile than the demonstrators. It
might take several hours to collect a battalion of eight
hundred men from its various garrison activities at a base
outside of town, and move it-- as an organized unit--
to a particular urban square. By contrast, assume that an activist
organizer can telegraph a message to, say, a hundred thousand
people. That is, he causes messages to go to ten
thousand people, who "seed" the information into
a thousand places of public congregation-- coffeehouses or
whatever. A hundred thousand people hear the message
verbally within a very few minutes, and five percent of them
act on it within an hour, and begin traveling as individual
commuters to the proposed destination, taking as much as two
hours. That puts five thousand people on the site before the
paramilitary police can arrive. The demonstrators may not
win the ensuing donnybrook, but they can at least demonstrate that
the regime does not possess the ability to maintain public
order. You might look at Edward Luttwak's _Coup
d'Etat: A Practical Handbook_ (1968, 1969). A demonstration
is not quite the same as a coup d'etat, but it has a certain
similarity in tactical issues.
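For what it is worth, the fan-out arithmetic in the previous
paragraph works out as follows; every figure is the illustrative
number given above, not data.

    direct_recipients = 10_000   # people who get the organizer's message directly
    venues_seeded = 1_000        # coffeehouses etc. where it is repeated aloud
    hearers = 100_000            # people who hear it by word of mouth
    act_fraction = 0.05          # share who act on it within the hour
    travel_hours = 2             # worst-case individual commute to the site

    demonstrators = int(hearers * act_fraction)   # 5,000 people
    battalion = 800                                # one organized paramilitary unit
    print(demonstrators, "demonstrators on site within", travel_hours,
          "hours, against a battalion of", battalion)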
The initial sit-ins, in their spontaneous phase, had a
fairly limited objective: to induce Northerners to
stop unthinkingly supporting Segregation, and by so doing, to
force the Upper South to choose between its "beloved
institution" and the benefits of economic integration with
the rest of the country. For North Carolina, this was
basically a "no-brainer," once the stakes were fairly laid
on the table. This effectively confined "Jim Crow" to areas
in the Deep South, where the white majority was crazy
enough to close down the school system rather than
desegregate. Thus the things which happened in Alabama, and
especially in Mississippi, were qualitatively different from those
which happened in North Carolina. At that date,
students lived in dormitories, and assembling them was not
an issue.
============================================
SCRAP
Gladwell alludes to, but does not name, the Baader-Meinhof
gang as an example of cohesion.
-----------
http://www.newyorker.com/reporting/2010/10/04/101004fa_fact_gladwell?currentPage=all
John Willingham, More Controversy
in Texas over Textbooks
09/27/2010 08:20 AM
Ibn Fadlan
The business about Swedish Viking filth comes, of course,
from the tenth-century Arab traveler Ibn Fadlan, who also
describes human sacrifice in a Viking funeral, and
the Vikings gang-raping the girl who was to
be sacrificed. Those facts which are amenable
to corroboration by archaeological methods
have been corroborated. I suppose the textbook
account the Texans were complaining about must
have been bowdlerized in the first place.
http://en.wikipedia.org/wiki/Ahmad_ibn_Fadlan
Of course, it can be justly pointed out that Christianity did
not reach large portions of Scandinavia
until at least the eleventh century, that the Vikings in
question were pagan, and that Christianity was
purely nominal for a considerable period thereafter. See, for
example, the account of the Christian missionary and
the berserker in the Njalsaga, which was written
down in the thirteenth century.
HNN post, Sage Ross, The Two Cultures, 50 years later
05/17/2009 02:38 AM
Who Were the Readers?
There is an interesting book, L. Sprague de Camp and Catherine
Crook de Camp, _Science Fiction Handbook: Revised: How to
Write and Sell Imaginative Stories_ (1975). On page 69, they
cite market research done by John W. Campbell of _Astounding
Science Fiction_ in the late 1930's, and in 1949. The typical
reader was a thirty-year-old male scientist or engineer with a
college degree. Alternatively, this could be viewed as a bimodal
distribution, consisting of high school or college
students in science and engineering on the one hand,
and somewhat older qualified scientists and engineers, say
about forty years old, on the other. We are talking about
people who were far from being representative of the
general population. Oddly enough, the science fiction writers,
being a bit older, and pre-GI-bill, had less
impressive formal educational credentials than the readers had.
Isaac Asimov, one of the great hard science fiction writers, not
only wrote science fiction, but also a line of popular
science paperbacks, covering the exciting new areas such as
modern physics, biochemistry and molecular genetics, neurology,
etc., at a level more accessible than, say, Scientific American,
notwithstanding that Scientific American had color
illustrations and the books didn't. As a teenager, back in
the early 1970's, I was given these books by my parents, even
while I was covering the standard high-school
science curriculum. If you view science fiction as
children's books, there is a technique of writing
children's books. They have to be written at two levels, one for
the adult reading aloud, and the other for the child being read
to. Otherwise, the child senses that the adult is bored,
more or less at the level that a dog senses things, and tunes
out in sympathy. Good children's books are full of things
which the adult understands, but the child doesn't
need to. You can see this fairly clearly in a book like _The
Wind in the Willows_ (1908). For example, in
the Wild Wood, the Mole meets a rabbit who is
experiencing the classic symptoms of acute shell-shock.
This was before the First World War, so I suppose Kenneth
Grahame must have borrowed from Stephen Crane's _Red Badge of
Courage_. If you view science fiction in fathers-and-sons terms,
one contained text is written for someone about
fourteen years old, and the other contained text is
written for an adult who happens to be
scientifically educated.
Bad children's books (eg. Harry Potter, the Stratemeyer
Syndicate stuff-- Tom Swift, Hardy Boys, Nancy Drew,
etc.) are not written at two levels. They are written down to
what so-called experts think children can understand. The basic
fallacy in this approach is that it assumes that the child's
parents are illiterate wretched peasants. At a
certain level, the whole point of the public school
system, in the age of Jane Addams, was to remove the child
from the home and prevent the parents from
exploiting the child as cheap labor. The school was a kind
of orphanage on the installment plan, and it produced
things like the Stratemeyer Syndicate.
Soft science fiction is archetypically women's science fiction,
and it has linkages to the information sciences. I don't know
about biochemistry, but computer science is a
comparatively feminine field, at times, and under the
right circumstances, approaching equal representation.
Here's a piece I put up a couple of years ago:
--------------------------------------------
Comment on
Amy H. Sturgis, Libertarianism in Mainstream Science
Fiction
http://hnn.us/blogs/comments/35114.html#comment
[I cannot locate Amy Sturgis’s comment, but here is an
interview she gave several years later, which will no doubt
reflect her views in more evolved form:]
https://www.lfs.org/newsletter/029/03/Prometheus_2903.pdf
(My comment, 02/10/2007 05:17 AM)
The Other Science Fiction Writers
I suppose one obvious point of omission in Eric S. Raymond's
"A Political History of SF" is that he does not say anything
about Women's Science Fiction. Women's Science Fiction is
sometimes classified as "Fantasy," because it is organized
around "magic," rather than hard science. However,
"magic" is in fact a code word for the Information
Sciences (Computer Science, molecular biology, etc.).
Hard science, in the era of people like Heinlein was
centered around thermodynamics. The information sciences do
not connect very strongly with thermodynamics, in the sense
that the ordering of things is much more important than their
heat value. The magic of fantasy is word-magic or mind magic,
in other words, software.
The big trinity of Women's Science Fiction is Marion
Zimmer Bradley (1930-1999), Ursula LeGuin (1929-), and
Anne McCaffrey (1926-). Of the three, Marion
Zimmer Bradley was the most prolific. Her
major literary invention, 'The Guild
of Renunciates,' or "Free Amazons," is somewhere
between a religious order and a radical feminist
commune (in fact, she said she got much of
her material by simply looking at what was going
on around her in Berkeley). She is an enormously
important figure because for many years she ran writing
schools, and edited anthologies for her students to publish
in. The number of her students who eventually published books
in their own right must be at least a couple of dozen. Anne
McCaffrey, who was also a writing teacher if I recall
correctly, wrote her major body of work about
a fictional world where freedom
of action proceeded from telepathic communication with
semi-intelligent giant reptiles (rather like a dog in mental
outlook). Ursula LeGuin, the daughter of the
anthropologist Alfred L. Kroeber, could _perhaps_ be
described as a Gandhian anarchist. In terms of sheer depth of
thought, she was probably the greatest of
the three, even though she wrote fewer books. One might
think of her bearing approximately the same literary
relationship to the other two that Emily Bronte bore to
Charlotte and Anne Bronte. Le Guin wrote three major
works, _The Left Hand of Darkness_, _The Dispossessed_, and the
_Earthsea Trilogy_. Their basic line runs approximately
as follows: "You have in fact got freedom of action.
Your power is in your mind, and short of killing you, there is
no way anyone can disempower you. Now, what do you want to do
with your power?" At the risk of trying to force words into
someone else's mouth, one can say that Bradley, McCaffrey, and
LeGuin pursued a moral communitarianism which took anarchism
as a base condition.
To put it another way, I can write programs faster than a
lawyer can write laws to restrict what the programs are
allowed to do, and I can therefore ultimately run rings around
the lawyer. What matters is what kind of software I want
to write, and feel proud to write. Most of the people who know
enough to write a botnet virus regard doing so as a shameful
thing, so botnets are contained within acceptable limits.
A couple of years ago, Cory Doctorow observed that it was no
longer possible to write science fiction because one
could no longer envision technologies which could not readily
be reduced to practice. At this stage of the game,
the major limiting factor on the information
technologies (in the largest sense of the word,
including genetic engineering) is not lack of scientific
knowledge. Rather, the major limiting factor is
political resistance, partly vested interests, and partly fear
of change.
I should like to take one modern science fiction
writer as illustrative. Orson Scott Card (1951--) is
probably best known for the character "Ender Wiggin,"
the child bred from earliest infancy for the purpose of total
war, who in all innocence becomes a genocidal
mass-murderer while still in his early teens. The
element of speculation is first and foremost a kind of
extension of Jonathan Swift's "Modest Proposal." The fantasy
is that society oversteps certain moral limits.
Note that Card chose to call his hypothetical
faster-than-light communications device an "ansible,"
explicitly linking it to LeGuin, and her almost inhumanly
patient galaxy-traveling wise men. Card's reply, in effect, was that if
one has a communications device, one can plug it into the
controls of a war robot.
===========================================================================
http://en.wikipedia.org/wiki/Bradley%2C_Marion_Zimmer
http://en.wikipedia.org/wiki/Ursula_LeGuin
http://en.wikipedia.org/wiki/Anne_Mccaffrey
http://en.wikipedia.org/wiki/Orson_scott_card
http://www4.ncsu.edu/~tenshi/Killer_000.htm
To: Ralph M. Hitchens: I think you have to realize that people
in high places are not just scientifically illiterate. They
are artistically illiterate, and literarily illiterate,
and philosophically illiterate, etc. They are power
junkies, in short, with very little interest in anything which
is not demonstrably related to power.
================================================
SCRAP:
At a somewhat earlier age, I think I must have gotten just about
every mechanical-scientific toy that was available:
Lincoln Logs, Tinker Toys, Erector Sets, Lego blocks, a toy
lathe-drill press (it would only cut balsa wood, but it was
child-safe, would not remove little fingers or anything
like that), a microscope, a chemistry set, and at least
one of those electrical experiment kits you can buy at Radio
Shack. I was also given something called a Digi-Comp, a simple
computer working on the principle of a pinball machine.
Marbles rolled down chutes and tipped rocker arms back and
forth. There were also Soma Cubes, a very simple toy, a
set of rather oddly shaped building blocks designed by a
Danish mathematician to teach mathematical intuition. A family
friend, my mother's old mathematics professor, worked out
a system of falling dominoes which did digital logic, and I was
given a set of those. I was given a set of mechanical
drawing tools of professional quality, which not only
survived childhood play, but years later, when I took mechanical
drawing in engineering school, I was still using some of
them.
In practice, when I find social science with a lot of
statistics, in the sense of correlating this and
correlating that, it usually turns out to be second-rate
work. The practitioner of statistical social science is
usually abdicating the scholar's duty to tell a
story. For example, Anthropology is the higher travel
literature.
(05/20/2009 12:30 AM)
Computer Science Has Changed.
In the first place, "science fiction" is something of a
misnomer. Perhaps it should be "technology fiction," because the
emphasis is not primarily on science-for-the-sake-of-knowing,
but on applied science. Charles Darwin on the Beagle
doesn't really qualify. Nor does someone who spends a year
following a troop of monkeys around and describing their ecology
and sociology.
I should explain that my research in the history of
computing is mostly in the period 1940-1980, not in
the age of the internet. I used the Babbage Center Oral
Histories for the earlier period, and trade magazines and
professional journals when they became available, circa
1960. The percentage of women in many aspects of computer
programming, broadly speaking, ran about 25-35%, at least
three or four times greater than that in engineering. At this
date, this representation included bachelors degrees in computer
science. The comparatively low rate of women earning advanced
degrees in computer science was an anomaly. One point of
caution is that one should not equate academic Computer Science
with computer programming. Computer Science is something of a
failed discipline, in the sense that it was never able to
dominate its industry in the sense that Mechanical Engineering
dominated the machine-building industries. People who
spoke about Computer Science as a profession usually
had the ulterior motive of banning programming by people
who did not have Computer Science degrees. There
were always vast numbers of people who learned what Computer
Science they needed to know, and started programming, but
resisted indoctrination, so to speak.
It also depends on whether you view Computer Science from the
standpoint of biology, or from the standpoint of engineering. I
was originally trained in a branch of mechanical engineering,
Engineering Science, back in the early 1980's, and I
pulled strings to be allowed to take a sophomore-level Computer
Science sequence. This was just before personal computers
became widely available, and a lot of the difficulty of
covering the material had to do with what one might
describe as "friction," meaning for example, that the keypunches
used for writing programs on punched cards were about as
difficult to use as a Linotype (one couldn't see
what one was typing). They were considerably more
difficult to use than a typewriter, and there weren't enough of
them, so that one had to wait until the middle of the
night. The vending machines in the computer labs sold blank
punch cards in packets of fifty for a quarter (like selling
paper one page at a time), which gives one some idea of
the contemporary scale of programming. When I talked my way into
the Computer Science sequence, I had taken five programming
courses in the engineering school, for about fifteen
quarter-hours (ten semester-hours), and had written less than
five hundred lines of code, distributed over at least a dozen
programs. That was enough to get me a license to break the
rules, because other people had done still less.
Obviously, the rules changed when one had a computer of
one's own. The material could have been covered much faster if
the instructor had been in the position to assume that the
students would be able to try things out during lunchtime.
A whole series of factors like this meant that the subject
matter of computer science was tending to vanish down into high
school. Under these circumstances, nine semester-hours of
college courses could be enough to teach most of the
useful techniques of Computer Science.
In approximately 1980-85, there was a discontinuity in
Computer Science, due to the advent of the personal
computer. Big machines went into economic decline, as work was
offloaded onto cheaper little computers. Big machine companies
downsized. As the little computers grew, there was a
disconnect between commercial development and research. The
designers of a microcomputer with, say, a million
transistors would inevitably look backwards about twenty
years to a proven mainframe design of the same size, as
codified in undergraduate textbooks, not to current research,
which was likely to be about computers with a billion
transistors or more. Young men like Bill Gates realized that
they did not have to stay in school or go to work for big
companies, that they could just take coursework assignments and
commercialize them. If I were to pick one iconic event for
academic computer science, it would be the University of
Michigan's transfer of its computer science department from the
liberal arts college to the engineering school, and John
Holland's subsequent employment difficulties, which led to his
becoming involved with the Santa Fe Institute in self-defense.
I suspect that what happened after 1985 was the "failed science"
phase. On a lot of campuses, computer science was simply being
absorbed into Electrical Engineering. They came up with a major
called Computer Engineering, which was effectively a sub-major
within Electrical Engineering, with a few computer science
courses added. Alternatively, Computer Science could be
subsumed into Applied Mathematics and Statistics, but that was
less common. Under the circumstances, Computer Science was
more than usually susceptible to being taken over by
people who needed the Green Card. This happened in large
sections of science and engineering anyway. Whole
departments became occupied by people from Mumbai or Taipei,
whose outlook was essentially Victorian, who still regarded the
abolition of arranged marriages and dowry as a major step
forward.
See my previous comment:
http://hnn.us/readcomment.php?id=111041&bheaders=1#111041
in:
http://hnn.us/roundup/entries/40331.html
Now, the origins of Women's Science Fiction belong to the
period before 1980, not afterwards. Ursula Le Guin did her most
creative work circa 1970. The period after 1980 is the period in
which Marion Zimmer Bradley was running her writing school, and
publishing anthologies of her students' work. I have located
examples of ironic fantasy fiction in a computer trade journal,
Datamation, from the 1970's. Parenthetically, Datamation was
increasingly written and edited by women.
Here is my considered take on the "woman question" in computers.
I have to say that the position taken by professional feminists
is excessively simplistic:
http://rowboats-sd-ca.com/adtodd1a/free_am.htm
http://rowboats-sd-ca.com/adtodd1a/sm_3_fr.pdf