Tuesday, November 26, 2013

20131126.0834

Yesterday, I was working on some of my outside teaching activities (a bit of extra money is welcome, and that means a bit of extra work needs to be, as well), and I came across John Markoff's 24 November 2013 New York Times article "Already Anticipating Terminator Ethics."  In the article, Markoff reports on the Humanoids 2013 conference, focusing on one presentation at the event: Ronald C. Arkin's "How to NOT Build a Terminator."  He provides a summary of Arkin's talk, using it to note that "we are a long way from perfecting a robot intelligent enough to disobey an order because it would violate the laws of war or humanity" and thus that humanity still must accept responsibility for the field actions of the automatons it creates.  Markoff notes that there was some challenge to Arkin's ideas about the potential peril of robotics research, if only in passing and at the end of the article, indicating his fundamental agreement that continued development of autonomous military machines is ethically fraught.

I find much of interest in the article.  As a student of literature who has at many points made reference to the Good Doctor, I was pleased to see Isaac Asimov deployed in the article.  Any time the kind of work with which I am familiar--and for my interest in which I was ridiculed or abused--appears in broad reference, I am glad to see it; something in me is satisfied by the impression that I was right to familiarize myself with the material, since doing so allows me to be part of the ongoing conversation.  More formally, as a literary scholar, I appreciate the irony of using Asimov in a discussion of robots being developed in the service of DARPA; Asimovian robots are predicated on being unable to harm human beings, and "defense" projects (as Markoff reports that Arkin notes) are all too easily turned to the destruction of life and property.  (The seeming dodge that Markoff quotes Gill Pratt as offering suggests that such a turn is intended--although I would have to have more context to be sure.)  I appreciate that literary and figurative devices appear in "sober" reporting, and I would like my students to learn the lesson that such things are good to know and to understand when they are seen, as in Markoff's employment of Asimov.

Despite my pleasure at seeing the Good Doctor cited, I have some quibbles with Markoff's specific use.  By referring to Asimov as a "science-fiction writer" in a science article, he creates the impression that Asimov was only a fiction writer, which is demonstrably untrue.  While the Foundation and Robot novels are perhaps his best-known work, Asimov was also a prolific writer of non-fiction, including Biblical and literary commentaries and an astonishing number of essays.  Too, he was among the professoriate at Boston University, having earned his PhD from Columbia at a remarkably young age (a younger age than I earned mine, and I had mine before I was thirty, which is early).  To imply a dismissive "only," then, does the man a disservice.

Similar are the errors of fact in the article with reference to Asimov.  For instance, Markoff notes that Arkin's talk begins "where Asimov left off with his fourth law of robotics--'A robot may not harm humanity, or, by inaction, allow humanity to come to harm.'"  The law referenced appears initially in the 1985 novel Robots and Empire, and it marks a significant shift for a character who eventually (both in terms of composition and in terms of the Asimovian milieu) assumes a godlike status (in a bit of irony for so dedicated a humanist as was Asimov).  And it is worded slightly differently from the way Markoff quotes it--but I suppose that a missed preposition may be forgiven (or that a later edition of the text than mine might have changed it).  It is also not the fourth law but the Zeroth (since zero precedes one, and the law regarding humanity takes precedence over the First Law, which protects the individual human being), although calling it fourth (with the lowercase, not-a-proper-noun f) can be justified on the grounds that it was the fourth to be developed.  Still, the questionable wording does not argue in Markoff's favor, any more than getting a date wrong does; early in the article, Markoff notes that Asimov anticipated the need for robotic ethics fifty years ago, and that does not go back far enough.  The anticipation goes as far back as the late 1930s, as the Good Doctor notes in his introductory remarks to the 1990 collection Robot Visions, and Asimov's codification of those robotic ethics appears as early as 1941 (as does the word "robotics," as the OED notes)--more than seventy years before Markoff's piece.  Again, more accuracy ought to be given to that particular son of the Best of the Boroughs.

If I come across as something of an Asimov fanboy...I am something of an Asimov fanboy, so it makes sense.  But I am also a scholar of writing, and Markoff's writing could have been better in some ways.  The topic he treats and the conclusion he reaches about it deserve to be handled with the utmost care and diligence.  So while he does well to engage with them and to bring them to the attention of the general public (which the New York Times serves to do, as I have noted), if only for a short while (because most will not long remember that he has written, let alone what he has written), he does less well than he ought to have done.
