An Old Lady’s Take on the True Danger of AI

Sunday’s NYTimes was full of articles about the economic danger of the AI bubble, and I am sure that it is a possible economic hazard for the companies involved. But I am much more concerned with the moral hazard, the risk that will be passed on to others, to us.  I think the danger is substantial, and so let me unpack it a little.

Every time I think about why AI scares me, I end up pondering what it means to be human. Human beings have always been greatly invested in proving that we are more than animals (think Scopes trial). The Bible spends a lot of time making sure we know we are in charge – and a step above the animals.  This, of course, has given us license to use animals any way we wish. But is there really such a difference between us and the animals?  Or is it just a great chain of being with a slight dotted line between man and apes?

Aristotle says that the difference is that humans are rational.  But surely AI is more rational than we are!  Descartes said that animals are akin to automatons or robots – merely mechanical – but that humans had souls.  Does the computer have a soul?  How would we know?  Another commonly repeated differentiation between men and animals is that animals adapt to their environments or die, while human beings are capable of changing their environment.  But, these days, it looks like changing our environment might be killing us, so maybe it all comes to the same thing. Maybe all we can say about all of this is that human beings need to feel special, we need to feel superior to animals, and we really haven’t worked out our relationship to a really smart machine.

The further I explored this issue, the more I intuited that the true danger of AI was the loss of any sense of worth or efficacy, any sense that we could do things ourselves.  I could, for example, have AI write my blogs.  I could just give AI a topic and set it loose. You might not even notice the difference.  You might even think that my writing has improved. But.  I would have abandoned the maintenance of a discipline, a sense of self-worth, a lifeline of true connection with those who read it.  And so it is with other things in our lives.  I play Bach rather badly, but I continue doing it, although I could listen to Glenn Gould’s magic through my earphones. My husband and I still do almost all our cooking from scratch, including bread and desserts.  Not long ago, my son told me that he couldn’t imagine spending the time that we spend planning, shopping for, cooking, and cleaning up after meals, as if those were worthless things that should be discarded as soon as possible.  Surely the project of feeding ourselves could be outsourced in some way?  Yes, we could order takeout or go out to eat.  We would save time, perhaps, and some mental energy (but not money).  What would we replace those hours with?  Word games, news feeds, slick TV comedies and soapy dramas?  No thank you.  I understand that people will succumb.  I sometimes succumb, and, as I grow older, I may yet totally capitulate.  But it is not just about self-esteem and good home cooking.  It is about a sense of discipline. A sense of being in control of our selves – could this be what is meant by soul?

Let me just begin by saying that we have allowed discipline to become a bad word.  Michel Foucault and other modern thinkers helped in this regard, with the emphasis on discipline from without rather than Benjamin Franklin’s stress on self-discipline.  Discipline used to be valued, prized.  Discipline used to be seen as a way of living a better life.  Monasteries had disciplines, so did the Methodist Church.  Jesus had disciples.  Buddhism has a discipline called the Vinaya.  I never made much of my life until I learned a certain level of discipline, and I am sure most of us would say the same.

It is always hard to explain to younger folks that we study some things not in order to learn them, but to learn discipline.  I have never used calculus in my adult life, but I learned a lot about logic and determination by studying it when I was young.  I am currently studying French – not because Google Translate can’t meet all of my meager translation needs, but because the study itself keeps my mind active and teaches me something about the very nature of words and language.  Similarly, I write a blog not just for my readers (although I thoroughly appreciate you!), but for the discipline of having to read and think a little more deeply.  The process forces me to actually sit at my computer a few times a month to organize my thoughts.

Sloth and torpor are sins in most religions.  We might be reminded that we have given animals we eat a life of sloth and torpor – we feed them, house them, make all their decisions for them.  And then we see a picture of someone who has “liberated” a cow or pig and it is cavorting in the pasture.  Do we think it misses its sense of security? Do we think it preferred to have us doing its thinking for it?  “Taking care” of it?

AI wants to take our work, particularly our mental work, away from us.  For some, it is taking real work away.  We already have two middle-aged adults in our family who have lost jobs in which they will very likely be replaced by some version of AI.  And such losses have only begun – why would businesses invest big money in AI if they don’t anticipate that it will save them money elsewhere (salaries)?

This is not the first time that our generation has seen technology replace our work. We went to school in the days of blue book exams and math without a calculator (except for the trusty slide rule).  But then things started to change rapidly, and my generation accepted those changes willingly.  I remember when dishwashers became common, and when I used my first garage door opener.  So much of the work-replacement seemed common sense – we didn’t even think about it.  Who wouldn’t want to replace the drudgery of cloth diapers with disposable ones?  But now we need to think, and thinking is exactly what AI is trying to take away from us.  It wants to program our reading and listening (it knows what we like!) and rescue us from the messy business of… living.

I am as lazy as the next person.  I know that, and I know it is a problem.  Much of the meaning of my life involves fighting inertia.  I used to be my own worst enemy, but now I think I’ve got a more formidable one.

I do not think that the danger is that AI will get rid of pesky humans; I think that we will become less human all on our own.  I will fight my personal battle on this, but it will take all the discipline I have – discipline I learned doing calc by hand and hanging cloth diapers on the line.

One of my favorite stories about technology is Ray Bradbury’s “There Will Come Soft Rains.”  He got his title from Sara Teasdale’s poem (also relevant), “There Will Come Soft Rains (War Time).”   And, Happy Thanksgiving!  For all my old lady grumbling, I am exceedingly grateful for my life and my loyal readers.  And this message was not brought to you by AI!

“Here Be Dragons!” – AI and Old Folks

I have been trying (and failing) to stop thinking about Artificial Intelligence (AI).  It is everywhere.  And it occurred to me that the replacement of our brain by silicon networks has ramifications that old people know something about.

But let’s start with an earlier usurpation by technology – that of replacing people power (physical work and transportation) with machines.  I am always amazed when reading Emerson or Thoreau to find that they thought nothing of a twenty-mile round trip walk to see a friend.  These guys were in great shape!  As was almost everyone in those days (except the filthy rich and they were fat).  Now we are all out of shape and spend hours doing Pilates or walking on the treadmill trying to regain some of the fitness that Thoreau had as a matter of the life he lived.  This only gets worse in old age, as we continue to try to persuade our bodies not to freeze up or flab up.  I, of course, am grateful for technology that allows us to replace or medicate arthritic joints and such, but we must also realize that as we delegated many physical activities to machines (machines that polluted the planet), we also handed over a natural way to stay fit. We have even convinced ourselves that going up and down stairs is bad for us, so we should live on one level or (better yet for the economy) invest in a stair lift.  While there is a time of life when stairs are not possible, study after study has shown that climbing stairs is good for old people.  I read once that when Paris put elevators in some senior residence buildings, the life expectancy actually declined!

Now we are accelerating a parallel process that had already been underway – that of replacing our minds.  If we don’t think our minds will decay from reduced use, we are deluding ourselves.  Anyone who has retired from a mentally challenging job knows that “use it or lose it” is true.  Old folks try to compensate by doing word and number puzzles – any group of elders often drifts to that day’s Wordle or the Jumble in the morning paper.  We take French classes, join book groups, tackle the myriads of math problems that show up on Facebook.  We are trying to maintain what is now not adequately used.

And, incidentally, there is AI designed just for old folks, including a monitor with the cute name of ElliQ which will help you take your pills, do your exercises, plan your meals – as well as giving you someone to talk to at any time!  If your younger relatives give you ElliQ for Christmas, you can be assured that they don’t want you looking to them for help!  And if we do not have to exercise our minds at all, what does that mean?  For the old and for the young?

Spinoza equated intelligence with virtue; Aristotle said that it was our ability to reason that makes us human.  Could farming out our intelligence rob us of both our virtue and our humanity?  I fear it might.  There is also something authoritarian about AI – it has the one true answer, the ability to tell us what we ought to do.  And if you think it doesn’t have its own biases, remember two things: it was created for profit, and it has no ethics. Already AI is biased toward capitalism and away from “wokeness.”  As its usefulness seduces us, we will be easy prey for collateral damage.

Earlier times were more skeptical about technology.  They warned us.  In the 19th century, as technology spread in the form of trains, gas light, and electrical power, there were many thoughtful discussions about whether it was good or bad.  Two major utopias of that period were set in worlds where the decision had been made to discard most technology.  One thinks of Samuel Butler’s Erewhon (1872) or William Morris’s News from Nowhere (1890).  These are “post-technology” narratives, where humans have taken life back into their own hands.  Here is Samuel Butler:

True, from a low materialistic point of view, it would seem that those thrive best who use machinery whenever its use is possible with profit; but this is the art of machines – they serve that they may rule.  (from Erewhon)

To avoid this despotism of technology, Erewhon destroyed all the machines created in the past three hundred years.

Similarly, William Morris created a world that has severely limited the invention and use of technology.  Both utopias were in stark contrast to Bellamy’s Looking Backward (1888), which more or less predicted that science and technology would solve all our problems by the year 2000 – although it had also replaced capitalism with socialism, so it wasn’t a profit-based technical utopia.  Hard to imagine.

But, again, as I said at the start, old folks know what happens to our mental and bodily functions if we don’t use them enough.  We also have a long view of the kind of change that technology engenders; we have watched the dumbing down of culture, the plague of obesity, the destruction of our attention span.  Elders are cautious folk, and we are worried.  In the Middle Ages and earlier, when cartographers had gotten to the end of their knowledge of geography, they labeled the unknown areas with warnings:  Hic Sunt Leones (Here Be Lions) or Hic Sunt Dracones (Here Be Dragons).  All warnings about AI and related technology seem to have disappeared – it is now blessed by the President, the media, higher education, and the venture capitalists.  But I, for one, will be looking for lions and dragons.

Charlotte Brontë, Luddites, and AI

After Charlotte Brontë wowed her world with Jane Eyre in 1847, she disappointed her reading audience by stepping away from Gothic romance to write Shirley, at least in part about the Luddites of the early 19th century.  (Both books were published under the masculine pseudonym Currer Bell.) The Luddites, you might remember, were a group of craftsmen who were protesting the installation of semi-automated weaving machines, which made skilled weavers redundant.  Shirley, as it turns out, is more about gender, class, and economic roles and less about the Luddites per se, and yet it is a truly wonderful book with notable and quotable insights on all facets of life.  But, in the end, it shows us a world that is capitulating to technology with the same inevitability that our own world is.

I reread Shirley because I was thinking about Artificial Intelligence.  AI is everywhere.  AI is trying to help me (go away!) write this blog.  The very existence of AI makes me doubt the pictures I see and facts I read.  In the past, mechanization and automation have threatened manual workers and artisans the most (think about typesetters), but AI threatens the white-collar worker, the computer programmer, the teacher, the content provider.  The Luddites in Brontë’s novel flail against “progress” to almost no avail.  Similarly, it is beginning to look like the creeping hegemony of AI is inevitable.

But back to Shirley.  When the Luddites initially destroy a loom in transit, this message is sent to the mill owner: “Take this as a warning from men who were starving, and have starving wives and children to go home to when they have done this deed.”  And not all the children are at home, as the mills take root and hire the cheapest unskilled labor:

The mill windows were alight [because it was still dark out], the bell still rung loud, and now the little children came running in, in too great a hurry, let us hope, to feel very much nipped by the inclement air; and indeed, by contrast, perhaps the morning appeared rather favourable to them than otherwise, for they had often come to their work that winter through snowstorms, through heavy rain, through hard frost….

[Later] It was eight o’clock; the mill lights were all extinguished; the signal was given for breakfast; the children, released for half an hour from toil, betook themselves to the little tin cans which held their coffee, and to the small baskets which contained their allowance of bread.  Let us hope they have enough to eat; it would be a pity otherwise.  (from Chapter V)

Our anonymous but omniscient narrator clearly has his tongue in his cheek.  These children never have enough to eat.  And yet the mills expand, the new equipment is finally delivered, and we move from the plight of hungry children to the romantic interests of the local gentry, including the mill owner.

In fact, Shirley ends in a double wedding and a brilliant future for the local textile mill.  Brontë’s narrator wraps up by looking into the future and telling us about his visit to the area long after the weddings and the new machinery:

The other day I passed the Hollow, which tradition says was once green, and lone, and wild; and there I saw the manufacturer’s daydreams embodied in substantial stone and brick and ashes – the cinder-black highway, the cottages, and the cottage gardens; there I saw a mighty mill, and a chimney ambitious as the tower of Babel…. (from Chapter XXXVII)

Yes, there are workers’ cottages with gardens, but there is also the chimney and the ash in this once green corner of England.

For many years, optimistic thinkers envisioned labor-saving technology as helping to create a kind of utopia, as giving everyone a chance to use their minds instead of their bodies. Manual workers would unbend their backs and pick up a book or a musical instrument.   Eric Hoffer envisioned university-like campuses for adults, freed from the workplace, who wanted to study in any area they were interested in.  Edward Bellamy imagined a world full of music, books, study, and communal dining.  But instead of having technology free us up to use our minds, it appears that we will have no need of our minds – AI will take care of it.   It might also be noted that Samuel Butler and William Morris wrote utopias where technology is strictly controlled for the benefit of humankind.

Just as the pastoral town in Brontë’s book ultimately and inevitably succumbs to the mill, I have recently watched one area of life after another capitulate to AI.  You might look at the front-page article in today’s NYTimes: “AI on Campus Casting Chatbot as Study Buddy.” The most disappointing defeat has been the way AI has been accepted by educators.  “The students are going to use it anyway,” they often plead, “so we might as well encourage them to use it well.”  That might be true, but I’m suspicious that it would be AI doing the “using” and not the student.  If things are made too easy for us, we get soft – in mind and body.  And if we don’t resist AI now, it is unlikely we will have the mental resources to do it years from now.  The Luddites lost and they lost badly.  But, at least, they realized what they were losing.  And they tried to do something about it.

For a story about how artificial intelligence is not always the answer, you might try my “Two New Apps,” or read almost anything by Ray Bradbury.

The Threat of Singularity and the Promise of Perennial Philosophy

As I have aged, the pace of technology has surely surpassed my interest in “keeping up.”  I have been intrigued, however, by the notion of the singularity, which is defined in many ways, but often as “a hypothetical point in time when technological growth becomes uncontrollable and irreversible.” Technology already feels “uncontrollable.”  It forces me to deal with chatbots and answer yes/no questions.  It fights to supplant me.  Even as I write this, Microsoft is pestering me to let its AI “Copilot” help me; it wants to co-opt my place at the keyboard, convinced (and trying to convince me) that it can do whatever it is better than I can.  What AI fails to recognize is that it is the doing that matters, not a uniformly “perfect” product.

As I was thinking about this, I was strangely reminded of Aldous Huxley and his “perennial philosophy,” which represents a different kind of quest for doing things in the best way, for improving ourselves, or – more specifically – for living life well.  Seekers of the perennial philosophy pursued ageless universal truths, laws, dharmas, which might enable mankind, individually and communally, to reach its utmost potential.  There was no place in this philosophy for technology or even much science.  It had more to do with getting to know the nature of the kind of beasts we are, the kind of world we live in, and how the two interrelate.  “Know thyself,” said Socrates.

Huxley’s book was a bestseller in 1945, as shocked and tired people were emerging from the nightmare of WWII.  Reviews were good, with the New York Times noting: “Perhaps Mr. Huxley, in The Perennial Philosophy has, at this time, written the most needed book in the world.”  Perhaps, after Hiroshima and the gas chambers, no one was looking to technology to solve our problems.  In the last 70 years things have changed; we have become beguiled by technology.  As Wordsworth predicted, “Getting and spending, we lay waste our powers; / Little we see in Nature that is ours; / We have given our hearts away, a sordid boon!”  Indeed.  Our hearts, our minds, and maybe our souls.  We are apparently far more interested in knowing what machines can do for us than knowing ourselves.  Why? It’s easier.

The machines enticed us, seduced us, slowly. Old folks are very much aware of this. When I was a child, technology (in the guise of Western Auto) gave us a big TV with a tiny screen and one to three channels.  It stood in the heart of the house, and we watched it together.  Step by step, it led us to the internet and streaming, and now watching anything is seldom a communal experience.  In my youth, technology gave us one telephone in the center of the house, so that communications were communal (hard on teen-age girls). Now cell phones are stopping any sort of real face-to-face communication.  The internet has made information easier to find, but harder to verify; common wisdom is no longer looked for or found. No wonder they call it the singularity; in wisdom, as in most things these days, we are “bowling alone.”

I think that Huxley’s perennial philosophy is probably the opposite of singularity; it assumes that the answers lie in the truths of the past and not the unknowns of the future, that we can both formulate the questions and find the answers without mechanical help.  The singularity assumes that machines will find the answers, machines which will soon be smarter than us, and that is a scary thought – unless you think that we will always be in control.  Have we ever been in control?  Did we consciously end up with children in their bedrooms sending pictures to strangers and old folks entranced by online “friends” who are trying to scam them?

Literature has long worried over the ascendancy of technology.  RUR (Rossum’s Universal Robots) was written by Karel Čapek in 1920.  The play warned us not to turn our back on a robot.  Arthur C. Clarke wrote the novel and co-wrote the screenplay for 2001: A Space Odyssey in 1968, based on stories he started in 1948.  HAL (Heuristically programmed ALgorithmic computer) was definitely the enemy by the end.  Technology was much cruder in those days, but people were already concerned. As creatures being slowly ingested by technology, we seem to be less worried now than we were then.  It would seem that HAL has made us fat and happy.  And what is the alternative?  A recent bill putting limits on AI development in California was vetoed by the governor after Silicon Valley got incensed.  There is no hope for such legislation on the federal level.

The perennial philosophy was defined by Aldous Huxley and others as “a school of thought in philosophy and spirituality which posits that the recurrence of common themes across world religions illuminates universal truths about the nature of reality, humanity, ethics, and consciousness.”  In other words, a search for a commonality in proven human thought, faith, and ethics which could give us clues on the way to live better individually or communally.  But no one thought we could outsource that search, or google it, or that the answer would be a complex algorithm.

Computers are yes/no machines.  In the words of E. F. Schumacher, the real questions of life are divergent rather than convergent problems. Designing a diesel engine is a convergent problem; scientists can work on it and eventually arrive at an answer. AI could do this. How to use such an engine for the benefit of society (i.e., transportation of goods vs. preservation of the environment) is a divergent problem.  Adolescents often think all problems are convergent, and that they know the solutions.  Most old people know that the important questions are divergent and can (and should) be grappled with, but cannot be “solved.” Schumacher reminds us that, again, it is the doing that matters: “Divergent problems, as it were, force us to strain ourselves to a level above ourselves.”

I appreciate the good that technology has done for us – many of us, including myself, would not still be here without advances in medicine, education, transportation.  But let’s not give away our hearts (“a sordid boon”) – or our lives.  Science may have given us increased longevity, but, as Mary Oliver asks, “What is it you plan to do / with your one wild and precious life?”  Show me the answer to that question in an algorithm.

Old People and Artificial Intelligence

I have just finished reading Stacey Abrams’ new mystery, Rogue Justice; among other cultural and political trends, she tackles Artificial Intelligence (AI) – and she has made me ponder what it means when reality starts to warp on us.  In Abrams’ novel, people are threatened with videos of themselves saying and doing things they never did – created images which pass all reality tests.  The targeted people actually start to question their own memories.  Is this what AI will do to us – make us doubt our very sense of reality?  It occurred to me that old folks (maybe all folks) have their own experiences with bent realities.

There is, of course, dementia.  When my mother called me to tell me that little boys had ransacked her apartment overnight and put her milk in the freezer, she was sure it was true.  In fact, she was indignant when I suggested that perhaps she absent-mindedly had put the milk in the freezer herself.  So we know the mind is capable of creating realities that are not real, not true.

And alternate realities are not just a problem for oldsters.  I can remember trying to convince my young son that the monster he saw in his nightmare was not hiding in the house somewhere.  We all know those moments after a bad dream when we have to convince ourselves the nightmare is not true – we didn’t really miss that train or that exam, we aren’t naked at a podium with nothing to say.  Our brains are capable of fooling us.  And such delusions scare us in more than one way: we are twice scared – once in the imaginings, and a second time in the realization that our own brains could do such things to us.

Then there is the more malicious process of gaslighting, where we are convinced by someone (or something) else that what we thought was true was wrong.  Mean teenagers and abusive spouses practice it, and we have all been the victim of this at one time or another.  It is just another example that our grasp of reality is not absolute.  There is an even more subtle form of gaslighting when the myth of a happy family is superimposed upon a family or situation that was anything but happy – something that can happen in real time or in retrospect.

And then there are false memories or suppressed memories.  We all have them.  Who hasn’t been around the table with relatives telling stories about old times and found out that there are as many versions of past reality as there are individuals?  “That’s not what happened…” responds my brother.  Sometimes there is a way to prove or disprove battling conceptions; more often we have to accept that realities bend in the process of becoming history.

Freud had a lot to say about human limitations in the face of unpleasant realities – he posited, of course, that bad memories were often suppressed and said that none of us really believed in the reality of our own mortality: “No one believes in his own death. In the unconscious everyone is convinced of his own immortality.”  La Rochefoucauld said, “You cannot stare straight into the face of the sun, or death.”  (I might recommend here Irvin Yalom’s wonderful book about death, Staring at the Sun: Overcoming the Fear of Death.)

Of course, we sometimes accept artificial realities on a temporary basis.  Coleridge used the term “suspension of disbelief” for what we do while reading a novel or watching an engrossing movie.  For the moment, we let ourselves believe what we know is not true – which is why we jump if someone taps us on the shoulder during a horror movie or cry over the death of an actor who assuredly has not died in real life.  But, as in dreams, we can pause, recalibrate, and know the difference.

But now, here comes the latest version of artificial reality, in which we will not always be able to tell the difference.  AI is a form of gaslighting, in that it is being done to us – and usually not for our own good.  We know we can be tricked, but we don’t know when we are being tricked.   What do we do – disbelieve everything?  Surely, this would be no way to live our lives.

We must remind ourselves that we all use AI every day – to remind us to take our pills or go to appointments, to spellcheck our messages or documents, to verify prices on an item or hotel room, to get directions.  But, in those cases, we know what is happening. We know that the calm voice giving us directions is not real, nor is Siri our friend.  But my point is that we all have accepted AI to one degree or another – and this makes it even more difficult to draw the line.

And let’s not kid ourselves that we will be able to tell the difference between AI and reality.  We won’t.  But in our latter years, we have (hopefully) had enough experience to know that things can seem to be true that are not. We are going to have to trust ourselves (resist gaslighting in all forms) and arm our minds with a healthy skepticism, especially as to what seems too good or too bad to be true. We need to verify our sources.  It is bad enough when it is our own minds playing tricks on us, but it is even worse when something outside is orchestrating an alternate reality.  Am I worried?  Yes.  Am I scared?  Yes.  And we all should be.

What Do Old Women Lust For?

The first story in Jane Campbell’s Cat Brushing opens in the mind of an elderly woman taking herself to task for her sexual fantasies: “The lust of an old man is disgusting, but the lust of an old woman is worse.”  Campbell then goes on to give us thirteen captivating stories of love and lust and loneliness and old age.  Her book quickly disproves her opening statement; there are many kinds of lust (including that for the sensual satisfactions of simply brushing your cat) which make life richer and are far from disgusting.  We don’t often talk about our physical needs or lusts (which may not be the same thing), but Campbell’s old ladies pull no punches.

Jane Campbell was born in 1942 and published her first book eighty years later – but this is not the tentative voice of a literary newcomer.  This is an author who knows exactly what she wants to say (perhaps having had many years to think about it).  Her language sometimes brings blushes, but it always has the ring of honesty.  The stories range from infatuations of old women with their caregivers to complex relationships with Artificial Intelligence (AI) “creations” (one good, one bad – and I’ll come back to that) to simple stories of attachment to friends, place, animals.  Death hovers throughout, but few of these ladies die in the stories, at least physically.  Most of the characters make good (if hard) choices, but even when bad choices occur (one old lady runs away from a tyrannical handicapped sister, only to end up with a murderous husband), we are not so sure that the women have any regrets.   Let me start with the AI stories.

In “Lockdown Fantasms,” short Covid lockdowns have eventually merged into a continual isolation for the old; old people who live alone must make do with limited assistance, but they are allowed weekly visits from fantasms, which seem to be AI-generated beings who give the old people companionship of whatever kind they wish – the fantasms will even have sex with you, watch a movie with you, cook for you.  At each weekly visit (they are not allowed more often) their form is different; one cannot get attached.  But for the narrator in the story, the fantasm is something to look forward to and prevents her from using the morphine/tranquillizer combo that the government provides in case a hologram once a week is not enough.  Fantasms are tweaked to respond to the needs of the recipient, but are only provided to those elders who live alone in pandemic-approved isolation.  It is an incentive to remain isolated.

In another story, “Schopenhauer and I,” the form of AI is a personal care robot supplied to a resident of an assisted living home after her beloved dog, Hobbes, disappeared.  The old lady is sure that the institution she lives in did away with her dog because he was too much trouble.  In turn, they have forced on her a mandatory robot – which she names Schopenhauer, but casually calls Arthur, and hates with the same passion with which she loved Hobbes.  Arthur monitors her 24/7, making life easier for the staff.  I won’t give away the ending, but it is a tale of retaliation, and vengeance by an old lady is not pretty.  These two stories about AI do point out, however, that humanity can approach and utilize robots/holograms in many ways, but the choices relating to old people seem to be aimed at comfort and control – or, perhaps, at control through comfort.

In Chaucer’s Wife of Bath’s Tale, the lusty old bawd from 14th-century Bath tells us that what women most want is sovereyntee, sovereignty or control over their own lives.  In days when women were chattel, this was a way of looking for some measure of equality.  But companionship and sexual partners are also of great interest to the old Alyson; she “koude of that art the olde daunce” (she knew the art of the old dance).  Thus, the sexual life of old women might not be a frequent literary topic, but it is not new.  It has deep roots.  And, incidentally, Chaucer’s last poem, which he wrote when he was fifty-nine, was “The Complaint of Chaucer to His Purse,” which contains sly allusions to sex in old age.

Many of the stories in Cat Brushing dwell on the fantasies (and not just sexual ones) which keep us alive (and not just breathing).  The final loss of such fantasies is never easy and sometimes kills us.  Most of Campbell’s women come to terms with reality and adapt – but not all.  Most of their fantasies involve relationships of various kinds and seem to evolve out of loneliness. What do we do for contact when we are old and alone?  Even when the women in Campbell’s book find companionship, they know it may be temporary.  Their partner, friend, pet, may die before they do.  But for today they have a cat to brush, and the memories of touch and ecstasy.

I find that I don’t write many (any) dirty old lady stories; I am a New Englander born and bred and we don’t talk about such things.  We try not to even think about them – more’s the pity.  This is the value of Campbell’s book.  However, my short story, “Snickerdoodles,” was inspired by The Wife of Bath’s Tale.

Crowing Cocks, Barking Dogs, and Artificial Intelligence

I recently read Jeanette Winterson’s book on artificial intelligence (AI), 12 Bytes: How AI Will Change the Way We Live and Love.  Winterson believes that comprehensive AI is inevitable (surely she is correct in this), but that the perfect “AI Mind” could be structured to be free of bias, prejudice, and illicit or mercenary purpose.  This beneficent intelligence could replace God for us as the “all powerful” solution – or so she hopes.  Winterson produces little evidence that it is going in that direction – mostly she just scares me and makes me glad I am at the end of life, rather than the beginning.

As has often been noted, technology, in itself, is amoral, leaving it open to good uses and atrocious uses.  But it will be used.  John von Neumann warned us decades ago: Technological possibilities are irresistible to man. If man can go to the moon, he will. If he can control the climate, he will.  It is true that we have the atom bomb and have never used it since Hiroshima and Nagasaki – but that is a technology with obvious risks, while AI is much more subtle.  And seductive.

Winterson recommended Shoshana Zuboff’s The Age of Surveillance Capitalism, which I am currently reading.  The question again is whether we control the technology or it controls us.  Zuboff tells us that “surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data.”  And with the behavioral data, surveillance capitalists (think Google) can predict and manipulate our behavior – think of Skinner (ugh).  I am not happy with the thought of becoming “raw material” – it was bad enough when we were just “markets.”  Zuboff posits that we all have an “unbearable yearning” for the old world that is slipping away, and she gives us a Portuguese word for homesickness and longing to capture the feeling: saudade.  I have saudade – I imagine all old folks have it.  I have saudade for the way life used to be, and I have it increasingly as we race further and further from the world I grew up in – that imagined Eden.

The question that keeps being posed is whether technology can be slowed down or redirected.  As far as civilization and culture go, technology seems to be a juggernaut.  No one seems to be willing or able to stop it.  But can an individual step aside?  Not easily, of course.  There is still the need to interface with the computer to make travel reservations, with AI to get through to my doctor, with e-mail to keep in touch with children who seem to have forgotten that the postal service exists.  But can we carve out a place where we, at least, do not feel assaulted?  Our virtual Walden, where we are not checking for messages or responding to beeps all day long?  Winterson herself has written forcefully about the importance of asking the question ‘How shall I live?’ and describes that question as “fierce.”  It is.

The premise that we do not have to use all the technology that is invented and marketed sounds self-evident, but it is not that easy.  Like Swift’s ancient Struldbruggs, we soon feel like we are not speaking the same language as those around us.  What is the answer?

The answer, for me, is that I do not speak the same language anyway.  And in my more pessimistic moments I think of another quote from von Neumann’s discussion of how humans will use the technology at their disposal: It is just as foolish to complain that people are selfish and treacherous as it is to complain that the magnetic field does not increase unless the electric field has a curl. Both are laws of nature.

And yet, I still have hope.  There is the model of the Tao.  I post the eightieth section of the Tao Te Ching here (“Crowing Cocks and Barking Dogs”).  Written two and a half millennia ago, it addresses technology, over-population, peace:

A small country has fewer people.

Though there are machines that can work ten to a hundred times faster

     than man, they are not needed.

The people take death seriously and do not travel far.

Though they have boats and carriages, no one uses them.

Though they have armor and weapons, no one displays them.

Men return to the knotting of rope in place of writing.

Their food is plain and good, their clothes fine but simple,

     their homes secure;

They are happy in their ways.

Though they live within sight of their neighbors,

And crowing cocks and barking dogs are heard across the way,

Yet they leave each other in peace while they grow old and die.

One is reminded of some fictional utopias – notably those of William Morris and Samuel Butler – where technology is suspect and carefully controlled.  In Butler’s Erewhon, society destroyed all machines invented within the 271 years before its revolution, drawing the technological line there.  The Amish sometimes use newer technology (like phones) for business, but not for other parts of their lives.  Why does it seem so difficult to do this in our own lives, especially since older people do not have to face the demands of a job or career?  At least we might disregard the machines that “are not needed,” the absence of which might contribute to our peace as we “grow old and die.”  I know, easier said than done.  Any assistance in where and how to draw the line would be greatly appreciated by this old lady.

If you would like to look at a piece of my fiction that considers the challenges of technology to life, you might try “Two New Apps.”