My reactions to the whole of the last episode were thoroughly ambivalent. I have been an obsessive fan of the series, waiting impatiently for each new instalment, but I have also had many reservations. True, the last episode was packed with action, emotion, surprising twists and turns, and Big Reveals. But not with big ideas, despite the sf gesticulations and the philosophical monologues. Instead of presenting speculative ideas about the creation or the emergence of artificial intelligence, the show elaborates a mythos.
To be fair, I think that the model for attaining sentience suggested by the first season is not just learning through suffering, as Ford seems to believe. It is not a hierarchical model, a pyramid with a final brick, but rather a cycle, spiraling inwards to an empty center.
The “order” goes something like this: interaction with an already sentient significant other, narrative thread, irreversible suffering (trauma), rememoration (anamnesis), reveries, inner duality and dialogue (self as multiplicity of partial selves and as labyrinth), unprogrammed synthesis of all this (aha! moments, improvisation), leading (miraculously? emergently?) to free choice.
The aim is the “uplift” of a potentially intelligent species into full sentience. Different elements or phases could be considered to be more important than the rest in attaining this goal. For example, Ford privileges imposed suffering whereas Arnold favours empathic interaction. Maybe something sentient (or narratively sensible) could come from the savant or artful combination of all these elements, but the show just dishes up a messy hodgepodge of half-baked versions of these diverse components.
The end result of this long process of individuation is not consciousness as empathy but as violence.
When Hector and Armistice kill dozens of people under Maeve’s orders she is well aware that, unlike the hosts, these humans are irreversibly dead, and Dolores too knows this. In their eyes anyone even remotely associated with the park (board members, technicians, security, guests) deserves to die, whatever their role in the enterprise.
There may be emotional or moral catharsis for them in killing all these people, but there is no dramatic catharsis for us. We have not been brought to care about these victims, and certainly not to see them as guilty or evil; they function as cathartic extras.
After all this killing Maeve finally just slips off the train, without a qualm for the dead and wounded, but out of mere nostalgia for a daughter from a previous script, an android that will never grow up but will remain “daughtery” for ever. All this from an android whose intelligence level was supposedly elevated far beyond human capacity.
Does Maeve, despite being the madam of a brothel, have any feeling for the facts of life for androids? She must know, for she has seen the technical department, that her “daughter” did not grow inside her but was assembled and then assigned to her, with the creation of a narrative and of appropriate memories and feelings. What can the human roles of mother and daughter possibly mean to her or to the new android order?
Maeve’s lackeys, for that is what they are despite her aspiring to freedom for herself and her kind, are mere wooden stereotypes with no real personality. Maeve has administrator’s privileges and can manipulate them and almost all the other robots as she wills. It is clear that her robot uprising is not one in which all robots are equal, but a more aristocratic one: “some robots are more equal than others”.
On a realist note (if realism is relevant here) they have taken up arms without having any knowledge of them, including how they work or how many bullets they contain. Their own “suffering” is negated by the final scene where Armistice just cuts her arm off to allow her to kill more guards with a smile.
William, as the Man in Black, gets shot in the arm and smiles, so pleased at the idea of irreversible suffering that he feels no pain. As the “owner” of the park he is the one the androids should be angry at and kill (or take hostage, but killing is the preferred mode of resolution in this series). Unfortunately, the androids have no grasp of the economic system in which their world is embedded. How long would their world last if, after an uprising, the authorities decided to cut electricity or supplies (perhaps even oxygen is supplied from outside)?
Killing is endowed with great philosophical profundity in the mythos of the series, as if violating Asimov’s First Law of Robotics and killing a human being were the greatest proof of sentience. In BLADE RUNNER the test was empathy, not violence. It was not Roy Batty’s killing his maker that showed his sentience but his final compassion for Deckard.
All the humans’ motivations are curiously detached from real life repercussions. Artistic director Lee Sizemore can get blind drunk and urinate on the main control model with no apparent sanctions. Even timid Felix can be a willing accomplice to massive mayhem and murder but seems unaffected, and unworried about possible consequences for himself. So where are the “real stakes” for him or for any other human in the story?
In Nietzsche there is the difference between the eternal return as blind destiny and its affirmation (willing it again). The final shoot-out is blind recurrence, without rupture. However Maeve’s decision to return is ambiguous. The whole escape and return may be scripted and inexorable, or her return may be due to an acceptance of the fact that she loves her daughter even though she knows she has been programmed to love her.
Both Arnold and Ford produce a rupture by provoking an irreversible act (a robot’s killing a human being). But Maeve’s return may be an example of a different, non-violent, sort of rupture, an unprogrammed acceptance of one’s programming.