While watching the “Black Mirror” episode “White Christmas” (it’s on Netflix), I came across an interesting idea: a conscious A.I. extracted from a human mind. It reflected the mind of the person it was extracted from (in order to properly work for the buyer). The A.I. at first refuses to work, thinking she’s been displaced from her own body, and is enraged until the situation is explained to her: she’s in a device called a “Cookie.” She still refuses. The main character of the episode decides to simulate thirty days of her doing nothing. She pleads with him not to do it again but still refuses to work, so…he simulates a whole six months (in the span of a couple of actual minutes), though he “doesn’t want to break her.” This had me wondering whether, because it’s a human consciousness, it…er…she (?) would be human in some respect. She’s capable of learning, adaptation, emotion and, sadly, of being mentally tortured. If we were to create anything in this mold, what parameters should we set?
With such an A.I., because it’s a human consciousness, such a marvel of technology should have certain guidelines set upon it. Right? It’s conscious, capable of articulation, learning and adapting; it’s thereby on our level, just without the body. Right? I ask because there’s no right or wrong answer, and while fictitious today, one day it could be a reality, and it’s a morally grey question in regard to where our technology is going. Our answers to the questions posed reflect upon us as individuals and as a society. Is it just an A.I. that happens to be aware and have a human consciousness? Or is it more, and does it deserve rights to an extent that we deem “humane”?
If we agree it deserves some humanistic rights, would forcing it to undergo the simulation of thirty days, or six months, be torturous? By the first interpretation (it’s just an A.I.), it wouldn’t be torture despite its being conscious. By the second (it’s something more), it’s inhumane to treat it as such, because the conditions imposed are torture. Are we, or are we not, empathetic to it? Would it be slavery? In my opinion, if we reached that point in technology, we’d have gone too far in that respect and shouldn’t continue down that road (not due to a dislike or fear of A.I., purely due to a human consciousness being in play). Yet if implemented carefully…is it still too far?
We continue to go forward. We always have. What point is too far? As I asked, if it were implemented “correctly,” is it still too far? Honestly, I don’t believe it can be implemented correctly, because there’ll be people who abuse it (this is why we can’t have nice things). I trust people easily, but not to treat technology in that manner. Yes, I’m empathetic to it.
As for me, I believe it’d be too far, and that we must be wary of what we make in the future. We should consider our stance on it because it’s not black and white and could become a very real issue one day in our lifetimes. There’s no proper answer, just opinion. Finally, I recommend watching the “Black Mirror” episode (it’s a special) so you can gain a better understanding of what I mean. Besides, it’s admittedly a pretty great show.