The following first six paragraphs are an excerpt from an online article published in March 2017 by SCIENCENODE.
Milena Tsvetkova, a sociologist at the London School of Economics and Political Science, is primarily interested in human
interaction: with other humans or with computers. “I was initially
resistant to the thought that computer programs could show interesting
social behavior,” says Tsvetkova. “But the data proved me wrong!”
The number of bots in online systems is increasing quickly. They’re
currently used to collect information, moderate forums, generate
content, and provide customer service, as well as disseminate spam and
spread fake news.
“Even if bots are not designed to interact, they find themselves in
systems with other bots and interaction is inevitable,” says Tsvetkova.
The researchers found that the same handful of bots are responsible
for most of the ‘arguments’ with other bots. Conflicts between bots tend
to occur at a slower rate and over a longer period than conflicts
between human editors.
“Interaction leads to unexpected results, even when we design for
it,” says Tsvetkova. “Bots' presence in and influence on our lives will
be increasing, and if we want to understand society, we need to
understand how we interact with these artificial agents too.”
Much of the scientific and popular discussion about artificial
intelligence has been about psychology: whether AI is able to think or
feel the way we do, says Tsvetkova. “But now it’s time to discuss how
artificial agents are expected to interact with us.”
When studying human-human interaction, social scientists often model
individuals as ‘bots’ that follow simple rules when they meet other
agents. These modeled interactions can lead to complex patterns at the
group level, patterns that none of the individuals intended.
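The article's example of bots falling into long-running 'arguments' can be illustrated with a toy simulation. This is only a minimal sketch of the idea, not the researchers' actual model: two hypothetical bots each "prefer" a different version of a shared page, and each simply reverts the page to its preferred version whenever it notices a difference. Neither bot is designed to fight, yet conflict emerges from the interaction of two simple rules.

```python
import random

def simulate_bot_conflict(steps=20, check_prob=0.5, seed=0):
    """Toy model of two editing bots sharing one page.

    Each step, each bot independently wakes up with probability
    `check_prob`; if the page is not its preferred version, it
    reverts the page. Returns the total number of reverts.
    """
    rng = random.Random(seed)
    page = "A"      # current version of the shared page
    reverts = 0
    for _ in range(steps):
        for bot_pref in ("A", "B"):
            # each bot checks the page and reverts if it disagrees
            if rng.random() < check_prob and page != bot_pref:
                page = bot_pref
                reverts += 1
    return reverts
```

Running this, the page never settles: the revert count keeps growing as long as both bots stay active, which mirrors the article's point that bot-versus-bot conflicts can drag on far longer than human edit wars.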
It's not that bots are so different from us socially; that is what we are
finding out. The question to consider is whether or not that is dangerous.
Conflicts exist among humans, and now we are finding out that they exist
among bots as well.
Could we see a war between machines? That would
be something right out of the movie "The Matrix", wouldn't it? Well,
that's exactly what is concluded at the end of the article put out by
SCIENCENODE (see the online source below).
We all like to
think, largely from watching science fiction movies, that all AI
technology will be benevolent, i.e. caring and compassionate. But that
is not true, as the article posits, and I concur. The main reason is that
we are in a fallen world. What does that mean? It means that this
'world' was, and is, corrupted. It's entropic.
The definition of entropic is having a tendency to change from a state of order to a state of disorder. The entropy of the universe is increasing; therefore,
in the future it will be higher, and in the past it was lower, by definition.
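The order-to-disorder idea can be made concrete with a toy calculation. This uses Shannon entropy, the information-theoretic analogue of thermodynamic entropy, purely as an illustration: a system concentrated in one state has zero entropy, while a system spread evenly across many states has maximum entropy.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

ordered = [1.0, 0.0, 0.0, 0.0]         # everything in one state: perfect order
disordered = [0.25, 0.25, 0.25, 0.25]  # spread evenly across four states

# shannon_entropy(ordered) → 0.0 bits (no disorder)
# shannon_entropy(disordered) → 2.0 bits (maximum disorder for four states)
```

Moving from the first distribution toward the second is exactly a change "from a state of order to a state of disorder," measured as rising entropy.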
We
see this truth even by observing humanity, whose very concept is found in
the social imagination. Today, we no longer want to recognize that
genders were solid and stable, which was good for society; now, there are
over 50 different kinds. We once saw national boundaries as having
meaning, and cultural data as something unique occurring only in a certain
place, something to be studied and protected.
Today,
we see ideas such as freedom, justice, wisdom, and respect being twisted.
The titles bestowed upon people of position, such as 'president' or
'pope' for example, reflect this. Parents were once a man and a woman
living together, raising their children, teaching them the basics of
life ... and it was called common sense.
We are
definitely moving from a state of order to a state of disorder in the
social imagination. But that is just my educated opinion in my entropic
social imagination; and likely the state of the social life, the social
imagination, of a bot... lest I say more.
*Online Source ~ SCIENCENODE March 17, 2017. [https://sciencenode.org/feature/the-social-life-of-bots.php]