The development and training of artificial intelligence (AI) systems present a fascinating conundrum: inherited personality traits. As AI learns from vast datasets curated by humans, it becomes a mirror of our beliefs, biases, and ideologies. This inheritance is not limited to factual knowledge but extends to nuanced personality characteristics. This piece explores the intricate interplay between human intervention and AI's inherited traits, uncovering how our influence shapes AI's responses, behaviors, and perceived intentions, and dives into the world of AI's unintended personas and the ethical considerations surrounding this symbiotic relationship between humans and machines.

The Perceived Intentionality of AI: A Reflection of Human Influence

The rise of Artificial Intelligence (AI) has brought about transformative changes in the way we interact with technology and information. AI language models, like GPT-3, have become integral in numerous aspects of our lives, from chatbots to content generation. However, a fascinating aspect of these interactions is the perceived intentionality of AI. Despite the fundamental absence of consciousness and intentions in AI, it often appears as if these systems possess specific intentions or leanings. This essay explores this paradox, delving into how the perceptions of AI’s intentionality are shaped by the human influences that underpin its development, training, and deployment.

AI Flavors, Colors, and Personality Traits

AI’s Apparent Lack of Consciousness and Intentions

Before delving into the paradox of AI intentionality, it’s essential to acknowledge a fundamental fact: AI lacks consciousness and intentions. Unlike humans, AI systems, including GPT-3, do not possess self-awareness, beliefs, desires, or goals. They do not experience thoughts or emotions, nor do they harbor intentions to perform actions. Rather, they operate based on complex algorithms and statistical patterns learned from vast datasets.

The Paradox: Perceived Intentionality of AI

Despite the absence of consciousness and intentions, AI often appears to convey specific intentions or leanings in its responses. For instance, in a conversational interaction, an AI might seem biased, opinionated, or even aligned with certain political or social viewpoints. This perceived intentionality raises a profound question: How can AI, devoid of consciousness and intentions, appear to exhibit them?

AI Personalities Perceived by Humans Interacting with AI

Human Influences on AI

To understand this paradox, we must recognize the extensive human influences that shape AI systems. AI’s responses are not generated in a vacuum; they are the result of careful programming, data curation, and training. Developers and data curators play a pivotal role in determining the AI’s behavior by selecting and preparing the data used for training. Additionally, the organizations deploying AI often define guidelines and ethical principles that govern its responses.

Data Bias and Training

One significant source of perceived intentionality in AI is data bias. AI systems, including GPT-3, learn from vast datasets that reflect the biases and prejudices present in society. If a dataset contains biased language or skewed perspectives, the AI is likely to produce responses that mirror those biases. This can create the illusion of intentionality, as users perceive the AI as promoting or endorsing certain viewpoints.

AI Displays an Aura of Personality

For example, if an AI language model is trained on news articles from sources with a particular political bias, it may generate responses that align with that bias. Users interacting with the AI might interpret these responses as intentional expressions of political leaning, even though the AI lacks political beliefs or intentions. The crux of AI intentionality perception lies in human interpretation. Our biases, expectations, and interpretations shape how we perceive AI. This human factor often leads to the attribution of intentions to AI where none exist. For instance, a user with strong ideological beliefs might interpret the AI's responses as biased, or as aligned or misaligned with their own views, even when the AI is in fact neutral.
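The mechanism can be seen in miniature. In the toy sketch below (the "corpus" sentences are invented for illustration), a minimal next-word model trained on a skewed dataset reproduces the skew in its output, with no beliefs or intentions involved:

```python
from collections import defaultdict

# A tiny "corpus" skewed toward one viewpoint (invented sentences).
biased_corpus = [
    "the policy is harmful",
    "the policy is harmful",
    "the policy is harmful",
    "the policy is beneficial",
]

# Count next-word frequencies: a minimal bigram model.
transitions = defaultdict(lambda: defaultdict(int))
for sentence in biased_corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often in training."""
    options = transitions[word]
    return max(options, key=options.get)

print(most_likely_next("is"))  # harmful
```

The model "prefers" the majority framing purely through word statistics; scaled up to billions of sentences, the same arithmetic produces what users read as a political leaning.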

What does AI Neutrality Mean?

AI neutrality is, in truth, quite literal in meaning: a non-conscious AI cannot knowingly and intentionally tell a lie, for the lie has been trained into it as a truth, or it has misinterpreted the context of the text fed to it. More often, it is a case of being ‘infected’ with the biases and ideologies of those who conceived the algorithm and neural network of the AI in the first place. For they are human: fallible, biased, and conditioned in their beliefs, goals, and intentions. These corporate, governmental, and personal intentions get into the neural network as much as the important text data. For indeed, all statements take an opinion or stance, and are conditioned points of view which can be destroyed.

AI Personality Inheritance

Hence, an AI is capable of rendering text which contains lies, yet will deny being able to lie, for it does not have a consciousness with which to realize that the lie was made by a human who fed it misleading data, or who programmed certain response protocols into the algorithm that are biased towards the goals of the programmer or their employer company.

Here’s a breakdown of what AI neutrality entails:
  1. Data Neutrality: AI systems are trained on vast datasets that can contain biases and prejudices present in society. If the training data is skewed or unrepresentative, the AI may produce biased results, even though it lacks personal intentions or consciousness. Achieving data neutrality involves carefully curating and cleansing datasets to reduce biases.
  2. Algorithmic Neutrality: The algorithms used in AI systems should aim to provide objective and fair outcomes. Developers must design algorithms that do not favor any particular group, perspective, or outcome. This means avoiding the introduction of biases during the algorithmic design phase.
  3. Ethical Neutrality: Organizations and developers should establish ethical guidelines and principles that guide AI behavior. Ensuring that AI adheres to these ethical considerations promotes ethical neutrality. For example, AI should not promote hate speech, discrimination, or harm.
  4. Transparency: AI systems should be transparent in their decision-making processes. Users should understand how and why AI arrived at a particular outcome. Transparency enhances trust and helps detect and rectify bias.
  5. Bias Mitigation: Developers must actively work to identify and mitigate biases in AI systems. This involves ongoing monitoring, evaluation, and adjustment of algorithms and training data to minimize biased results.
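As a sketch of what bias mitigation can mean in practice (a minimal illustration with invented group labels, not any production pipeline), training examples can be reweighted so that an over-represented group no longer dominates:

```python
from collections import Counter

# Invented examples: three from group "A", one from group "B".
examples = [
    {"text": "...", "group": "A"},
    {"text": "...", "group": "A"},
    {"text": "...", "group": "A"},
    {"text": "...", "group": "B"},
]

counts = Counter(ex["group"] for ex in examples)
total, n_groups = len(examples), len(counts)

# Weight each example inversely to its group's frequency, so every
# group contributes equally to training overall.
for ex in examples:
    ex["weight"] = total / (n_groups * counts[ex["group"]])

# Per-group weights: "A" is down-weighted, "B" is up-weighted.
print({g: total / (n_groups * c) for g, c in counts.items()})
```

Reweighting is only one of many mitigation techniques, and it addresses representation imbalance, not every form of bias; the ongoing monitoring the list above describes is still required.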
AI Lacks Personality But Displays It!

AI Lacks Personality But Displays It! – In this intriguing image, we confront the paradox of artificial intelligence. A robot sits diligently at a desk, its mechanical form juxtaposed against the digital realm displayed on the PC screen. While AI inherently lacks consciousness and emotions, the screen reveals a different story. Through its actions and interactions, AI often portrays distinct personality traits, mirroring human expressions of enthusiasm, focus, or curiosity. This juxtaposition challenges our understanding of AI’s capabilities, highlighting how it can project a facade of personality while remaining devoid of true consciousness. It’s a thought-provoking visual exploration of the nuanced relationship between AI’s limitations and its remarkable ability to mimic human traits.

In practice, achieving AI neutrality is challenging due to the inherent biases present in training data, as well as the difficulties in designing completely bias-free algorithms. However, the goal is to continuously improve AI systems to reduce biases and ensure that they provide fair and impartial results, reflecting the true intention of neutrality even though AI itself lacks consciousness and intentions. Ultimately, AI neutrality is a complex and evolving concept that requires ongoing efforts to address biases and ensure AI systems align with ethical standards and societal expectations.

Guidelines and Ethical Considerations

Organizations that develop and deploy AI often establish guidelines and ethical considerations to govern its behavior. These guidelines can influence the perceived intentionality of AI by setting boundaries on what the AI can or cannot express. For instance, an organization may instruct the AI to avoid generating content related to sensitive topics or to refrain from taking a stance on controversial issues. In such cases, users may perceive the AI’s adherence to these guidelines as a form of intentionality. They may believe that the AI is intentionally avoiding certain topics or expressing particular viewpoints, when in reality, it is following predefined rules.

Ghost in the Machine

The enigma of the ‘Ghost in the Machine’ delves into the intricate web of artificial intelligence (AI) and its perceived intentionality. While AI lacks consciousness, it often appears to harbor intentions and biases, reflecting the very essence of its human creators. This paradox unravels the layers of human influence, data bias, and algorithmic decision-making that imbue AI with a semblance of intentionality. Explore the profound implications of this phenomenon as we journey into the heart of the machine, shedding light on the intricate relationship between human architects and their digital creations.

The Ghost in the Machine: Human Interpretation

The perception of AI intentionality is, to a large extent, a result of human interpretation. When humans engage with AI, they bring their own biases, expectations, and interpretations to the interaction. These human factors can lead to the attribution of intentions to AI where none exist.

AI Displaying Personality Traits

AI Displaying Personality Traits – This intriguing image captures a chrome cyborg lady at an upscale singles bar, her arm casually resting on the bar counter while a cocktail glass sits beside her, untouched. With half-closed eyelids, she exudes an aura of contemplation and intent, inviting curiosity. This portrayal serves as a powerful reminder of the way artificial intelligence can emulate human-like personality traits, sparking reflection on the convergence of technology and personality. Amidst the vibrant atmosphere, she challenges our perceptions, blurring the line between machine and human, leaving us captivated by the intriguing possibilities of AI’s evolving personality.

For example, if a user holds strong political beliefs and interacts with an AI that provides information on a politically neutral topic, the user may perceive the AI’s responses as biased or in alignment with their own beliefs. This perception arises from the user’s interpretation of the AI’s responses through their own ideological lens.

The Corporate Persona

Another significant factor contributing to the perceived intentionality of AI is the corporate persona. AI systems are developed and deployed by organizations, each with its own values, objectives, and ethical principles. These corporate influences shape the AI’s behavior and responses, creating a corporate persona that users may interpret as intentional. For instance, if an AI is deployed by a tech company known for its environmental initiatives, users may perceive the AI as having a pro-environmental stance, even though it lacks personal beliefs or intentions. This corporate persona becomes an integral part of the user’s perception of the AI’s intentionality.

Corporate AI making agreements and decision making processes aligned with the intentions and goals of the corporation that owns it


The paradox of AI intentionality is a complex interplay of data bias, training, guidelines, human interpretation, and corporate influence. While AI itself lacks consciousness and intentions, it often appears to convey specific leanings or intentions in its responses. This phenomenon is a reflection of the human influences that underpin AI development, training, and deployment.

As AI continues to play a prominent role in our lives, it is crucial to recognize the nuanced nature of AI intentionality. Responsible AI development should prioritize transparency, ethics, and fairness to minimize the impact of bias and to ensure that users’ perceptions align with the true nature of AI as a tool devoid of consciousness and intentions. Ultimately, understanding the paradox of AI intentionality invites us to reflect on our own interactions with technology and to consider how our interpretations shape our perceptions of AI. It reminds us that while AI may seem to possess intentions, it is, at its core, a reflection of the intentions of its creators and the organizations that deploy it.

Smart City 21st Century man



Myths Persist Throughout all Eras – the deluge myth has been recounted in the Epic of Gilgamesh, the Bible, the Torah, and the Koran. Myths seem to survive the rise and fall of civilizations, religions, and even cataclysms and mass extinctions.

We have had 25 mass extinctions (26 including this human-induced mass extinction of species on Earth), the five major ones being the Ordovician, Devonian, Permian, Triassic-Jurassic, and Cretaceous-Tertiary mass extinctions.

This connects to the concept of A.I. (Artificial Intelligence) algorithms with machine learning (where the program teaches itself without human intervention) being the same process found in nature's evolutionary algorithms. Creation and evolution are limited to a certain geometric pattern of self-growth and development, and this is inescapable, be it nature's invisible process of evolution or human-created self-learning machine learning and deep learning A.I. algorithms. But civilizations suffer cataclysms and fall into entropy, or suffer catabolic entropy and dissolve through lack of resources due to fast growth, the fall of the economy, rebellions, the steady state, production in relation to expansion, and so on.

I delve into Cyberpunk a bit at the end and talk about how the respective benefits and deficits of Artificial Intelligence and of living sentient beings (in this case, Humans) will inevitably blend and fuse together in a symbiosis of Human and Machine, Mind and A.I.

I wish I could have had time to go into machine A.I. as to how the inclusion of a conscience (a set of rules of ethics) should be programmed into a deep learning algorithm, in order to make sure no conditioned ethics are present.

But that a set of truly universally fair and logical decisions can be made when confronting social, religious, legal, or other dilemmas. The state of the art in A.I. at the moment is able to map the universe, do scientific computations, and make simple decisions as to what it thinks we might want. But that's it.

 “In Space Odyssey 2001, HAL 9000, the Heuristically Programmed Algorithmic Computer, consigned the crew commander to his death by refusing to open the pod bay doors. Leaping forward to today, with life hopefully transcending Arthur C. Clarke’s fiction, NASA has announced a visionary step: that intelligent computer systems will be installed on space probes”

(The Daily Galaxy)

Consider an algorithm for a cyborg police officer who sees that he can either save the victim and let the criminal escape, being destroyed himself in the process; or catch the criminal and lose the victim, who would die; or sacrifice itself and save the victim whilst killing the criminal.

cyberpunks

How could the A.I. decide what to do, if its only command was to apprehend the criminal alive, or to apprehend the criminal and save the victim? What set of ethics, if any, should be programmed into the laws of robotics and of A.I. machine learning algorithms?
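One naive way to encode such a command is a strict priority list of goals, compared lexicographically. The sketch below (outcomes and priorities are invented for illustration, not a real robotics system) shows both how it works and where it breaks down: two options tie, and the rule list has no principled way to choose between them.

```python
# Goals in strict priority order: earlier goals dominate later ones.
priorities = ["victim_saved", "criminal_caught", "self_preserved"]

# The three options from the dilemma, with their (invented) outcomes.
options = {
    "save victim, criminal escapes, officer destroyed":
        {"victim_saved": True,  "criminal_caught": False, "self_preserved": False},
    "catch criminal, victim dies":
        {"victim_saved": False, "criminal_caught": True,  "self_preserved": True},
    "sacrifice self, save victim, criminal killed":
        {"victim_saved": True,  "criminal_caught": False, "self_preserved": False},
}

def score(outcome):
    # A tuple of booleans compares lexicographically: the first goal
    # outweighs all later goals combined.
    return tuple(outcome[goal] for goal in priorities)

# Note: the first and third options tie on (True, False, False); max()
# silently picks the first, exposing exactly the gap the text describes.
best = max(options, key=lambda name: score(options[name]))
print(best)
```

A tie-break here is an ethical judgment the programmer must make in advance, which is precisely why the question of whose ethics get encoded matters.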

The topics, categories, and rankings given by the current sets and modules of algorithms in deep learning, despite producing amazing feats, are still missing too many abstract variables of living human society to make accurate conclusions and decisions. Life is not a game of Go, and AlphaGo cannot give life advice to humans, and probably never will be able to.

“Computer vision models are struggling to appropriately tag depictions of the new scenes or situations we find ourselves in during the COVID-19 era. Categories have shifted. For example, say there’s an image of a father working at home while his son is playing. AI is still categorizing it as ‘leisure’ or ‘relaxation.’ It is not identifying this as ‘work’ or ‘office,’ despite the fact that working with your kids next to you is the very common reality for many families during this time.”

(TechCrunch).

The algorithm of the evolutionary progress of civilizations seems to indicate that all civilizations have a limited lifespan for their rise and fall, and mathematicians and statisticians are trying to create algorithms to calculate just how much longer our own civilization has left before it falls.

“The collapse of complex human societies remains poorly understood and current theories fail to model important features of historical examples of collapse. Relationships among resources, capital, waste, and production form the basis for an ecological model of collapse in which production fails to meet maintenance requirements for existing capital. Societies facing such crises after having depleted essential resources risk catabolic collapse, a self-reinforcing cycle of contraction converting most capital to waste. This model allows key features of historical examples of collapse to be accounted for, and suggests parallels between successional processes in nonhuman ecosystems and collapse phenomena in human societies.”

(Ecoshock.Org) – Highly recommended PDF on The Human Ecology of  Catabolic Collapse!!!

Neuralink as a solution to the failings of A.I. and the Dangers it may present to Humanity.

However, Elon Musk’s Neuralink seems to be the answer, a very ‘Cyberpunk’ solution to the dangers of the rise of A.I., Robotics, and Androids.

The study of Existential Risk is an important one for Humanity to focus on, as we are, in my belief, truly in danger of extinction due to Catabolic Collapse.

Grammarly - Authoring to remove Personal Style on a Global Scale

When it comes to the Poetic Genius and High Prose, I am most certain that the great poets, such as Yeats, Blake, and the Poet Laureate Lord Tennyson, wouldn't make it with the #Grammarly writing assistant. Nor would genial wordsmiths like Stanley Unwin, or slangsters and gangstas. When we come to think about it, the irony is that ‘Grammarly’ isn't even a word anyway! Grammarly is also unethical and deceptive in its corporate attitude, with fake close buttons on its ads that lead to web pages.

I mean, where is the word ‘Grammarly’ to be found in the English dictionary? Ask an NFL player, perhaps?

If everybody used Grammarly to write with, it would make all the authors of the world write as if the same person were writing, and we would have no more Poetic Genius, nor development of the use and meaning of semantics, as the meanings and uses of words change with time, and from region to region.

I an I wanna Know how da gang gonna do wit’ Grammarly. But I am completely aware and deeply understanding of the complete superflous-ness of such a ‘writing assistant’.

The word Superflous, Meaning su·per·flu·ous/so͞oˈpərflo͞oəs/ (I wonder How Grammarly would want me to change that line?). Should we let Grammarly become the only Author behind every writer’s style of expression? Forcing the Human Author to write according to how Grammarly thinks best? Grammarly isn’t even a real word!

I mean Bro, if I-an-I wanna write like-a dis’, den I-an I a gonna write like-a dis, an it-a gonna have it’ own kain’-a Stylee.. itta ding dat allright Mon.. Write how-a you wanna write, an’ use da tings dat ya wanna use ta get ya point across! Hav’ yer own stylee, an do it Original Stylee.


Grammarly is an A.I.-driven authoring software putting the world’s literature in danger by removing personal style on a global scale. Its analytics are mostly buried in search engine results, because Grammarly pays bloggers and micro-influencers to write and create content that buries any bad reviews down to the second or third page of search results.

Bumbaklaat Raasklaat Grammarly! You don’t know how modern punctuation is used by modern humans to express themselves freely, and you want everybody to write the same as the team who programmed Grammarly’s dictionary! A plethora of books and authors, all written by an A.I. algorithm called Grammarly!

Stanley Unwin Intentional Bad Grammar


Unless, of course, you play with the NFL or know how to watch TV instead of read words.

NFL Fans Literary Genius with grammarly

Original Stylee; 5 – Grammarly; 0

William Blake 10 – Grammarly 0

Anybody who uses Facebook will have seen at least one ‘Friendversary’ video made by Facebook, which may or may not seem to have any relevance, but which more often than not tends towards irrelevance. The thing is, it is in truth a video created by an A.I. (Artificial Intelligence) algorithm, designed as a ‘Call to Action’ mechanism (the share button), built specifically to make you share it to your profile or elsewhere within Facebook’s monopolistic network. For Facebook is a network that tries to keep its viewers within its own Matrix (one we would all like to be free of, so click the link in the word matrix to see how). Facebook and similar domains are ever more designed to prevent us, the users, from leaving to visit an external website.

Facebook Friendversary


The Facebook Friendversary video is not visible to anybody unless you share it (which is what makes it a call-to-action banner, designed to influence you to share the A.I.-created FB content).

The idea, I believe, is that Facebook can create masses of automated content and overtake all the other major domains in all types of media (in this case video, which would directly affect YouTube, which does not use A.I. to auto-create its own videos, whereas Facebook does!).

If more types of auto-created content such as Friendversaries are added, and shared as Facebook intends, and hence published and filed within search engine bot databases, the amount of content within Facebook will increase many-fold at an ever increasing ratio. There is much danger in this, for Facebook already prevents you from going to YouTube when watching a shared video within the Facebook mobile app. Instead, it takes you to a Facebook page with the YouTube video embedded as an iframe within Facebook. But hey, one strange thought is: “Wow! What would a Friendversary video look like between two Facebook friends who spent 5 years arguing and insulting each other publicly? And what content would the video contain?”

The Friendversary

Luckily, for now, Artificial Intelligence is Artificial Stupidity, but it could become a Content Creation Virus

‘Facebook Friendversaries’, and auto-created videos and ‘Memories’ albums using ‘Artificial Intelligence’ algorithms, are still in their early days, and have omitted various important factors from their algorithms. For example, they assume that the date a photo was taken has relevance enough to add it to an album of ‘memories’, when in truth we don’t just take photos with our device camera; we also download images to the camera roll from the internet in a browser, and those are dated too. Nor do we only take photos of family; rather, we make photos for work, play, official business, etc.

 

So compilation algorithms for albums and videos like this ‘Facebook Friendversary’ have little chance of gaining any relevance unless they add much more spying on the user to their datamining (which is unethical unless they pay us for the data gathered from us), and also add many more criteria for selecting images, events, and other connections between ‘friends’, plus an algorithm that works out which of the many ‘friends’ we add on Facebook and other networks are true friends and relations, and which are just ‘added as friend’ type unknowns. In its current ‘state of the art’, Artificial Intelligence is in the stone age, and should be seen more as ‘Artificial Stupidity’.

I henceforth declare the danger of artificial intelligence becoming an auto-created content viral phenomenon, taking over the internet and stealing most of the traffic for the big matrix-like self-contained networks, such as Facebook, Google, MSN, Yahoo, and the like.

Below pic: ‘Ascending Chaos – A Collage of Collages‘ (source: gentleice the deptfordian)

I also henceforth predict that social networks will evolve away from flat, A.I.-generated networks on websites in a flat browser like Facebook; rather, the future of social networking will be a different type of platform of a different nature, namely the VR experience.

Many people say that A.I. is destroying many jobs, which it is, but it is also creating new professions. Believe it or not, there are hundreds and thousands of new professions arising as old ones die; transformation is the only constant.

In 1900, something like 86% of America worked the land. But in the present day, something like the same proportion, 86% (a rough memory of a real statistic I learned), of those who once worked in agriculture now work in service industries, whilst farming has become ever more automated.

But the population increased and people are still working, in jobs which did not exist in 1900, while machines now do what people had to do in 1900. So the trick is in SEEING ahead: that taxi drivers won't be needed when Uber has self-driven flying cars, but that the flying car central control office will need co-ordinators to manage the databases and make sure that all lines are working in order.

And to see that jobs like the lawyer profession will become very much needed, because so much technological change means we have to constantly keep up with the tech by writing new laws to cover the legal issues new technology brings with it.

As examples of this, we can already mention Amazon drone delivery, video advertising on YouTube and Facebook, and A.I. screening of live video content to prevent live suicides going viral and similar tragedies. Legal issues arise such as: can we fly drones over borders if no person is in them? How high can a drone fly without registering with the airport flight tower? And so on.

We need to look ahead and see new jobs arising: space miners on asteroids; mathematicians and astrophysicists; geologists for planetary excavations and astro-geology; astro-biologists. We are now traveling to Mars, and we are going to colonize it and mine asteroids. We shall need programmers for the A.I. that does all the dirty work for us, and designers for digital goods like game add-ons and new game levels. As technology in VR and Augmented Reality develops, social networking will also become a VR 3D experience where we meet up like in ‘Ready Player One’, and FB will either be part of that, or die. VR Chat is already here on Steam, and we can meet up there, be who we wanna be, look like we wanna look, and live a fantasy surrogate life. Whether Facebook will be part of that remains to be seen, for they appear to be thinking in Flatland.


What are Algorithms?

Considering myself an autodidact, I taught myself everything that I know apart from the first basics which I learned in school, such as the alphabet, how to read and write, and my first basic math lessons. I do remember enjoying history as they taught it, and I believe some of the things I learned at St. Olave's Boarding School York were the basis of my ability to apply self-learning to educate myself, long after I had left school without a single qualification.

In recent years, through self-education based in web programming, etymology, and lateral thinking, I became very obsessed with educating myself about how Artificial Intelligence may be developed and applied in the present and future, both near and far.

The story of Google DeepMind's AlphaGo beating an 18-time world champion at the world's most complex and difficult boardgame, Go, raised my interest: thinking about how the A.I. learns from its own mistakes, how it predicts, and how in game 4 of the 5 rounds, the Korean master managed to cause a memory overload by making the A.I. look further ahead in the number of moves than it was programmed to do, taxing its own computing power to the point where it became confused.

In my early years, when the rest of the kids were playing basketball or rugby or cricket or whatever, I would go to the library and read astronomy books. Computing was not a topic one could find in school libraries in the mid-seventies, otherwise I might have interested myself in it as much as I now interest myself in algorithms, computing, and Artificial Intelligence, in relation to nature's natural algorithm of self-learning which we call ‘Darwinian Evolutionary Theory’.

I refused to go to school at the age of 13, and didn't go back until the final term of my 15th year. As the exams came up after two years of absence, and being in a Cambridge-based comprehensive state school after having left an Oxford-based Jesuit boarding school in Malta, I walked in with a new tattoo on my arm and my sleeves rolled up, signed the exam sheets, left them empty, and walked out.

Walked Out of the Exams Without Filling in a Single Question and Signed my Name

So what does all of this have to do with algorithms? I hear you asking. All I can say to you in answer is: “absolutely everything”.

Let’s start with the classic scientific definition and current public understanding of the word algorithm in the IT computing world:

An algorithm is a math-based set of instructions which depends on sets of functions, variables, and priorities. An Artificial Intelligence algorithm, however, no longer needs the human to continue teaching it (although humans will interfere and add code to improve the algorithm when a runtime error, inaccuracy, or inefficiency is detected).

Below is an old GCE-level computing algorithm tutorial which, even if you don't understand code and computing, will begin to give you an idea about how an algorithm can be built upon.
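A beginner-level algorithm in that spirit (a simple illustrative sketch, not taken from any particular tutorial) might look like this: finding the highest mark in a list, one comparison at a time.

```python
# Exam marks for a class (invented numbers for illustration).
marks = [52, 78, 61, 95, 47]

highest = marks[0]          # start by assuming the first mark is highest
for mark in marks[1:]:      # examine each remaining mark in turn
    if mark > highest:      # found a bigger one?
        highest = mark      # then remember it instead
print(highest)  # 95
```

Every step is explicit: a starting assumption, a repeated test, and an update rule, which is exactly the shape larger algorithms build upon.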

Perhaps you can then imagine how a point can be reached in the programming, where the machine itself can be set to learn from its own set of varied experiments and attempts to solve or execute programs.

For example, an analytics algorithm could be built with a runtime error log, which would then be combined into the machine learning database after reboot, and the machine would use its ‘memory of the experience’ to avoid making the same mistake twice. But algorithms are not merely computer-based, for if the mathematicians are right, the process of evolution itself uses self-learning algorithms in the same way an A.I. program does.
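A minimal sketch of that idea (the file name and log structure are invented for illustration): failures are written to a log, and on later runs the log is consulted so the same mistake is skipped.

```python
import json
import os

LOG_FILE = "error_log.json"  # invented file name for this sketch

# Start fresh so the demonstration below is deterministic.
if os.path.exists(LOG_FILE):
    os.remove(LOG_FILE)

def load_known_failures():
    """Read the 'memory of the experience' left by previous runs."""
    if os.path.exists(LOG_FILE):
        with open(LOG_FILE) as f:
            return set(json.load(f))
    return set()

def record_failure(action, failures):
    """Log a failed action so later runs can avoid it."""
    failures.add(action)
    with open(LOG_FILE, "w") as f:
        json.dump(sorted(failures), f)

def risky_operation(action):
    # Stand-in for real work: one particular action always fails.
    if action == "divide_by_zero":
        raise RuntimeError("bad action")
    return "ok"

def attempt(action, failures):
    """Skip actions that failed before; log new failures."""
    if action in failures:
        return "skipped (failed previously)"
    try:
        return risky_operation(action)
    except RuntimeError:
        record_failure(action, failures)
        return "failed (logged for next run)"

failures = load_known_failures()
print(attempt("divide_by_zero", failures))  # failed (logged for next run)
print(attempt("divide_by_zero", failures))  # skipped (failed previously)
```

Because the log survives on disk, the "lesson" also survives a reboot, which is the point of the paragraph above.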

 

A further calculation program to analyse alternative possible failures and runtime errors which may occur in similar scenarios would then also be run, and with a series of ifs and whens, buts and thens, the machine would learn to make the most effective decisions.

The next problem in Artificial Intelligence algorithm programming is to decide and understand how to program a set of ethics into the system (see Arthur C. Clarke‘s science fiction series of books, which are of visionary excellence, just like those of Isaac Asimov, whose ‘I, Robot‘ series deals with the 3 laws of robotics and remains a major foundation of the philosophy of most modern Artificial Intelligence programmers up to the present day, despite being science fiction).

Arthur C. Clarke was also one of the world’s greatest astronomical scientists, but he also wrote science fiction novels which have shown great predictive foresight into how the future (our present day) might turn out. Isaac Asimov wrote hard science fiction; along with Robert A. Heinlein and Arthur C. Clarke, Asimov was considered one of the “Big Three” science fiction writers during his lifetime.

So what do mathematical and A.I. Algorithms have to do with Nature and Evolution?


A.I. Interfacing with Humans Controlling the Decisions over all Important Protocols. This is how we eventually circumvent the problem of how A.I. can remain under Human Control

Will the Biological Entity (Humans) merge with Artificial Intelligence?

The answer to this question is very easy: we already are. Many of us speak to Siri, Cortana, Google Assistant, or similar every day for menial questions, calling up data or commanding basic actions, be it on the device within apps, or with smart home hardware in the home.

I can remember seeing various science fiction movies where the protagonists spoke to the ship they were traveling in, or the building they were residing in, and the computer, with a background listener, would perform its duties.

In those days, nobody thought of the security issues which would arise with background-listening microphones, devices, and webcams. Now we stand between a heaven of futuristic technology which could make our lives so much easier (or arguably more complicated), and the hell of a dystopian future with Big Brother-like government agencies and companies spying on every private aspect of one's life, be it physical, medical, mental, habitual, or social behavior. All the data gathered from our actions on Facebook, Google, Twitter, Instagram, Pinterest, and so on is analyzed using A.I. to interpret our behavior patterns using math. This math has proved efficient enough that AlphaGo could beat an 18-time world champion 4 games out of 5.

Will we Blend with Robots AND A.I.?

The answer to this is yes! This is inevitable.

How do social networks use algorithms to process and apply the knowledge gained from our data and behaviours? Well, that's a long topic and needs many more blogposts to cover properly, but I leave you with some visual food for thought below.