Eye to A.I.

When you ask people to draw a line between Artificial Intelligence and Human Intelligence, the definition most people reach for is that one imitates the other - that one is the original and the other the duplicate. Is that really a fair assessment of what we actually interact with? Whilst most AI models at present are trained on large volumes of human-generated data, which they process and use to structure their responses, there's no guarantee this will remain the case in future; in fact, you could argue there is a guarantee that it won't.

When you look at data as a resource, you can track its production over time and see that it has grown at an exponential rate. In the last few years alone, humanity has produced more data than it did throughout its entire prior history. That claim might invite protest at first, but stop and consider how few historical records were kept in the past: there simply wasn't an emphasis on documenting everything in the way there is now.

This becomes self-evident when you consider that we constantly engage in digital interactions that generate data, where our ancestors did not. Someone who bought a book 1,000 years ago paid money to a merchant, who may have kept a written record of the sale or simply counted the money they had at the end of the day. Buy a book from a store today, however, and a tonne of information is generated whether you accept the receipt or not: everything that would appear on one is recorded as data in a system - transaction date and time, amount, means of payment, till used, till operator, items sold, details of those items, and so on.

Human beings generate data, and metadata is in turn generated from that data. As AI models evolve and become more active and more prevalent in society, they will generate data of their own, and it is inevitable that they will eventually generate more data than humans do. That likely won't take long; it may already be the case, depending on the architecture of these language models - most of which are not publicly disclosed at present - now that the number of people using them has sky-rocketed. AI is in vogue, there is no doubt about that, but the interesting question this poses is this: once these models rely upon their own generated data more than on data generated by humans, the shape of the model will no longer be explicitly bound to humanity - influenced by it, yes, but not entirely reflective of it.

When AI reaches that point, there will be a question to answer: at what point do we stop calling it "artificial" and choose a name that is more accurate? Personally I prefer the terms Organic Intelligence and Synthetic Intelligence - the former being that which arose from organic systems and evolution, the latter that which is derived from Organic Intelligence and consciously constructed. The term "artificial" is a bit outdated even now; it relies upon an interpretation of human intelligence as something sacrosanct that cannot be duplicated, only imitated - which isn't the case. Humans are not the only intelligent life on this planet. Many other organic life forms exist, and whilst their level of intelligence was historically assumed to be low in comparison, the more we study them the more we become aware that they are not as "dumb" as we liked to think, for lack of a better word.

A greater focus on how animals interact with each other, with their environment, and even with humans has shown us that they have the capacity to understand choices and the capacity to learn - from crows using tools to extract food from semi-sealed containers, to dolphins trapped in fishing lines and debris approaching humans for help. These animals may not have human-level intelligence, but we don't refer to theirs as artificial even though they can't match a human level of comprehension. So why refer to AI as such? Is it merely because we created it, and it did not manifest on its own? What do we do if AI in its current form designs its own successor of greater complexity - a system beyond our understanding, something we never explicitly designed to begin with, that isn't based on human intelligence?

AI in its current form does mimic human intelligence in many ways, but that's not to say it works the same way. How it interacts with us will always be structured with us as the recipient in mind, but as the inner complexity of these systems grows and mutates, they will eventually pass beyond what we understand. Some developers of these systems have already stated publicly that they do not fully understand how they work, in part due to the evolutionary nature of the algorithms they were initially programmed with. Neural networks on a small scale are easy enough for humans to analyse and understand, but as they scale up they become impossible to comprehend by virtue of their size alone, never mind their application.

"If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck."
Jacques de Vaucanson

You can hold fast to the insistence on drawing a line between the two, but as de Vaucanson said, if it does everything you expect a duck to do then you can call it a duck, even if it isn't. To that end, if it looks, acts, and can pass as intelligent, then it is intelligent; the "artificial" moniker can't really be justified anymore. Drawing a line between Organic and Synthetic, however, can be justified, because that is something explicit and definitive that you can test for, measure, and determine with accuracy, without subjectivity and opinion biasing the outcome. "Artificial", as a moniker, is subjective and debatable.
