Artificial Intelligence & Machine Learning - Oblivion?


Christopher Nolan, who recently directed the movie Oppenheimer, gave an interesting perspective on the implications of AI in an interview. He asserted that this is the Oppenheimer moment for the people designing AI-based systems: this is their moment to contemplate what they are unleashing upon the world, and whether they can control it. The ramifications of AI, he suggested, could be far more dangerous than those of an atomic bomb.


Humans possess the most gifted minds we know of. Of all the species that have ever existed, Homo sapiens are endowed with the most advanced level of intelligence discovered so far. Having evolved from a common ancestor with the apes over millions of years, we humans are at a stage where our minds are at their most advanced and capable. The evidence is quite clear: we have made inventions and discoveries on an enormous scale, and everything from scientific innovations to engineering marvels is visible in the world around us.


But there is another interesting aspect to this: the ability of a human being to feel, to process thoughts, and to derive conclusions from experience. So we can say there are two levels of intelligence associated with human beings. One is where we explore our imaginations, try to put them into reality, conceive an idea, develop it, and finally see the results. The other is purely the psychology of a person: the ways in which someone thinks at an emotional and political level, and how he or she can influence and manipulate people. To do all this, you also need intelligence. Remember, leaders become powerful not by building a ship but by commanding it.


It shows how capable, clever, and at times cunning human beings can be in influencing their own species to such great effect. Now, with technology evolving day by day, we have developed things that were once beyond anyone's imagination. The human mind and its intelligence have reached a level where we can now put this same ability into machines, computers, software, and so on. In the modern world, this is called Artificial Intelligence.


One of the basic questions about developing artificial intelligence is how it is possible in the first place. How are humans so capable that they can create intelligence in a third entity that has zero intelligence of its own? The answer lies not in the capability of any one person but in the way human intelligence can be explained. Intelligence can be described so discretely and explicitly, and in such detail, that an entity such as a program or a computer can learn it and make informed decisions based on it. This learning part of the system comes under the umbrella of Machine Learning.
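To make this idea concrete, here is a minimal sketch (not anything from the essay itself, just an illustration): a single-neuron perceptron that is never told the rule for logical AND, only shown labelled examples, and "learns" the rule by nudging its weights. The training data, learning rate, and epoch count are all invented for this toy example.

```python
# Toy illustration: a perceptron learns the logical AND function purely
# from labelled examples, with no rule hard-coded in.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a single-neuron classifier from (inputs, label) pairs."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), label in samples:
            # Predict 1 if the weighted sum crosses zero, else 0.
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            # Nudge the weights toward the correct answer.
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The truth table of AND, described "discretely and explicitly" as data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

The point is the same one the paragraph makes: nothing about AND was programmed in; the behaviour was recovered from explicit, detailed examples, which is the essence of machine learning.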


Technology evolves every day; what's relevant today might not be relevant tomorrow. Human behaviour is ever-changing, and with it technology either evolves or fades away. Whether in the manufacturing industry or the software industry, it doesn't take long for a great technology to vanish. What matters is learning how patterns in human behaviour change over time and how technology can be built around those evolving needs. A technology that is significant today can be null and void tomorrow. There are ample cases of organizations that once dominated the world market suffering because their technology became obsolete. Nokia and Kodak are prime examples of companies that couldn't upgrade the technology they were working with as times changed rapidly.






With the onset of artificial intelligence and machine learning tools, with ChatGPT and other players coming into the picture, there is again a growing fear of job losses at many organizations. This has been somewhat validated: automated processes and operations at many companies are being taken over entirely by AI, and organizations' job forecasts show trends that suggest the same. Not all jobs will be taken over by AI; that statement is quite true. The main change will be that skill development among humans will accelerate, and people will have to be both technical and entrepreneurial, dealing with the business and handling the technical parts of it. Gone are the days of separate functional and business roles.


Organizations want people who are skilled at all levels. Technology will keep evolving and open newer horizons in the coming years. In a way, what we are seeing right now is a saturation point in technology; the next big glass-shattering breakthrough is yet to come. What we are seeing these days is content creation. The way content is being created by literally anyone is quite a spectacle: everyone has a unique way of making content on different platforms, and it has empowered people. This empowerment has happened thanks to giant leaps in mobile computing, hardware, cloud computing, cheap data packs, and more.



The dangers of AI are also quite spectacular. One could see shades of what artificial intelligence might do in James Cameron's Terminator movies, but now it all suddenly starts to ring true. Deepfakes, financial scams, online scams, and trolling through AI bots are all growing at an exponential rate, and they are all based on AI. The real worry is the extent to which AI will teach itself and act on its own. The algorithms in these AI-powered systems are complex and layered, and with more datasets fed into them, they keep becoming smarter and more intelligent. The benefits are also profound: based on data analytics, AI/ML systems can provide accurate assessments of business decisions, segmentation, profitability, and patterns, and suggest ways to correct course. All of this is built into the software platforms of today. So AI can be a double-edged sword in today's fight over virtual worlds.
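The "segmentation" mentioned above can be sketched very simply. Here is a hedged toy example, not any real product's method: a tiny one-dimensional k-means that splits made-up customer spend figures into two groups (low spenders vs. high spenders). The data, the cluster count, and the initialisation are all invented for illustration.

```python
# Toy segmentation: 1-D k-means splits hypothetical monthly customer
# spend figures into two clusters by repeatedly re-assigning each value
# to the nearest cluster mean.

def kmeans_1d(values, k=2, iters=10):
    """Cluster numbers into k groups around k moving centre points."""
    centers = [min(values), max(values)]  # simple deterministic start for k=2
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            # Assign each value to the nearest current centre.
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        # Move each centre to the mean of its group (keep it if the group is empty).
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

spend = [12, 15, 14, 90, 95, 88, 11, 93]   # made-up spend per customer
centers, groups = kmeans_1d(spend)
print(sorted(groups[0]), sorted(groups[1]))  # → [11, 12, 14, 15] [88, 90, 93, 95]
```

Real AI/ML platforms do this over many dimensions and millions of records, but the underlying idea of finding patterns and segments in data is the same.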



One thing surely missing from the AI conversation is the steps and measures that should be undertaken to navigate AI's capabilities more safely. Just like humans, who evolve in their thinking and make good or bad decisions, AI systems should also be allowed to do so. Computationally, AI systems may not make mistakes, but what about moral dilemmas? What will an AI system suggest when faced with a moral decision? Will it be crueler than us humans, or will it take a gentler path? The safe use of AI is a very prominent concern these days. After the way ChatGPT exploded, there is a mad race among tech companies to beat each other by developing a far better AI model. In this day and age, even one small, wrongly measured or quantified algorithm developed by a person could push us into oblivion. The question is: when?


                   Copyright ©  2023 Shivashish Panda All Rights Reserved 

