It’s time to reframe the AI ethics discussion. Here’s how it works.

The current debate about AI, ethics, and the benefits to our global civilization is intense. The combination of enormous stakes and a complex, rapidly evolving technology has produced a tremendous sense of urgency and passion around this topic.

AI proponents like to portray the technology as a welcome disruptor capable of sparking a worldwide revolution. Meanwhile, critics focus on the dangers of AI superintelligence, complex ethical dilemmas like the classic trolley problem, and the very real consequences of algorithmic bias.

It is all too easy to get caught up in the noise and end up in a situation where the world does not fully benefit from AI development. Instead, we should take a moment to look critically at the various voices competing for our attention in the field of AI ethics.

Some of these views come from companies that recognize they have been too slow to adopt AI. Others come from companies that moved into AI early and exploited the uncertainty and lack of regulation to engage in unethical practices. Finally, in this age of influencers, there are individuals who raise the issue of AI ethics to boost their own brand, often without the qualifications to back it up.

Getting the facts straight

Clearly, this is a minefield, but we must forge on; the discussion is too relevant and important to ignore. With that in mind, consider the following essential facts to help illuminate the debate: the arrival of Artificial General Intelligence (AGI) has been greatly overstated.

AGI is a broader conception of machine intelligence than ordinary AI. It encompasses computers with a wide range of cognitive skills, including the capacity to learn and to plan for the future. It is the real-life realization of the technology depicted in science fiction novels and films, in which computers rival humans in intellect and reasoning.

However, as we’ve learned more about AI over the years, we’ve become less convinced that AGI is close at hand. Instead, nearly all AI systems in use today fall into a subcategory known as machine learning (ML), which is narrowly scoped and learns only by example.
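
To make the "learns only by example" point concrete, here is a minimal sketch of supervised ML; the dataset and model choice are illustrative assumptions, not anything specific to this article:

```python
# Minimal sketch: a supervised ML model acquires its behavior entirely
# from labeled examples. Synthetic data invented for illustration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# "Examples": feature vectors X with known labels y.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # everything the model "knows" comes from these examples

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
# The model has no general reasoning ability; it only captures patterns
# present in its training examples -- the hallmark of narrow ML.
```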

In reality, several of the techniques currently marketed as AI are considerably older than ML, resting on more basic statistical, expert, or logic-based algorithms. Meanwhile, our societal overemphasis on intelligence encourages the personification of AI, which erodes human accountability.

The complexity of ML algorithms varies, and some are more amenable to human interpretation than others. That said, a number of tools and approaches have been developed to probe even the most opaque algorithms and quantify how they respond to various inputs. The difficulty is that some of an algorithm’s stakeholders may find these tools too technical to use.
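
As one example of such tooling, permutation importance (available in scikit-learn) probes a model by perturbing its inputs and measuring how the outputs respond; the model and data below are assumptions made purely for illustration:

```python
# Sketch: probing an opaque ("black box") model by perturbing its inputs.
# Permutation importance shuffles one feature at a time and measures how
# much the model's score degrades -- a model-agnostic sensitivity test.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=8, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```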

When an AI system is not understood well enough to be trusted, it generally should not be used in high-stakes scenarios. In such cases, additional vetting or behavioral guardrails should be put in place to guarantee that the system can be deployed transparently and safely for users and other stakeholders. “The AI is a black box” should never be accepted as a justification in human decision-making.
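
One common guardrail pattern is to act on a model’s prediction only when it is confident enough, and otherwise defer to a human reviewer. The following is a minimal sketch of that idea; the 0.90 threshold and the deferral policy are illustrative assumptions, not prescriptions:

```python
# Sketch of a behavioral guardrail: act on the model's prediction only when
# it is confident enough; otherwise escalate to a human reviewer.
# The 0.90 threshold is an illustrative assumption, not a recommendation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

CONFIDENCE_THRESHOLD = 0.90  # tune per use case and risk tolerance

def guarded_predict(model, x):
    """Return the model's label only at high confidence; otherwise defer."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    confidence = float(probs.max())
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"decision": int(probs.argmax()), "source": "model",
                "confidence": confidence}
    # Low confidence: route to a human instead of acting automatically.
    return {"decision": None, "source": "human_review",
            "confidence": confidence}

# Demo on synthetic data (invented for illustration).
X, y = make_classification(n_samples=200, n_features=6, random_state=2)
model = LogisticRegression(max_iter=1000).fit(X, y)
print(guarded_predict(model, X[0]))
```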

AI is hardly the first technology to offer both high risk and high reward. Beyond contentious moral puzzles like the trolley problem, AI now faces a new set of ethical challenges. These concerns may be quieter or less visible, but their human impact will be far-reaching.

To properly address these concerns, AI will need to be evaluated in a coherent, calm, and comprehensive manner using the systems-safety approach, which identifies safety-related hazards and uses design or procedure to control them. Nuclear power, aviation, and biomedicine, among many other sectors, have matured into safe and dependable industries thanks in large part to the rigorous application of such risk-based approaches.
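
To illustrate what a risk-based review can look like for an ML deployment, here is a simplified sketch of a hazard register; the hazards, scoring scale, and mitigations are invented for illustration and not drawn from any specific standard:

```python
# Simplified sketch of a systems-safety hazard register for an ML deployment.
# Hazards, scores, and mitigations below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Hazard:
    description: str
    severity: int      # 1 (minor) .. 5 (catastrophic)
    likelihood: int    # 1 (rare)  .. 5 (frequent)
    mitigation: str

    @property
    def risk(self) -> int:
        return self.severity * self.likelihood

register = [
    Hazard("Biased predictions for an underrepresented group", 4, 3,
           "Subgroup performance audits before and after deployment"),
    Hazard("Silent accuracy decay as input data drifts", 3, 4,
           "Continuous monitoring with alerting on distribution shift"),
]

# Review hazards in descending risk order, as a safety review would.
for h in sorted(register, key=lambda h: h.risk, reverse=True):
    print(f"risk={h.risk:>2}  {h.description}  ->  {h.mitigation}")
```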

Keeping control over AI development

We need to look at and analyze this technology for what it is. The basic reality is that AI today, and for the foreseeable future, consists of ML-based systems: advanced statistical algorithms governed by code and by humans. These systems can and should be controlled. Risks can be identified, managed, and monitored, turning a seemingly chaotic situation into a mature technology.

The European Union’s most recent proposed rules, for example, take important steps in the right direction by designating high-risk use cases. The data science community aspires to build models that align with societal values and improve outcomes. By standardizing practitioners’ expectations, the EU’s well-considered plan will promote innovation and industrial growth.

Those who claim we are incapable of controlling and responsibly deploying this technology are spreading a falsehood. The AI sector will undoubtedly confront significant challenges as it pushes the technology’s boundaries, today and in the future. But we can meet those challenges with collaboration, transparency, and practicality.