Seoul Music

PLUS: AI, You're Hired! But First, Let's Check for Bias

The Hitchhiker’s Guide to AI

Hello, Wednesday wanderers of the AI cosmos! We're at the halfway point of our weekly space expedition, crossing the event horizon into the unknown realms of AI. If you're feeling like a spaceship running low on fuel, don't worry: we have an arsenal of AI wisdom to recharge your intellectual engines. So let's engage our hyperdrives and continue our journey through the mesmerizing universe of artificial intelligence!

In today’s Tracker:

  • 🧪 Research highlights: An Early Warning System for Cyber Intrusions

  • 🚨 Industry news: Robot Takes the Baton for Orchestral Performance

  • ⚖️ Policy and regulation: NYC's Novel AI Hiring Legislation

  • 🌐 AI and society: Who's Truly Vulnerable in the AI Age?

  • 🧰 Snippets

💻 Meet the New Sentinel, an Early Warning System for Cyber Intrusions

An international team of AI experts from Flinders University and Brazil has developed a groundbreaking model that employs artificial intelligence to detect software viruses, hacking attempts, and general system failures in critical infrastructure such as power, water, and communication networks.

The researchers developed a novel algorithm that is resistant to sensor data inconsistencies and can detect the onset of significant network disruptions.

It could serve as a potent safeguard against equipment malfunctions, supplanting conventional diagnostic methods across a range of crucial industries.

The model, titled 'Cubic Paraconsistent Analyser with Evidence Filter and Temporal Analysis' (or CPAet), incorporates an 'evidence filter' into system diagnostics, weighing contradictory readings by evaluating how much confidence to place in each sensor's data.
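
The announcement doesn't spell out the algorithm, but the general idea of annotating each sensor reading with favorable and unfavorable evidence, discarding readings that contradict themselves, and diagnosing from what remains can be sketched in a few lines of Python. This is a minimal illustration only; the class, thresholds, and decision rule below are our own assumptions, not the researchers' implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SensorReading:
    """One sensor's annotated evidence about a possible fault; both degrees lie in [0, 1]."""
    mu: float   # favorable evidence that a fault is present
    lam: float  # unfavorable evidence (evidence against a fault)

def certainty(r: SensorReading) -> float:
    # Degree of certainty: positive values point toward a fault, negative toward normal operation.
    return r.mu - r.lam

def contradiction(r: SensorReading) -> float:
    # Degree of contradiction: values far from 0 mean the sensor's own evidence is inconsistent.
    return r.mu + r.lam - 1.0

def evidence_filter(readings: List[SensorReading], max_contradiction: float = 0.3) -> List[SensorReading]:
    """Drop readings whose evidence contradicts itself too strongly (the 'evidence filter' idea)."""
    return [r for r in readings if abs(contradiction(r)) <= max_contradiction]

def diagnose(readings: List[SensorReading], fault_threshold: float = 0.4) -> str:
    """Average the certainty of the trusted readings and compare it with a fault threshold."""
    trusted = evidence_filter(readings)
    if not trusted:
        return "insufficient consistent evidence"
    avg = sum(certainty(r) for r in trusted) / len(trusted)
    return "fault suspected" if avg >= fault_threshold else "normal"

# Two sensors agree a fault is likely; a third, self-contradictory reading is filtered out.
print(diagnose([SensorReading(0.9, 0.2), SensorReading(0.8, 0.1), SensorReading(0.9, 0.9)]))
```

A reading whose favorable and unfavorable degrees are both high is treated as contradictory and left out of the diagnosis rather than averaged in, which is the spirit of weighing confidence in the sensor data described above.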

The researchers highlight the potential of artificial intelligence to improve software applications and defect-diagnosis systems, helping prevent errors in complex engineering systems, manufacturing facilities, and other vital infrastructure.

🎻 Robot Takes the Baton for Orchestral Performance in South Korea

At the National Theater of Korea in Seoul, a two-armed android named EveR 6 conducted the country's national orchestra in a first-of-its-kind performance.

Designed by the Korea Institute of Industrial Technology, the humanoid robot wowed the audience with its ability to imitate a conductor's precise movements and control the tempo of the live performance.

According to audience members and human conductor Choi Soo-Yeoul, the robot manages rhythm precisely but cannot yet "listen" and keep the orchestra ready to play together, leaving room for improvement. Song In-ho, an audience member, believes that incorporating artificial intelligence to comprehend and analyze music could considerably improve the robot's capabilities.

The performance exemplifies the potential for robots and humans to coexist and complement each other in artistic disciplines rather than replace one another.

🗽 NYC's Novel AI Hiring Legislation

Effective July 5, New York City will implement a groundbreaking law to combat bias in artificial intelligence (AI) and algorithmic hiring and promotion tools.

Under the law, companies must disclose their use of such tools to candidates and commission annual independent audits for potential bias.
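
The item above doesn't describe how such an audit works; one common starting point used by auditors is comparing selection rates across demographic groups and flagging large gaps. Below is a minimal, hypothetical sketch in Python; the group labels, counts, and four-fifths flag are illustrative assumptions, not requirements quoted from the law.

```python
from typing import Dict, Tuple

def selection_rates(outcomes: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """outcomes maps a demographic category to (candidates selected, candidates assessed)."""
    return {cat: sel / total for cat, (sel, total) in outcomes.items() if total > 0}

def impact_ratios(outcomes: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """Each category's selection rate divided by the highest category's rate (1.0 means parity)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical screening outcomes from an automated hiring tool.
counts = {"group_a": (40, 100), "group_b": (22, 100)}
for cat, ratio in impact_ratios(counts).items():
    flag = "flag for review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{cat}: impact ratio {ratio:.2f} ({flag})")
```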

Despite its progress toward transparency, critics argue that the law falls short of genuinely protecting candidates from bias as AI continues to outpace regulatory efforts.

There are concerns regarding audit loopholes and inadequate protections for certain groups, such as people with disabilities and older employees.

While the law's efficacy remains to be proven, it represents a significant step toward concrete AI regulation in hiring, emphasizing the need for transparency and auditing of AI systems.

🤷🏽‍♂️ Who's Truly Vulnerable in the AI Age?

In the ongoing debate over the ethics of artificial intelligence (AI) and the possibility of "robot rights," researchers Abeba Birhane and Jelle van Dijk argue that the emphasis on granting rights to AI systems distracts from a more pressing ethical issue: the oppressive use of AI against vulnerable populations.

They note that many optimistic predictions about AI attaining a level of consciousness and intelligence equal to or surpassing that of humans, and therefore deserving rights, have not materialized and are based on a flawed understanding of human cognition.

The authors emphasize that humans are not merely sophisticated machines but complex, ever-evolving, embodied beings living within social, cultural, and historical contexts. AI ethics discussions, they argue, should therefore focus on the societal impact of AI technologies rather than on speculative notions of AI personhood.

Their main argument is that AI ethics should be more concerned with preventing the potential oppressive use of AI technology against vulnerable groups, rather than focusing on the issue of robot rights.

🧰 Snippets

Did you miss our last newsletter? Check it out here!

You've got to get up every morning with determination if you're going to go to bed with satisfaction. — George Lorimer

  • 🦸🏽‍♂️ Salley is an AI-powered personalized learning coach that offers tailored coaching and training programs to users.

  • 🧑🏽‍🏫 Twee is an AI-powered tool that simplifies lesson planning for English teachers.

  • 📱 Binko is an advanced language translation and chat app that breaks down language barriers.

If you liked this edition, please share it with a friend or two! You could get some exclusive content in your inbox or even some merch very soon👇🏾

And that's a wrap!

Did we miss anything? Or want to say hey? Hit reply - I'd love to hear from you!

And if you haven’t already, sign up to get this in your inbox daily.