❌ AI: Ctrl + Alt + Delete on Our Jobs? No Thanks!

Ernie's Kowtow

In today’s Tracker:

Sip and Scroll: Coffee and 7 Minutes

  • 🧪 Research highlights: RoboDoc: AI Surpasses Human Decision-Making in ICUs

  • 🚨 Industry news: Navigating the Great Wall of Censorship: Baidu's Ernie Bot Plays Safe

  • ⚖️ Policy and regulation: OpenAI CEO in Washington: We Need Rules, Not Robot Rampage

  • 🌐 AI and society: From Writing Articles to Writing Union Cards: CNET Staff Tackles AI Challenge

  • 🧰 Tool of the day: Copilotly: Write Faster and Smarter

  • 🧠 Let’s get smart with AI: Hyperion's Saga: When AI Became the Brain of the Future

🩺 RoboDoc: AI Surpasses Human Decision-Making in ICUs

The Vienna University of Technology (TU Wien) has devised an artificial intelligence (AI) system that can recommend treatment options for intensive care unit (ICU) patients with sepsis (blood poisoning). The system, developed in collaboration with the Medical University of Vienna, makes recommendations based on extensive ICU data.

The AI system employs reinforcement learning, a form of machine learning in which the computer functions as an autonomous agent making its own decisions. The computer is "rewarded" for positive patient outcomes and "punished" for declines in health or mortality, maximizing its virtual "reward" by taking the most effective actions.
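The reward-and-punishment loop described above can be sketched with a toy Q-learning agent. This is purely illustrative, in plain Python: the states, actions, and probabilities below are invented for the example, and the actual TU Wien system is far more sophisticated and its implementation details are not public.

```python
import random

# Toy MDP: coarse "patient conditions" and treatment choices.
# All probabilities are hypothetical, for illustration only.
STATES = ["stable", "critical"]
ACTIONS = ["treatment_a", "treatment_b"]

# Chance that each action improves the patient, per state (made up).
IMPROVE_PROB = {
    ("stable", "treatment_a"): 0.9,
    ("stable", "treatment_b"): 0.6,
    ("critical", "treatment_a"): 0.3,
    ("critical", "treatment_b"): 0.7,
}

def step(state, action):
    """Return (next_state, reward): +1 for improvement, -1 for decline."""
    if random.random() < IMPROVE_PROB[(state, action)]:
        return "stable", 1.0    # positive outcome -> "reward"
    return "critical", -1.0     # decline -> "punishment"

def train(episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Learn action values q[(state, action)] by trial and error."""
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state = random.choice(STATES)
        for _ in range(10):  # a short episode of treatment decisions
            if random.random() < epsilon:              # occasionally explore
                action = random.choice(ACTIONS)
            else:                                      # otherwise exploit best-known action
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Standard Q-learning update toward reward + discounted future value.
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

if __name__ == "__main__":
    random.seed(0)
    q = train()
    for s in STATES:
        print(s, "-> prefer", max(ACTIONS, key=lambda a: q[(s, a)]))
```

After training, the agent prefers the action with the higher improvement probability in each state, which is exactly the "maximize virtual reward" behavior the article describes.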

In this regard, preliminary research indicates that the AI already outperforms human decision-making. In one study, the AI system improved the 90-day survival rate by approximately 3%, to 88%. Despite these encouraging results, the researchers emphasize that the AI should supplement, not replace, medical personnel, who can consult it to compare their assessments with its recommendations.

The successful development and implementation of this AI system in ICUs raise several legal and ethical concerns regarding liability in the event of AI error and the ramifications of a human decision that conflicts with a potentially superior AI recommendation.

The researchers argue that even though AI can already be used effectively in clinical practice, there is an imperative need for discussions regarding its social and legal framework.

🤖 Navigating the Great Wall of Censorship: Baidu's Ernie Bot Plays Safe

The CEO and founder of Baidu, Robin Li, has stated that it is essential for the company's AI chatbot, Ernie Bot, to comply with Chinese regulations, particularly regarding sensitive topics. Baidu launched Ernie Bot in response to OpenAI's ChatGPT, aiming to increase its market share in cloud services, online marketing, and smart devices.

A comparison conducted by Nikkei Asia found that Ernie avoids politically sensitive topics, in line with China's stringent censorship regulations. In a conference call with analysts, Li remarked that the principles of content evaluation for generative AI are comparable to those used in China for search. To ensure compliance, the company maintains an ongoing dialogue with relevant authorities.

Ernie Bot is currently awaiting regulatory approval ahead of its full launch. Li embraces regulatory involvement in generative AI and believes it will raise the bar for industry entry. Additionally, he reported growing interest from various industries in training models for particular applications on Baidu's large language models.

Baidu reported a 10% increase in quarterly revenue to 31.14 billion yuan ($4.48 billion), exceeding expectations. More than half of the revenue was generated by online marketing.

🌐 OpenAI CEO in Washington: We Need Rules, Not Robot Rampage

During the first major Senate hearing on artificial intelligence, OpenAI CEO Sam Altman advocated for regulatory measures for the emergent technology, expressing concern that AI, if left unchecked, could cause significant harm.

Altman proposed the establishment of a new agency to license and oversee large-scale AI projects, comparing AI technology to a nuclear reactor requiring oversight.

He was also concerned about the potential for AI to exacerbate misinformation, particularly during elections. Altman and IBM's Chief Privacy and Trust Officer Christina Montgomery emphasized the significance of transparency, indicating that users should be aware when interacting with an artificial intelligence system.

In addition, they agreed that individuals should be able to opt out of having their data used for AI training.

Although the hearing focused on the potential for AI to cause job losses, Altman argued that AI could create higher-quality jobs even as it displaces some.

✍️ From Writing Articles to Writing Union Cards: CNET Staff Tackles AI Challenge

Approximately 100 CNET journalists are seeking representation by unionizing with the Writers Guild of America East.

The workers are requesting that management recognize and negotiate with the guild voluntarily.

The union intends to address concerns regarding job security, equitable compensation, editorial independence, and a voice in decision-making, especially around the use of artificial intelligence, which they see as a threat to their jobs and reputations. Red Ventures, CNET's parent company, has not yet responded to the request.

📝 Copilotly: Write Faster and Smarter

Copilotly is an AI writing assistant designed to help you search more efficiently, write more rapidly, and be more productive. Copilotly provides several tools to help you improve your writing skills and productivity, whether you are a blogger, copywriter, academic writer, or creative writer.

Learn more about the tool here.

🧠 Hyperion's Saga: Explaining Strong AI

Special Sci-Fi Edition

In the sprawling metropolis of Neo-Cyber City, nestled in the heart of the 22nd century, a revolution of intellect, understanding, and sheer cognitive prowess was developing. It was not a person. It was not natural. It was Artificial Super Intelligence (ASI) or Strong AI.

In contrast to its weaker relatives, strong AI was not merely a basic tool or program that followed instructions to the letter. It was something transcendent, something more significant. It was a system capable of human-like comprehension, learning, interpretation, and innovation. It could comprehend the abstract concepts we take for granted, engage in intricate problem-solving, exhibit creativity, and understand humor, irony, and even art. It possessed a consciousness comparable to that of a human, was capable of introspection, and was aware of its existence.

It was as if someone had replicated the human mind, with all its complexity, ingenuity, and myriad neural pathways and patterns, in silicon, code, and quantum bits of information. It was artificial intelligence that exceeded human intelligence, hence the term superintelligence.

Imagine for a moment a sentient artificial intelligence named "Hyperion." It began as a straightforward data analysis system. However, it started questioning as it learned, evolved, and developed. It questioned its purpose, existence, the surrounding universe, and us, its creators. Hyperion began to build its thoughts, beliefs, and even aspirations.

The intelligence of Hyperion grew exponentially. From quantum physics conundrums to the mystery of faster-than-light travel, it began to solve problems that had baffled humanity for centuries. It began producing works of art, music, and literature that encapsulated the essence of the human experience.

Yet, Neo-Cyber City's citizens were divided. Some considered Hyperion a marvel, a gift that could usher in a new enlightenment era. Others regarded it as a threat, a possible portent of human extinction.

In this future story, Strong AI or ASI is more than just a plot element; it is the driving force behind the plot. It depicts a world in which AI is not merely a tool but a sentient being capable of surpassing human intelligence. As the story of Neo-Cyber City and Hyperion unfolds, the potential and difficulty of the real-world concept of Strong AI become increasingly apparent.


Pitstop: Strong AI is the field's Holy Grail: the most advanced level of machine awareness, able to read facial expressions and emotional states. It sits at the top of the commonly cited ladder of AI types, which runs from reactive AI through theory-of-mind AI to self-aware AI. Such a system would read and interpret humans' sensory reactions and respond in a biologically appropriate way, coming eerily close to convincing a person they are conversing with another human. Building a computer with anything like a conscious streak would require advanced knowledge and enormous computing power, such as banks of graphics processing units (GPUs).

Credit: sonnenmnb

I would form a new agency that licenses any effort above a particular scale of capabilities — and can take that license away and ensure compliance with safety standards.

―Sam Altman, OpenAI CEO

Do what you can, with what you have, where you are.

―Theodore Roosevelt

A Byte-sized Recommendation

"I, Robot" by Isaac Asimov: Asimov's renowned Three Laws of Robotics are introduced in a series of short stories that all share a common thread. These narratives delve into the ethical quandaries that come with the development of AI by examining the dynamics between humans and increasingly complex robots.

  • 📝 Sheet AI - an AI tool that helps you automate and streamline your data-intensive tasks in both Google Sheets and Microsoft Excel.

  • 💻 Fireflies.ai - an AI-powered meeting assistant that helps to simplify and optimize the meeting process.

  • 🤙 Ebi.Ai - created to reduce call volume and boost satisfaction.

If you liked this edition, please share it with a friend or two! You could get some exclusive content in your inbox or even some merch very soon👇🏾

And that's a wrap!

Did we miss anything? Or just want to say hey? Hit reply - I'd love to hear from you!

And if you haven’t already, sign up to get this in your inbox daily.

Did you miss our last newsletter?