
πŸ™πŸ» In AI We Trust? US Commerce Department Launches Inquiry Into AI Systems βš™οΈ

The Cambrian Explosion of AI: Enabling and Terrifying

In today’s email:

Read time: 4.5 mins

  • πŸ§ͺ Research highlights: Serving Up the Science: How Our Brains React to Table Tennis Opponents

  • 🚨 Industry news: Bugging the Bots: OpenAI Offers Cash Rewards for Vulnerability Reports

  • βš–οΈ Policy and regulation: AI Oversight Under the Microscope: US Inquiry to Produce Policy Recommendations

  • 🌐 AI and society: AI's Emergent Abilities: A Source of Excitement and Anxiety

  • 🧰 Tool of the day: Promptly: Get Your Creativity Flowing with Predefined Prompts

  • ⏰ AI concept of the day: From Strings of Text to Meaningful Insights: The Power of Entity Extraction, the Way Feynman Would Explain It

🧠 Serving Up the Science: How Our Brains React to Table Tennis Opponents πŸ“

The brains of table tennis players react differently to human opponents than machine opponents, according to researchers at the University of Florida.

The researchers examined dozens of hours of play against human players and ball-serving machines. They found that players' neurons fired in alignment with one another when facing human opponents but became desynchronized when facing machines.

The findings imply that training with a machine may not provide the same experience as playing against a real opponent. As robots become more common, understanding our brains' reactions to these differences may aid in making our artificial companions more naturalistic.

πŸ’° Bugging the Bots: OpenAI Offers Cash Rewards for Vulnerability Reports

OpenAI has launched a bug bounty program to find vulnerabilities in its artificial intelligence systems, including its ChatGPT chatbot, and will pay up to $20,000 to people who report problems.

The program, developed in collaboration with Bugcrowd, will pay participants from $200 for low-severity findings up to $20,000 for exceptional discoveries.

The move is intended to develop "safe and advanced AI," and OpenAI believes that transparency and collaboration are critical to that goal.

The Bugcrowd page lists several issues that are not eligible for rewards, such as jailbreak prompts and queries that result in an AI model saying inappropriate things to a user.

πŸ”¬ AI Oversight Under the Microscope: US Inquiry to Produce Policy Recommendations πŸ¦…

The National Telecommunications and Information Administration (NTIA) of the US Commerce Department is investigating how companies and regulators can ensure that artificial intelligence (AI) systems are trustworthy, legal, and ethical.

The investigation will look into the best methods for auditing artificial intelligence systems and eventually produce policy recommendations for the White House and Congress. The inquiry is open to input from businesses, civil society organizations, researchers, and the general public.

As chatbots like ChatGPT become more popular, the scrutiny surrounding AI systems has grown. Without proper oversight, critics argue, AI systems can perpetuate real-world biases, confuse and deceive consumers, spread misinformation, and violate pre-existing laws.

The NTIA's effort is the latest US government initiative to set standards around AI, following previous initiatives such as the "AI Bill of Rights" and voluntary AI guidance from the National Institute of Standards and Technology.

πŸ˜ƒ AI's Emergent Abilities: A Source of Excitement and Anxiety 😟

Recent advances in foundation models based on deep neural networks and self-supervised learning have resulted in an explosion of AI capable of creating anything from videos to proteins to code with uncanny fidelity.

This has lowered the barrier to creating novel artifacts and raised concerns about losing our ability to distinguish between what is real and what is not.

Furthermore, the homogenization of foundation models creates a single point of failure that can cause harm to a plethora of downstream applications. It is critical to benchmark these models to better understand their capabilities and limitations, guide policymaking, and address concerns about their emergent behavior and potential risks.

The HELM (Holistic Evaluation of Language Models) project was created to benchmark over 30 prominent language models against various scenarios and metrics to elucidate their capabilities and risks, with the community encouraged to contribute.

PS: The Cambrian Period is significant in the history of life on Earth because it is when most of the major animal groups first appear in the fossil record. Because of the short period over which this diversity of forms occurs, this event is sometimes called the "Cambrian Explosion."

✍️ Promptly: Get Your Creativity Flowing with Predefined Prompts

Promptly is an AI writing tool that helps people create high-quality content quickly. The application provides predefined prompts and ideas to guide users through the writing process, helping them generate compelling material.

Learn more about the tool here.

✨ From Strings of Text to Meaningful Insights: The Power of Entity Extraction, the Way Feynman Would Explain It

Well, you see, entity extraction is a technique used in natural language processing. It involves automatically identifying important information like names, dates, locations, and organizations from unstructured text data.

The idea is to transform this data into a structured format that machines can quickly analyze and process. This is important because it allows us to make sense of large amounts of unstructured text data.

To perform entity extraction, we use a variety of methods like rule-based systems, statistical models, and machine learning algorithms.

Essentially, we're teaching machines to recognize and understand the important entities within text data.
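To make the rule-based approach concrete, here is a minimal Python sketch of an entity extractor. The patterns and the small organization list are invented for this example, not taken from any particular system; real pipelines typically rely on statistical or machine-learned models instead of hand-written rules.

```python
import re

# Toy rule-based entity extractor: each entity type is defined by a
# hand-written regex. The ORG pattern is a tiny fixed gazetteer chosen
# purely for illustration.
PATTERNS = {
    "DATE": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),        # e.g. 2023-04-11
    "MONEY": re.compile(r"\$\d[\d,]*(?:\.\d{2})?"),      # e.g. $20,000
    "ORG": re.compile(r"\b(?:OpenAI|Bugcrowd|NTIA)\b"),  # example org names
}

def extract_entities(text):
    """Return a list of (entity_type, matched_text) pairs found in text."""
    entities = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            entities.append((label, match.group()))
    return entities

sample = "On 2023-04-11, OpenAI partnered with Bugcrowd to pay up to $20,000."
print(extract_entities(sample))
```

The output is structured data, a list of labeled spans, which is exactly the transformation described above: unstructured text in, machine-readable entities out.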

PS: Richard Feynman, a Nobel Prize-winning physicist, invented the Feynman Technique. Feynman was well-known for his ability to explain complex concepts in simple terms, and his technique has since become a popular learning strategy used by people worldwide.

🐦 #SocialScene

πŸ‘” Expert advice of the day

β€”David Talby, CTO, John Snow Labs

We can expect to see a few major AI trends in 2023, and two to watch are responsible AI and generative AI. Responsible or ethical AI has been a hot-button topic, but we'll see it move from concept to practice next year. Smarter technology and emerging legal frameworks around AI are also steps in the right direction. The AI Act, for example, is a proposed, first-of-its-kind European law set forth to govern the risk of AI use cases. Similar to GDPR for data usage, the AI Act could become a baseline standard for responsible AI and aims to become law next spring. This will have an impact on companies using AI worldwide.

🧠 #MindsetMaze

β€”J. K. Rowling

It is our choices that show what we truly are, far more than our abilities.

πŸ€ͺ Meme of the day

Source: chatgpttricks

πŸ”₯ TRENDING TOOLS πŸ”₯

  • ✍🏾 WriteSonic - an online writing platform that provides various tools to help writers create, share, and publish their work.

  • πŸ’» Tavus - a platform pioneering immersive, hyper-personalized videos that look and sound exactly like you, at a scale no other platform matches. Give your audience an experience to remember with real-time videos made just for them.

  • πŸ“ ContentFries - a content marketing platform that helps businesses and entrepreneurs create, optimize, and manage content campaigns.

If you liked this edition, please share it with a friend or two! You could get some exclusive content in your inbox or even some merch very soon πŸ‘‡πŸΎ

And that's a wrap!

Did we miss anything? Or want to say hey? Hit reply - I'd love to hear from you!

And if you haven’t already, sign up to get this in your inbox daily.