
🔊🦓 When I Move You Move…Just Like That…Hey AI Bring That Back 🦒🐮

Large Language Models and the Economy

In today’s email:

  • 🔬 Research highlights: I Like the Way You Move…I Like That…Say AI

  • 🚨 Industry news: OrQA - The Facial Recognition Software for Organs

  • ⏰ AI Startup Idea of the Day: SunnyAI, Inc.

  • 🌐 AI and society: From Words to Wealth - How AI Language Models Can Inform Economic Insights

🎧 I Like the Way You Move…I Like That…Say AI

Let's pretend for a second that we're watching a zebra graze while on safari. The animal looks away for a moment, and then we see it lower its head and sit down. But what happened in between? Computer scientists at the University of Konstanz's Centre for the Advanced Research of Collective Behaviour have developed a method of encoding an animal's pose and appearance that can reconstruct the intermediate motions most likely to have occurred.

The complexity of images is a major hurdle for computer vision systems: a giraffe alone can strike a dizzying variety of poses. On a safari that hardly matters, but for studying group behavior the ability to reconstruct entire motion sequences is often essential. This is where the new "neural puppeteer" model from computer science comes into play.

Bastian Goldlücke, a professor of computer vision at the University of Konstanz, notes that one goal in computer vision is to describe the enormously complex space of images by encoding as few characteristics as possible. Until now, the skeleton has been the most common such representation. In a recent article presented at the 16th Asian Conference on Computer Vision, Goldlücke, Urs Waldmann, and Simon Giebenhain describe a neural network model that represents motion sequences and renders the full appearance of animals from any viewpoint using only a few key points. The 3D perspective allows for greater flexibility and accuracy than previous skeletal models.

The objective was to predict 3D key points and to track them independently of texture, explains doctoral researcher Urs Waldmann. The team therefore developed an AI system that uses 3D key points to predict silhouette images from any viewing angle. The mapping also runs in reverse, so key points can be computed back from silhouette images. From the key points, the system can then determine the motions most likely to follow. The silhouette itself is sometimes crucial: from skeletal points alone you could not tell whether the animal you were looking at was well fed or close to starving.
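To make the keypoint-to-silhouette idea concrete, here is a minimal sketch in Python. It is not the authors' "neural puppeteer" architecture; the layer sizes, the 19 key points, the 64×64 silhouette resolution, and the 3-value view vector are all illustrative assumptions.

```python
# A toy model: encode a handful of 3D key points plus a viewing direction,
# and decode a single-channel silhouette image from that viewpoint.
import torch
import torch.nn as nn

class KeypointToSilhouette(nn.Module):
    def __init__(self, num_keypoints: int = 19, image_size: int = 64):
        super().__init__()
        self.image_size = image_size
        # Encode the flattened 3D key points together with a view vector.
        self.encoder = nn.Sequential(
            nn.Linear(num_keypoints * 3 + 3, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
        )
        # Decode a silhouette with values in [0, 1].
        self.decoder = nn.Sequential(
            nn.Linear(256, image_size * image_size),
            nn.Sigmoid(),
        )

    def forward(self, keypoints_3d: torch.Tensor, view_direction: torch.Tensor) -> torch.Tensor:
        # keypoints_3d: (batch, num_keypoints, 3); view_direction: (batch, 3)
        x = torch.cat([keypoints_3d.flatten(1), view_direction], dim=1)
        return self.decoder(self.encoder(x)).view(-1, 1, self.image_size, self.image_size)

model = KeypointToSilhouette(num_keypoints=19)
silhouette = model(torch.randn(1, 19, 3), torch.randn(1, 3))
print(silhouette.shape)  # torch.Size([1, 1, 64, 64])
```

The inverse direction described in the article (silhouettes back to key points) would be a second network trained alongside this one.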

The ultimate aim is to implement the system on as much data as possible relating to wild animals.

They began by trying to predict the movements of human, pigeon, giraffe, and cow silhouettes. According to Waldmann, human subjects are frequently used as test cases in computer science. Colleagues in the Cluster of Excellence work with pigeons, whose delicate claws are difficult to capture. There was plenty of cow data for the models, and Waldmann was keen to tackle the giraffe's long neck. The team generated silhouettes from a small number of key points, anywhere from 19 to 33.

The time has come for the computer scientists to put their work into practice: in the future, data on insects and birds will be collected in the University of Konstanz's Imaging Hangar, its largest laboratory for the study of collective behavior, where environmental factors such as lighting and background are easier to control than in the field. Eventually, the team hopes to train the model on data from as many species of wild animals as possible in order to learn more about their behavior.

OrQA - The Facial Recognition Software for Organs 🧠

With the advent of a novel method for gauging the quality of donated organs, the transplantation system stands to gain a revolutionary boost that might save hundreds of lives and thousands of pounds.

The National Institute for Health and Care Research (NIHR) has invested over $1 million in a new tool called Organ Quality Assessment (OrQA). It uses AI to evaluate an organ's condition in much the same way that facial recognition software analyses faces.

Up to 100 additional liver transplants and 200 additional kidney transplants might be performed each year in the United Kingdom if this technology were widely used.

The ultimate objective of OrQA is to promote patient access to life-saving organ transplantation, which has been shown to boost health outcomes and lengthen patient life expectancy.

We're building a deep machine learning system that will be trained on hundreds of images of human organs so it can assess donor organ images more accurately than humans can.

In the near future, surgeons will be able to snap a photo of a given organ, upload it to OrQA, and get advice on how best to use it.
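The actual OrQA system is not public, so the sketch below is only a hedged illustration of what such an image-grading model might look like: a pretrained backbone fine-tuned to score organ photos. The grade labels and preprocessing are assumptions.

```python
# Illustrative sketch: transfer learning for grading donor-organ images.
import torch
import torch.nn as nn
from torchvision import models

NUM_GRADES = 3  # assumed labels, e.g. "transplant", "borderline", "decline"

# Pretrained ImageNet backbone with a new classification head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_GRADES)

def grade_organ(image_tensor: torch.Tensor) -> int:
    """Return the index of the most likely quality grade for one image.

    image_tensor: a (3, 224, 224) tensor, already normalized as the backbone expects.
    """
    backbone.eval()
    with torch.no_grad():
        logits = backbone(image_tensor.unsqueeze(0))
    return int(logits.argmax(dim=1))
```

In practice the head would be fine-tuned on graded organ photographs before any prediction is trusted.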

☀️ SunnyAI, Inc.

This AI-powered program predicts the output of solar panels from weather data and other factors. More precise forecasts of solar energy production could help power firms make better-informed decisions about energy generation and distribution, and people with solar panels on their property could use the app to cut their energy consumption and save money.
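As a rough sketch of the forecasting idea (this startup doesn't exist, so everything here is assumed): a gradient-boosted regressor trained on historical weather features and measured panel output. The CSV file and column names are hypothetical.

```python
# Predict solar panel output (kWh) from tabular weather features.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("solar_history.csv")  # hypothetical historical data
features = ["irradiance_wm2", "temperature_c", "cloud_cover_pct", "panel_tilt_deg"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["output_kwh"], test_size=0.2, random_state=0
)

model = GradientBoostingRegressor()
model.fit(X_train, y_train)
print("MAE (kWh):", mean_absolute_error(y_test, model.predict(X_test)))
```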

Would you bet your bottom dollar on this startup?

Disclaimer: This startup does not exist; any match to an existing entity is completely coincidental.

💸 From Words to Wealth - How AI Language Models Can Inform Economic Insights 📈

A veteran scholar in the field of artificial intelligence has urged economists to embrace the coming large language model revolution as "a new wild west" for innovation and experimentation, arguing that models such as OpenAI's ChatGPT can provide economists with genuinely new insights.

According to economist César A. Hidalgo, "there is plenty that economics can learn from LLMs, not by speaking with the LLMs, but by dissecting how they work." Hidalgo is a professor at the Corvinus University Institute for Advanced Studies and at the University of Toulouse's Centre for Collective Learning in Artificial and Natural Intelligence (ANITI). After all, LLMs are built on mathematics powerful enough to simulate language, and insight into the mechanics of such models could give economists fresh ideas.

Hidalgo's explanation of how these models work starts with a simple language-generating model based on the frequency with which one word follows another in a large body of text. These word pairs are called bigrams, or 2-grams. While this approach may not produce grammatically correct sentences, it can still recognize that "brown dog" is a more common bigram than "dog brown", because adjectives usually come before nouns in English.
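For the curious, here is a toy version of that bigram model in Python; the tiny corpus is made up for illustration.

```python
# Count how often each word follows another, then sample text from the counts.
import random
from collections import Counter, defaultdict

corpus = "the brown dog chased the brown cat and the cat ran".split()

follow_counts = defaultdict(Counter)  # follow_counts[w][next_word] = frequency
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        choices = follow_counts[words[-1]]
        if not choices:
            break
        nxt, = random.choices(list(choices), weights=choices.values())
        words.append(nxt)
    return " ".join(words)

print(generate("the"))
# "brown dog" shows up, "dog brown" never does, because only the former
# appears in the corpus the counts were built from.
```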

More generally, n-gram models estimate the likelihood of a word based on the words or tokens that precede it. The catch is that the number of possible n-grams explodes as the vocabulary and n grow: a corpus with a 10,000-word vocabulary has 100 million possible 2-grams, a trillion possible 3-grams, and roughly 10^72 possible 18-grams, more than the number of atoms that make up the Earth.
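The arithmetic behind those figures is easy to check, assuming a vocabulary of 10,000 words:

```python
# The number of possible n-grams for a vocabulary of V words is V ** n.
vocab_size = 10_000
for n in (2, 3, 18):
    count = vocab_size ** n
    print(f"{n}-grams: 10^{len(str(count)) - 1} possibilities")
# 2-grams:  10^8   (100 million)
# 3-grams:  10^12  (a trillion)
# 18-grams: 10^72  (far more than the number of atoms in the Earth)
```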

LLMs sidestep this problem by using neural networks to approximate a function that describes these word sequences with a comparatively small number of parameters. Even models approaching a trillion parameters are tiny next to the practically limitless n-grams of Borges' library. It is this compression that makes LLMs useful across a wide range of natural language processing applications.

The eventual result is models that appear to know things the way humans do. According to Hidalgo, LLMs "know" that tea and coffee are interchangeable because they have learned that these terms frequently appear in the same contexts. These models build the representations needed to construct language by treating words not as independent objects but as nodes in a network.
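A minimal illustration of that "same contexts, similar representation" idea, using raw co-occurrence counts rather than a trained LLM; the three-sentence corpus and the window size are made-up assumptions.

```python
# Words that appear in similar contexts end up with similar vectors.
from collections import defaultdict

import numpy as np

sentences = [
    "i drink hot tea every morning",
    "i drink hot coffee every morning",
    "the dog sleeps on the sofa",
]
vocab = sorted({w for s in sentences for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}

# Describe each word by the words that occur within a window of 2 around it.
vectors = defaultdict(lambda: np.zeros(len(vocab)))
for s in sentences:
    words = s.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if j != i:
                vectors[w][index[words[j]]] += 1

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["tea"], vectors["coffee"]))  # high: shared contexts
print(cosine(vectors["tea"], vectors["dog"]))     # low: disjoint contexts
```

Real LLMs learn dense embeddings rather than raw counts, but the underlying intuition is the same.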

The economy is complex. Therefore, we need a sophisticated approach. In the same way that multiple words can interact in a text to create a more profound meaning, different individuals and resources in an economy create a more complex whole. Although they can be sorted into standard categories like “capital” and “labor” or “agriculture,” “service,” and “manufacturing,” these divisions do not do justice to the true complexity of the economy.

Similar to how models of language built around the concepts of nouns, verbs, and grammar have limitations, models of the economy built around broad categories of economic activity are also inadequate. Due to the economy's complexity, a simplified strategy that ignores interconnections between factors would not suffice.

"What LLMs teach us is that there is a limit to our ability to capture the nuance of the universe using established categories and deductive logic," explains Hidalgo. To get down to the subtleties, we need a set of mathematical tools to help us capture systems more precisely.

It is imperative that economists and computer scientists alike embrace this revolutionary shift in methodology. Creativity and experimentation have entered a new frontier.

Meme of the day

🔥 TRENDING TOOLS 🔥

  • 🧠 Copy.ai - Brainstorm topics, generate creative prompts, and find keywords

  • ✍️ Grammarly - Correct grammar and writing mistakes

  • 📋 HypotenuseAI - Let AI write your content

  • ✂️ Upword.ai - Summarize long-form content so it's easier to digest